ELK 6.7.2 No permissions for [indices:data/read/search] on .kibana_task_manager index

Hi,

I am trying to upgrade my Elastic 6.6.2 cluster to Elastic 6.7.2 and am getting errors about kibanaserver not having permissions for [indices:data/read/search] on the .kibana_task_manager index. The end result is that the Elasticsearch nodes form a cluster without issue, but the Kibana service fails to start after the upgrade.

Elasticsearch log:
[2019-06-05T12:05:32,375][INFO ][c.f.s.p.PrivilegesEvaluator] [ip-xx-xx-xx-xxx] No index-level perm match for User [name=kibanaserver, roles=, requestedTenant=null] Resolved [aliases=, indices=[.kibana_task_manager], allIndices=[.kibana_task_manager], types=[_doc], originalRequested=[.kibana_task_manager], remoteIndices=] [Action [indices:data/read/search]] [RolesChecked [sg_kibana_server]]
[2019-06-05T12:05:32,376][INFO ][c.f.s.p.PrivilegesEvaluator] [ip-xx-xx-xx-xxx] No permissions for [indices:data/read/search]

Kibana log:
{"type":"log","@timestamp":"2019-06-05T17:05:26Z","tags":["error","task_manager"],"pid":26788,"message":"Failed to poll for work: [security_exception] no permissions for [indices:data/read/search] and User [name=kibanaserver, roles=, requestedTenant=null] :: {"path":"/.kibana_task_manager/_doc/_search","query":{"ignore_unavailable":true},"body":"{\"query\":{\"bool\":{\"must\":[{\"term\":{\"type\":\"task\"}},{\"bool\":{\"must\":[{\"terms\":{\"task.taskType\":[\"maps_telemetry\",\"vis_telemetry\"]}},{\"range\":{\"task.attempts\":{\"lte\":3}}},{\"range\":{\"task.runAt\":{\"lte\":\"now\"}}},{\"range\":{\"kibana.apiVersion\":{\"lte\":1}}}]}}]}},\"size\":10,\"sort\":{\"task.runAt\":{\"order\":\"asc\"}},\"seq_no_primary_term\":true}","statusCode":403,"response":"{\"error\":{\"root_cause\":[{\"type\":\"security_exception\",\"reason\":\"no permissions for [indices:data/read/search] and User [name=kibanaserver, roles=, requestedTenant=null]\"}],\"type\":\"security_exception\",\"reason\":\"no permissions for [indices:data/read/search] and User [name=kibanaserver, roles=, requestedTenant=null]\"},\"status\":403}"}"}

In researching this issue, it seems the fix is to set searchguard.dynamic.kibana.do_not_fail_on_forbidden: true in sg_config.yml, which prevents the service from dying when it encounters this error. Is that still the right approach in this scenario, or is there a different fix?

elasticsearch.yml:
# Generated by chef
#
---
cluster:
  name: elk_test_cluster
  routing:
    allocation:
      allow_rebalance: always
      disk:
        watermark:
          low: 90%
          high: 95%
path:
  data: "/data"
  logs: "/var/log/elasticsearch/"
network:
  host:
  - 192.168.10.10
  - localhost
http:
  port: '9200'
xpack:
  ml:
    enabled: false
  security:
    enabled: false
  http:
    ssl:
      verification_mode: none
  monitoring:
    enabled: true
  graph:
    enabled: true
  watcher:
    enabled: true
  notification:
    email:
      account:
        smtp_account:
          profile: standard
          smtp:
            host: localhost
searchguard:
  restapi:
    roles_enabled:
    - sg_all_access
  ssl:
    transport:
      pemcert_filepath: x-pack/elasticsearch.crt
      pemkey_filepath: x-pack/elasticsearch.key
      pemtrustedcas_filepath: x-pack/ca.crt
      enforce_hostname_verification: false
      resolve_hostname: false
    http:
      enabled: true
      pemcert_filepath: x-pack/elasticsearch.crt
      pemkey_filepath: x-pack/elasticsearch.key
      pemtrustedcas_filepath: x-pack/ca.crt
  enterprise_modules_enabled: false
  enable_snapshot_restore_privilege: true
  authcz:
    admin_dn:
    - CN=admin.elastictest.com,OU=Ops,O=elastictest Com\, Inc.,DC=elastictest,DC=com
  cert:
    oid: 1.2.3.4.5.5
node:
  name: test-master-centos-7
  master: true
  data: false
  ingest: false
thread_pool:
  bulk:
    queue_size: 1000
  search:
    queue_size: 10000
bootstrap:
  system_call_filter: false
gateway:
  expected_master_nodes: 1
discovery:
  zen:
    ping:
      unicast:
        hosts:
        - 192.168.10.10
    minimum_master_nodes: 1

kibana.yml
# Generated by chef
#
---
server:
  host: 0.0.0.0
  ssl:
    enabled: true
    key: "/etc/kibana/kibana.key"
    certificate: "/etc/kibana/kibana.crt"
elasticsearch:
  url: https://localhost:9200
  username: kibanaserver_user
  password: kibanaserver_password
  ssl:
    certificateAuthorities: "/etc/kibana/ca.crt"
    key: "/etc/kibana/kibana.key"
    certificate: "/etc/kibana/kibana.crt"
    verificationMode: none
  requestHeadersWhitelist:
  - authorization
  - x-forwarded-for
  - x-forwarded-by
  - x-proxy-user
  - x-proxy-roles
xpack:
  security:
    enabled: false
  monitoring:
    enabled: true
  graph:
    enabled: true
  spaces:
    enabled: false
  infra:
    enabled: false
  reporting:
    enabled: true
    encryptionKey: bySecureCloud
    csv:
      maxSizeBytes: '20971520'
    kibanaServer:
      port: '5601'
      protocol: https
logging:
  dest: "/var/log/kibana/kibana.log"
  verbose: false
pid:
  file: "/var/run/kibana/kibana.pid"
searchguard:
  allow_client_certificates: true
  basicauth:
    enabled: false
  cookie:
    secure: true
    password: somereallylongpasswordwithmorethanthirtytwocharactersinit
console.proxyConfig:
- ssl.verify: false

sg_action_groups.yml
UNLIMITED:
  readonly: true
  permissions:
    - "*"

###### INDEX LEVEL ######

INDICES_ALL:
  readonly: true
  permissions:
    - "indices:*"

# for backward compatibility
ALL:
  readonly: true
  permissions:
    - INDICES_ALL

MANAGE:
  readonly: true
  permissions:
    - "indices:monitor/*"
    - "indices:admin/*"

CREATE_INDEX:
  readonly: true
  permissions:
    - "indices:admin/create"
    - "indices:admin/mapping/put"

MANAGE_ALIASES:
  readonly: true
  permissions:
    - "indices:admin/aliases*"

# for backward compatibility
MONITOR:
  readonly: true
  permissions:
    - INDICES_MONITOR

INDICES_MONITOR:
  readonly: true
  permissions:
    - "indices:monitor/*"

DATA_ACCESS:
  readonly: true
  permissions:
    - "indices:data/*"
    - CRUD

WRITE:
  readonly: true
  permissions:
    - "indices:data/write*"
    - "indices:admin/mapping/put"

READ:
  readonly: true
  permissions:
    - "indices:data/read*"
    - "indices:admin/mappings/fields/get*"

DELETE:
  readonly: true
  permissions:
    - "indices:data/write/delete*"

CRUD:
  readonly: true
  permissions:
    - READ
    - WRITE

SEARCH:
  readonly: true
  permissions:
    - "indices:data/read/search*"
    - "indices:data/read/msearch*"
    - SUGGEST

SUGGEST:
  readonly: true
  permissions:
    - "indices:data/read/suggest*"

INDEX:
  readonly: true
  permissions:
    - "indices:data/write/index*"
    - "indices:data/write/update*"
    - "indices:admin/mapping/put"
    - "indices:data/write/bulk*"

GET:
  readonly: true
  permissions:
    - "indices:data/read/get*"
    - "indices:data/read/mget*"

###### CLUSTER LEVEL ######

CLUSTER_ALL:
  readonly: true
  permissions:
    - "cluster:*"

CLUSTER_MONITOR:
  readonly: true
  permissions:
    - "cluster:monitor/*"

CLUSTER_COMPOSITE_OPS_RO:
  readonly: true
  permissions:
    - "indices:data/read/mget"
    - "indices:data/read/msearch"
    - "indices:data/read/mtv"
    - "indices:admin/aliases/exists*"
    - "indices:admin/aliases/get*"
    - "indices:data/read/scroll"

CLUSTER_COMPOSITE_OPS:
  readonly: true
  permissions:
    - "indices:data/write/bulk"
    - "indices:admin/aliases*"
    - "indices:data/write/reindex"
    - CLUSTER_COMPOSITE_OPS_RO

MANAGE_SNAPSHOTS:
  readonly: true
  permissions:
    - "cluster:admin/snapshot/*"
    - "cluster:admin/repository/*"

sg_config.yml
# Generated by chef
#
---
searchguard:
  dynamic:
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: true
        internalProxies: ".*"
        remoteIpHeader: x-forwarded-for
        proxiesHeader: x-forwarded-by
        trustedProxies: ".*"
    authc:
      proxy_auth_domain:
        http_enabled: true
        order: 0
        http_authenticator:
          type: proxy
          challenge: false
          config:
            user_header: x-proxy-user
            roles_header: x-proxy-roles
        authentication_backend:
          type: noop
      basic_internal_auth_domain:
        http_enabled: true
        order: 1
        http_authenticator:
          type: basic
          challenge: true
        authentication_backend:
          type: intern

sg_internal_users.yml
# This is the internal user database
# The hash value is a bcrypt hash and can be generated with plugin/tools/hash.sh

#password is: admin
admin:
  readonly: true
  hash: $2y$12...
  roles:
    - admin

logstash:
  hash: $2y$12...
  roles:
    - logstash
    - xp_monitoring

kibanaserver:
  readonly: true
  hash: $2y$12...

curator:
  hash: $2y$12...
  roles:
    - admin

snapshotrestore:
  hash: $2y$12...
  roles:
    - snapshotrestore

sg_roles_mapping.yml
# Generated by chef
#
# In this file users, backendroles and hosts can be mapped to Search Guard roles.
# Permissions for Search Guard roles are configured in sg_roles.yml

sg_all_access:
  readonly: true
  backendroles:
    - admin
    - dc_power_user

sg_logstash:
  backendroles:
    - logstash

sg_kibana_server:
  readonly: true
  users:
    - kibanaserver

sg_kibana_user:
  backendroles:
    - cloud_power
    - skyblue_power_user
    - mixology_power_user

sg_alerting:
  backendroles:
    - cloud_power
    - skyblue_power_user
    - mixology_power_user

sg_curator:
  users:
    - curator

sg_snapshot_restore:
  users:
    - admin

sg_manage_snapshots:
  readonly: true
  backendroles:
    - snapshotrestore

sg_xp_monitoring:
  backendroles:
    - xp_monitoring

sg_roles.yml
# Generated by chef
#
#<sg_role_name>:
#  cluster:
#    - '<permission>'
#  indices:
#    '<indexname or alias>':
#      '<type>':
#        - '<permission>'
#      _dls_: '<dls query>'
#      _fls_:
#        - '<field>'
#        - '<field>'

# When a user makes a request to Elasticsearch, the following roles will be evaluated to see if the user has
# permissions for the request. A request is always associated with an action and is executed against an index (or alias)
# and a type. If a request is executed against all indices (or all types), the asterisk ('*') is needed.
# Every role a user has will be examined to see if it allows the action against an index (or type). At least one role must match
# for the request to be successful. If no role matches, the request will be denied. Currently a match must happen within
# one single role - that means that permissions cannot span multiple roles.

# For <permission>, <indexname or alias> and <type>, simple wildcards and regular expressions are possible.
# An asterisk (*) will match any character sequence (or an empty sequence).
# A question mark (?) will match any single character (but NOT an empty character).
# Example: '*my*index' will match 'my_first_index' as well as 'myindex' but not 'myindex1'
# Example: '?kibana' will match '.kibana' but not 'kibana'

# To use a full-blown regex, prepend and append a '/' to use regex instead of simple wildcards:
# '/<java regex>/'
# Example: '/\S*/' will match any non-whitespace characters

# Important:
# Index, alias or type names cannot contain dots (.) in the <indexname or alias> or <type> expression.
# The reason is that we currently parse the config file into an Elasticsearch settings object, which cannot cope with dots in keys.
# Workaround: Just configure something like '?kibana' instead of '.kibana' or 'my?index' instead of 'my.index'
# This limitation will likely be removed with Search Guard 6

# DLS (Document level security) - NOT FREE FOR COMMERCIAL
# http://docs.search-guard.com/v6/document-level-security

# FLS (Field level security) - NOT FREE FOR COMMERCIAL
# http://docs.search-guard.com/v6/field-level-security

# Kibana multitenancy - NOT FREE FOR COMMERCIAL
# http://docs.search-guard.com/v6/kibana-multi-tenancy

# Allows everything, but no changes to searchguard configuration index
sg_all_access:
  readonly: true
  cluster:
    - UNLIMITED
  indices:
    '*':
      '*':
        - UNLIMITED
  tenants:
    admin_tenant: RW

# Read all, but no write permissions
sg_readall:
  readonly: true
  cluster:
    - CLUSTER_COMPOSITE_OPS_RO
  indices:
    '*':
      '*':
        - READ

# Read all and monitor, but no write permissions
sg_readall_and_monitor:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS_RO
  indices:
    '*':
      '*':
        - READ

# For users which use kibana, access to indices must be granted separately
sg_kibana_user:
  readonly: true
  cluster:
    - MONITOR
    - CLUSTER_COMPOSITE_OPS
    - "indices:data/read/scroll*"
  indices:
    '?kibana':
      '*':
        - MANAGE
        - INDEX
        - READ
        - DELETE
    '?kibana-6':
      '*':
        - MANAGE
        - INDEX
        - READ
        - DELETE
    '?kibana_*':
      '*':
        - MANAGE
        - INDEX
        - READ
        - DELETE
    '?tasks':
      '*':
        - INDICES_ALL
    '?management-beats':
      '*':
        - INDICES_ALL
    '*':
      '*':
        - indices:data/read/field_caps*
        - indices:data/read/xpack/rollup*
        - indices:admin/mappings/get*
        - indices:admin/get

# For the kibana server
sg_kibana_server:
  readonly: true
  cluster:
      - CLUSTER_MONITOR
      - CLUSTER_COMPOSITE_OPS
      - cluster:admin/xpack/monitoring*
      - indices:admin/template*
  indices:
    '?kibana':
      '*':
        - INDICES_ALL
    '?kibana-6':
      '*':
        - INDICES_ALL
    '?kibana_*':
      '*':
        - INDICES_ALL
    '?reporting*':
      '*':
        - INDICES_ALL
    '?monitoring*':
      '*':
        - INDICES_ALL
    '?tasks':
      '*':
        - INDICES_ALL
    '?management-beats*':
      '*':
        - INDICES_ALL
    '*':
      '*':
        - "indices:admin/aliases*"

sg_snapshot_restore:
  cluster:
    - cluster:admin/repository/put
    - cluster:admin/repository/get
    - cluster:admin/snapshot/status
    - cluster:admin/snapshot/get
    - cluster:admin/snapshot/create
    - cluster:admin/snapshot/restore
    - cluster:admin/snapshot/delete
  indices:
    '*':
      '*':
        - indices:data/write/index
        - indices:admin/create

# For logstash and beats
sg_logstash:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
    - indices:admin/template/get
    - indices:admin/template/put
  indices:
    'logstash-*':
      '*':
        - CRUD
        - CREATE_INDEX
    '*beat*':
      '*':
        - CRUD
        - CREATE_INDEX

# For the curator user
sg_curator:
  cluster:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  indices:
    '*':
      '*':
      - UNLIMITED

# Allows adding and modifying repositories and creating and restoring snapshots
sg_manage_snapshots:
  cluster:
    - MANAGE_SNAPSHOTS
  indices:
    '*':
      '*':
        - "indices:data/write/index"
        - "indices:admin/create"

# Allows each user to access own named index
sg_own_index:
  cluster:
    - CLUSTER_COMPOSITE_OPS
  indices:
    '${user_name}':
      '*':
        - INDICES_ALL

### X-Pack COMPATIBILITY
sg_xp_monitoring:
  readonly: true
  cluster:
    - cluster:monitor/xpack/info
    - cluster:monitor/main
    - cluster:admin/xpack/monitoring/bulk
  indices:
    '?monitor*':
      '*':
        - INDICES_ALL

sg_xp_alerting:
  readonly: true
  cluster:
    - indices:data/read/scroll
    - cluster:admin/xpack/watcher*
    - cluster:monitor/xpack/watcher*
  indices:
    '?watches*':
      '*':
        - INDICES_ALL
    '?watcher-history-*':
      '*':
        - INDICES_ALL
    '?triggered_watches':
      '*':
        - INDICES_ALL
    '*':
      '*':
        - READ
        - indices:admin/aliases/get

sg_xp_machine_learning:
  readonly: true
  cluster:
    - cluster:admin/persistent*
    - cluster:internal/xpack/ml*
    - indices:data/read/scroll*
    - cluster:admin/xpack/ml*
    - cluster:monitor/xpack/ml*
  indices:
    '*':
      '*':
        - READ
        - indices:admin/get*
    '?ml-*':
      '*':
        - "*"


### LEGACY ROLES, FOR COMPATIBILITY ONLY
### WILL BE REMOVED IN SG7, DO NOT USE ANYMORE

sg_readonly_and_monitor:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS_RO
  indices:
    '*':
      '*':
        - READ

# Make xpack monitoring work
sg_monitor:
  cluster:
    - cluster:admin/xpack/monitoring/*
    - cluster:admin/ingest/pipeline/put
    - cluster:admin/ingest/pipeline/get
    - indices:admin/template/get
    - indices:admin/template/put
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
  indices:
    '?monitor*':
      '*':
        - INDICES_ALL
    '?marvel*':
      '*':
        - INDICES_ALL
    '?kibana*':
      '*':
        - READ
    '*':
      '*':
        - indices:data/read/field_caps

# Make xpack alerting work
sg_alerting:
  cluster:
    - indices:data/read/scroll
    - cluster:admin/xpack/watcher/watch/put
    - cluster:admin/xpack/watcher*
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
  indices:
    '?kibana*':
      '*':
        - READ
    '?watches*':
      '*':
        - INDICES_ALL
    '?watcher-history-*':
      '*':
        - INDICES_ALL
    '?triggered_watches':
      '*':
        - INDICES_ALL
    '*':
      '*':
        - READ

Hi,

yes, when using Kibana with Search Guard, you should always set do_not_fail_on_forbidden to true:
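That is, in sg_config.yml (following the structure of the file posted above):

```yaml
searchguard:
  dynamic:
    kibana:
      do_not_fail_on_forbidden: true
```

Note that since the Search Guard configuration lives in an index, the change has to be uploaded to the cluster with sgadmin before it takes effect.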

(We should probably add a check and issue a warning if this is not the case.)

However, the behaviour you reported should not happen, even if dnfof is set to false. There might be an issue with the permission schema of the shipped "kibanaserver" role. We will investigate this.

Thanks for reporting!

Thanks for the quick response.

I will go with that setting and see if it resolves the issue during deploy. If not, I will post the issues I encounter here. Either way, I would be interested to know the outcome of the permission schema findings.

Thanks again!

The role definitions seem ok. According to the config posted, the sg_kibana_server role has access to the .kibana_task_manager index:

'?kibana_*':
  '*':
    - INDICES_ALL

The role definition has not been changed between 6.6.2 and 6.7.x, so there should be nothing to re-configure.
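As a sanity check on the pattern semantics (per the comments in the shipped sg_roles.yml: `?` matches exactly one character, `*` matches any sequence, so `?kibana_*` does cover `.kibana_task_manager`), here is a minimal sketch of the matching in Python. This is an illustration of the documented rules, not Search Guard's actual implementation:

```python
import re

def sg_pattern_to_regex(pattern: str) -> str:
    """Translate a Search Guard simple wildcard pattern to an anchored regex."""
    parts = []
    for ch in pattern:
        if ch == "*":
            parts.append(".*")   # any character sequence, possibly empty
        elif ch == "?":
            parts.append(".")    # exactly one character (never empty)
        else:
            parts.append(re.escape(ch))
    return "^" + "".join(parts) + "$"

def sg_match(pattern: str, name: str) -> bool:
    """Return True if the index name matches the wildcard pattern."""
    return re.match(sg_pattern_to_regex(pattern), name) is not None
```

With these rules, `sg_match("?kibana_*", ".kibana_task_manager")` is true, which is why the posted role should grant access.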

According to the error message, the role mapping also seems correct: the kibanaserver user does have the SG role sg_kibana_server.

In other words, this error should not happen :wink:
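One way to narrow it down is to replay the failing request directly as the kibanaserver user, outside Kibana. A sketch, using the request path from your log (the password is a placeholder; `-k` skips TLS verification, matching verificationMode: none in the posted kibana.yml):

```
curl -k -u kibanaserver:kibanaserver_password \
  "https://localhost:9200/.kibana_task_manager/_doc/_search?ignore_unavailable=true" \
  -H 'Content-Type: application/json' \
  -d '{"query":{"term":{"type":"task"}},"size":1}'
```

If this returns the same 403 security_exception, the problem is reproducible independently of Kibana's task manager.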

You could check whether the configuration in the running cluster is actually identical to the posted files by retrieving a backup of the config:
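For example, using sgadmin's retrieve mode (`-r`). The paths below are assumptions based on the posted elasticsearch.yml; substitute your actual admin certificate and key:

```
# Dump the active Search Guard configuration from the cluster into
# sg_*.yml files in the current directory (-r = retrieve).
./sgadmin.sh -r \
  -h localhost \
  -cacert /etc/elasticsearch/x-pack/ca.crt \
  -cert /path/to/admin.crt \
  -key /path/to/admin.key \
  -icl -nhnv   # ignore cluster name, skip hostname verification
```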

Then compare the retrieved sg_kibana_server role against the one you posted.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.