7.0 Upgrade assistant cluster issue needs to be resolved

I followed the instructions here (Upgrading from 6.x to 7.x | Security for Elasticsearch | Search Guard) to upgrade from 6.8 to 7.3. The local admin user was able to log in and see the Kibana dashboard, etc. The LDAP user from the authorized group (Elk-Admins) was presented with a 403 error in JSON format as plain text. I was able to move past this by modifying sg_roles.yml to what is shown below.

Now, that same LDAP user is presented with a 403 when attempting to save changes within Advanced Settings in the Kibana dashboard. It looks like two tenants were created during the upgrade process. According to the log message, the tenant SGS_GLOBAL_TENANT is the issue:

Tenant SGS_GLOBAL_TENANT is not allowed to write (user: LDAP User)
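The tenant flags quoted below come from the Search Guard authinfo endpoint, which can be queried directly (hostname and credentials here are placeholders for our actual ones):

```shell
# Query Search Guard for the authenticated user's roles and tenants.
# Replace host, port, and credentials with your own; -k skips certificate
# verification, matching the self-signed demo certificates in use here.
curl -k -u "ldap_user:password" \
  "https://host1:9200/_searchguard/authinfo?pretty"
```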

Initially, the authinfo looked like this:

  "sg_tenants" : {
    "LDAP User" : true,
    "SGS_GLOBAL_TENANT" : false,
    "admin_tenant" : false

but I was able to add admin_tenant to the role, which resulted in:

  "sg_tenants" : {
    "LDAP User" : true,
    "SGS_GLOBAL_TENANT" : false,
    "admin_tenant" : true
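
Concretely, this was done by adding a tenant_permissions entry to the Elk-Admins role in sg_roles.yml (the full file appears further below):

```yaml
# Excerpt from sg_roles.yml: grant write access on the admin_tenant
# to the Elk-Admins role.
Elk-Admins:
  tenant_permissions:
  - tenant_patterns:
    - "admin_tenant"
    allowed_actions:
    - "SGS_KIBANA_ALL_WRITE"
```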

Unfortunately, because the error is related to SGS_GLOBAL_TENANT, this doesn't change the issue I am experiencing. Debug log from Elasticsearch when the 403 occurs:

[2019-08-06T09:40:03,243][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] sgr: [sg_own_index, Elk-Admins]
[2019-08-06T09:40:03,243][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] raw requestedTenant: 'null'
[2019-08-06T09:40:03,243][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] request class org.elasticsearch.action.update.UpdateRequest
[2019-08-06T09:40:03,243][WARN ][c.f.s.c.PrivilegesInterceptorImpl] [hostname] Tenant SGS_GLOBAL_TENANT is not allowed to write (user: LDAP User)

I'm hoping someone can give direction on this one rather than having to go by trial and error. The relevant configuration files are below.

Search Guard configuration files
sg_action_groups.yml:

---
_sg_meta:
  type: "actiongroups"
  config_version: 2
CLUSTER_ALL:
  reserved: true
  hidden: false
  allowed_actions:
  - "cluster:*"
  type: "cluster"
  description: "Migrated from v6"
  static: false
ALL:
  reserved: true
  hidden: false
  allowed_actions:
  - "INDICES_ALL"
  type: "index"
  description: "Migrated from v6"
  static: false
CRUD:
  reserved: true
  hidden: false
  allowed_actions:
  - "READ"
  - "WRITE"
  type: "index"
  description: "Migrated from v6"
  static: false
SEARCH:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/read/search*"
  - "indices:data/read/msearch*"
  - "SUGGEST"
  type: "index"
  description: "Migrated from v6"
  static: false
MONITOR:
  reserved: true
  hidden: false
  allowed_actions:
  - "INDICES_MONITOR"
  type: "index"
  description: "Migrated from v6"
  static: false
DATA_ACCESS:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/*"
  - "CRUD"
  type: "index"
  description: "Migrated from v6"
  static: false
CREATE_INDEX:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:admin/create"
  - "indices:admin/mapping/put"
  type: "index"
  description: "Migrated from v6"
  static: false
WRITE:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/write*"
  - "indices:admin/mapping/put"
  type: "index"
  description: "Migrated from v6"
  static: false
MANAGE_ALIASES:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:admin/aliases*"
  type: "index"
  description: "Migrated from v6"
  static: false
READ:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/read*"
  - "indices:admin/mappings/fields/get*"
  type: "index"
  description: "Migrated from v6"
  static: false
INDICES_ALL:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:*"
  type: "index"
  description: "Migrated from v6"
  static: false
DELETE:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/write/delete*"
  type: "index"
  description: "Migrated from v6"
  static: false
CLUSTER_COMPOSITE_OPS:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/write/bulk"
  - "indices:admin/aliases*"
  - "indices:data/write/reindex"
  - "CLUSTER_COMPOSITE_OPS_RO"
  type: "cluster"
  description: "Migrated from v6"
  static: false
CLUSTER_COMPOSITE_OPS_RO:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/read/mget"
  - "indices:data/read/msearch"
  - "indices:data/read/mtv"
  - "indices:data/read/coordinate-msearch*"
  - "indices:admin/aliases/exists*"
  - "indices:admin/aliases/get*"
  - "indices:data/read/scroll"
  type: "cluster"
  description: "Migrated from v6"
  static: false
GET:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/read/get*"
  - "indices:data/read/mget*"
  type: "index"
  description: "Migrated from v6"
  static: false
MANAGE:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:monitor/*"
  - "indices:admin/*"
  type: "index"
  description: "Migrated from v6"
  static: false
CLUSTER_MONITOR:
  reserved: true
  hidden: false
  allowed_actions:
  - "cluster:monitor/*"
  type: "cluster"
  description: "Migrated from v6"
  static: false
MANAGE_SNAPSHOTS:
  reserved: true
  hidden: false
  allowed_actions:
  - "cluster:admin/snapshot/*"
  - "cluster:admin/repository/*"
  type: "index"
  description: "Migrated from v6"
  static: false
INDEX:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/write/index*"
  - "indices:data/write/update*"
  - "indices:admin/mapping/put"
  - "indices:data/write/bulk*"
  type: "index"
  description: "Migrated from v6"
  static: false
UNLIMITED:
  reserved: true
  hidden: false
  allowed_actions:
  - "*"
  type: "index"
  description: "Migrated from v6"
  static: false
INDICES_MONITOR:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:monitor/*"
  type: "index"
  description: "Migrated from v6"
  static: false
SUGGEST:
  reserved: true
  hidden: false
  allowed_actions:
  - "indices:data/read/suggest*"
  type: "index"
  description: "Migrated from v6"
  static: false

sg_roles_mapping.yml:

---
_sg_meta:
  type: "rolesmapping"
  config_version: 2
Elk-Users:
  reserved: false
  hidden: false
  backend_roles:
  - "kibanauser"
  - "readall"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
sg_all_access:
  reserved: true
  hidden: false
  backend_roles:
  - "admin"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
sg_own_index:
  reserved: false
  hidden: false
  backend_roles: []
  hosts: []
  users:
  - "*"
  and_backend_roles: []
  description: "Migrated from v6"
sg_logstash:
  reserved: false
  hidden: false
  backend_roles:
  - "logstash"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
Elk-Admins:
  reserved: true
  hidden: false
  backend_roles:
  - "admin"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
sg_kibana_user:
  reserved: false
  hidden: false
  backend_roles:
  - "kibanauser"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
sg_readall:
  reserved: true
  hidden: false
  backend_roles:
  - "readall"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
sg_manage_snapshots:
  reserved: true
  hidden: false
  backend_roles:
  - "snapshotrestore"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"
sg_kibana_server:
  reserved: true
  hidden: false
  backend_roles: []
  hosts: []
  users:
  - "kibanaserver"
  and_backend_roles: []
  description: "Migrated from v6"

sg_roles.yml:

---
_sg_meta:
  type: "roles"
  config_version: 2
Elk-Users:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "INDICES_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  index_permissions:
  - index_patterns:
    - "?kibana"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "DELETE"
    - "MANAGE"
    - "INDEX"
  - index_patterns:
    - "?kibana-6"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "DELETE"
    - "MANAGE"
    - "INDEX"
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "indices:data/read/*"
  tenant_permissions: []
  static: false
sg_own_index:
  reserved: false
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "CLUSTER_COMPOSITE_OPS"
  index_permissions:
  - index_patterns:
    - "${user_name}"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  tenant_permissions: []
  static: false
sg_xp_monitoring:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "cluster:monitor/xpack/info"
  - "cluster:monitor/main"
  - "cluster:admin/xpack/monitoring/bulk"
  index_permissions:
  - index_patterns:
    - "?monitor*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  tenant_permissions: []
  static: false
sg_kibana_user:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "INDICES_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  index_permissions:
  - index_patterns:
    - "?kibana"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "DELETE"
    - "MANAGE"
    - "INDEX"
  - index_patterns:
    - "?kibana-6"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "DELETE"
    - "MANAGE"
    - "INDEX"
  - index_patterns:
    - "?kibana_*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "DELETE"
    - "MANAGE"
    - "INDEX"
  - index_patterns:
    - "?tasks"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?management-beats"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "indices:data/read/xpack/rollup*"
    - "indices:admin/mappings/get*"
    - "indices:admin/get"
    - "indices:data/read/field_caps*"
  - index_patterns:
    - "?.kibana_*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  tenant_permissions: []
  static: false
sg_manage_snapshots:
  reserved: false
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "MANAGE_SNAPSHOTS"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "indices:data/write/index"
    - "indices:admin/create"
  tenant_permissions: []
  static: false
sg_xp_alerting:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "indices:data/read/scroll"
  - "cluster:admin/xpack/watcher*"
  - "cluster:monitor/xpack/watcher*"
  index_permissions:
  - index_patterns:
    - "?watches*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?watcher-history-*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?triggered_watches"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "indices:admin/aliases/get"
  tenant_permissions: []
  static: false
sg_xp_machine_learning:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "cluster:admin/persistent*"
  - "cluster:internal/xpack/ml*"
  - "indices:data/read/scroll*"
  - "cluster:admin/xpack/ml*"
  - "cluster:monitor/xpack/ml*"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
    - "indices:admin/get*"
  - index_patterns:
    - "?ml-*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "*"
  tenant_permissions: []
  static: false
sg_kibana_server:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  - "cluster:admin/xpack/monitoring*"
  - "indices:admin/template*"
  - "indices:data/read/scroll*"
  index_permissions:
  - index_patterns:
    - "?kibana"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?kibana-6"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?kibana_*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
    - "CREATE_INDEX"
  - index_patterns:
    - "?reporting*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?monitoring*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?tasks"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "?management-beats*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "INDICES_ALL"
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "indices:admin/aliases*"
  tenant_permissions: []
  static: false
sg_all_access:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "UNLIMITED"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "UNLIMITED"
  tenant_permissions:
  - tenant_patterns:
    - "admin_tenant"
    allowed_actions:
    - "SGS_KIBANA_ALL_WRITE"
  static: false
sg_logstash:
  reserved: false
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS"
  - "indices:admin/template/get"
  - "indices:admin/template/put"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "CRUD"
    - "CREATE_INDEX"
  - index_patterns:
    - "*beat*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "CRUD"
    - "CREATE_INDEX"
  tenant_permissions: []
  static: false
Elk-Admins:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "UNLIMITED"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "UNLIMITED"
  tenant_permissions:
  - tenant_patterns:
    - "*"
    allowed_actions:
    - "UNLIMITED"
  - tenant_patterns:
    - "admin_tenant"
    allowed_actions:
    - "SGS_KIBANA_ALL_WRITE"
  static: false
sg_readall_and_monitor:
  reserved: false
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "CLUSTER_MONITOR"
  - "CLUSTER_COMPOSITE_OPS_RO"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
  tenant_permissions: []
  static: false
sg_readall:
  reserved: true
  hidden: false
  description: "Migrated from v6 (all types mapped)"
  cluster_permissions:
  - "CLUSTER_COMPOSITE_OPS_RO"
  index_permissions:
  - index_patterns:
    - "*"
    dls: null
    fls: null
    masked_fields: null
    allowed_actions:
    - "READ"
  tenant_permissions: []
  static: false

elasticsearch.yml configuration file:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: ELK
cluster.max_shards_per_node: 20000
cluster.initial_master_nodes:
  - host1
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: ${HOSTNAME}
node.attr.box_type: hot
node.master: true
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/elasticsearch
#
# Path to log files:
#
path.logs: /var/log/elasticsearch
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# --------------------------------- Processor ----------------------------------
thread_pool:
    warmer:
        core: 1
        max: 6
        keep_alive: 2m

# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: host1
#network.host: localhost
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when new node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.zen.ping.unicast.hosts: ["host1", "host2"]
#discovery.zen.ping.multicast.enabled: false
#
# Prevent the "split brain" by configuring the majority of nodes (total number of master-eligible nodes / 2 + 1):
#
#discovery.zen.minimum_master_nodes: 3
#
# For more information, consult the zen discovery module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#xpack.security.enabled: false
#xpack.security.authc:
#  anonymous:
#    username: anonymous_user
#    roles: role1, role2
#    authz_exception: true

searchguard.allow_unsafe_democertificates: false
searchguard.audit.type: internal_elasticsearch
searchguard.enable_snapshot_restore_privilege: true
searchguard.check_snapshot_restore_write_privileges: true
searchguard.restapi.roles_enabled: ["sg_all_access","Elk-Admins"]
cluster.routing.allocation.disk.threshold_enabled: false
discovery.zen.minimum_master_nodes: 1
node.max_local_storage_nodes: 3
xpack.security.enabled: false
..SG LDAP SETTINGS SCRUBBED..

#Enable Enterprise features
searchguard.enterprise_modules_enabled: true
#Map LDAP roles to SG
searchguard.roles_mapping_resolution: BOTH

kibana.yml configuration file:

searchguard.readonly_mode.roles: ["sg_kibana_user", "Elk-Users"]
# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "host1"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URL of the Elasticsearch instance to use for all your queries.
# Use HTTPS instead of HTTP
elasticsearch.hosts: "https://host1:9200"

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "discover"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
# Configure the Kibana internal server user
elasticsearch.username: "kibanaserver"
elasticsearch.password: "kibanaserver"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files validate that your Elasticsearch backend uses the same key files.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
# Disable SSL verification because we use self-signed demo certificates
elasticsearch.ssl.verificationMode: none

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
elasticsearch.requestTimeout: 60000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
# Whitelist the Search Guard Multi Tenancy Header
elasticsearch.requestHeadersWhitelist: [ "Authorization", "sgtenant" ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 0

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
logging.dest: /var/log/kibana.log

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# The default locale. This locale can be used in certain circumstances to substitute any missing
# translations.
#i18n.defaultLocale: "en"
xpack.security.enabled: false

Can you please assign the “kibanauser” backend role to Elk-Admins and try again?

Thank you for the reply. I was able to add kibanauser to the migrated sg_roles_mapping.yml:

Elk-Admins:
  reserved: true
  hidden: false
  backend_roles:
  - "admin"
  - "kibanauser"
  hosts: []
  users: []
  and_backend_roles: []
  description: "Migrated from v6"

I then updated the cluster using the new configs generated from the migration:

Contacting elasticsearch cluster 'ELK' and wait for YELLOW clusterstate ...
Clustername: ELK
Clusterstate: GREEN
Number of nodes: 2
Number of data nodes: 2
searchguard index already exists, so we do not need to create one.
Populate config from /usr/share/elasticsearch/plugins/search-guard-7/my_migrate_dir/v7
my_migrate_dir/v7/sg_action_groups.yml OK
my_migrate_dir/v7/sg_internal_users.yml OK
my_migrate_dir/v7/sg_roles.yml OK
my_migrate_dir/v7/sg_roles_mapping.yml OK
my_migrate_dir/v7/sg_config.yml OK
my_migrate_dir/v7/sg_tenants.yml OK
Will update '_doc/config' with my_migrate_dir/v7/sg_config.yml
   SUCC: Configuration for 'config' created or updated
Will update '_doc/roles' with my_migrate_dir/v7/sg_roles.yml
   SUCC: Configuration for 'roles' created or updated
Will update '_doc/rolesmapping' with my_migrate_dir/v7/sg_roles_mapping.yml
   SUCC: Configuration for 'rolesmapping' created or updated
Will update '_doc/internalusers' with my_migrate_dir/v7/sg_internal_users.yml
   SUCC: Configuration for 'internalusers' created or updated
Will update '_doc/actiongroups' with my_migrate_dir/v7/sg_action_groups.yml
   SUCC: Configuration for 'actiongroups' created or updated
Will update '_doc/tenants' with my_migrate_dir/v7/sg_tenants.yml
   SUCC: Configuration for 'tenants' created or updated
Done with success
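
For reference, the upload above was done with sgadmin; the invocation was along these lines (certificate paths and hostname are placeholders for our actual ones):

```shell
# Upload the migrated v7 configuration files to the cluster.
# -icl ignores the cluster name check, -nhnv disables hostname verification
# (we use self-signed certificates).
./sgadmin.sh \
  -cd ../my_migrate_dir/v7 \
  -icl -nhnv \
  -cacert /path/to/root-ca.pem \
  -cert /path/to/admin.pem \
  -key /path/to/admin-key.pem \
  -h host1
```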

When I attempt to save something in the Advanced Settings section of Kibana, I still get a 403 error code.

(Screenshot: Kibana 403 error dialog, 2019-08-12)

Do the logs still look like this?

[2019-08-06T09:40:03,243][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] sgr: [sg_own_index, Elk-Admins]
[2019-08-06T09:40:03,243][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] raw requestedTenant: 'null'
[2019-08-06T09:40:03,243][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] request class org.elasticsearch.action.update.UpdateRequest
[2019-08-06T09:40:03,243][WARN ][c.f.s.c.PrivilegesInterceptorImpl] [hostname] Tenant SGS_GLOBAL_TENANT is not allowed to write (user: LDAP User)

Can you please provide more logs at debug level so that we can see what happened before?

Correct, I'm still getting that same message in the logs. Here's what comes before and after it with debug logging enabled for Search Guard. Line 15 in the log below is the warning.

[2019-08-13T09:10:52,181][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] Check authdomain for rest internal/4 or 2 in total
[2019-08-13T09:10:52,182][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] Rest user 'User [name=CN=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]' is authenticated
[2019-08-13T09:10:52,182][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] sgtenant 'null'
[2019-08-13T09:10:52,188][DEBUG][c.f.s.s.ConfigModelV7    ] [hostname] Pass backendroles from User [name=CN=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]
[2019-08-13T09:10:52,188][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] ### evaluate permissions for User [name=CN=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null] on hostname
[2019-08-13T09:10:52,188][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] action: indices:data/write/update (UpdateRequest)
[2019-08-13T09:10:52,189][DEBUG][c.f.s.r.IndexResolverReplacer] [hostname] Resolve aliases, indices and types from UpdateRequest
[2019-08-13T09:10:52,189][DEBUG][c.f.s.r.IndexResolverReplacer] [hostname] Resolved pattern [.kibana] to [.kibana_4]
[2019-08-13T09:10:52,189][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] requestedResolved : Resolved [aliases=[.kibana], indices=[], allIndices=[.kibana_4], types=[*], originalRequested=[.kibana], remoteIndices=[]]
[2019-08-13T09:10:52,189][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] requested [indices:data/write/update] from 192.168.2.5:58022
[2019-08-13T09:10:52,189][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] requested resolved indextypes: Resolved [aliases=[.kibana], indices=[], allIndices=[.kibana_4], types=[*], originalRequested=[.kibana], remoteIndices=[]]
[2019-08-13T09:10:52,190][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] sgr: [sg_own_index, Elk-Admins]
[2019-08-13T09:10:52,190][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] raw requestedTenant: 'null'
[2019-08-13T09:10:52,190][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] request class org.elasticsearch.action.update.UpdateRequest
[2019-08-13T09:10:52,190][WARN ][c.f.s.c.PrivilegesInterceptorImpl] [hostname] Tenant SGS_GLOBAL_TENANT is not allowed to write (user: CN=LDAP User)
[2019-08-13T09:10:52,190][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] Result from privileges interceptor: true
[2019-08-13T09:10:52,192][DEBUG][c.f.s.f.SearchGuardFilter] [hostname] PrivEvalResponse [allowed=false, missingPrivileges=[indices:data/write/update], allowedFlsFields=null, maskedFields=null, queries=null]
[2019-08-13T09:10:52,193][DEBUG][c.f.s.f.SearchGuardFilter] [hostname] no permissions for [indices:data/write/update]
[2019-08-13T09:10:52,201][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] Check authdomain for rest internal/4 or 2 in total
[2019-08-13T09:10:52,201][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] Rest user 'User [name=kibanaserver, backend_roles=[], requestedTenant=null]' is authenticated
[2019-08-13T09:10:52,202][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] sgtenant 'null'
[2019-08-13T09:10:52,202][DEBUG][c.f.s.s.ConfigModelV7    ] [hostname] Pass backendroles from User [name=kibanaserver, backend_roles=[], requestedTenant=null]
[2019-08-13T09:10:52,202][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] ### evaluate permissions for User [name=kibanaserver, backend_roles=[], requestedTenant=null] on hostname
[2019-08-13T09:10:52,202][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] action: cluster:monitor/xpack/info (XPackInfoRequest)
[2019-08-13T09:10:52,202][DEBUG][c.f.s.r.IndexResolverReplacer] [hostname] Resolve aliases, indices and types from XPackInfoRequest
[2019-08-13T09:10:52,202][DEBUG][c.f.s.r.IndexResolverReplacer] [hostname] class org.elasticsearch.protocol.xpack.XPackInfoRequest not supported (It is likely not a indices related request)
[2019-08-13T09:10:52,202][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] requestedResolved : Resolved [aliases=[*], indices=[*], allIndices=[*], types=[*], originalRequested=[], remoteIndices=[]]
[2019-08-13T09:10:52,203][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] raw requestedTenant: 'null'
[2019-08-13T09:10:52,203][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] Result from privileges interceptor for cluster perm: null
[2019-08-13T09:10:52,203][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] Allowed because we have cluster permissions for cluster:monitor/xpack/info
[2019-08-13T09:10:52,203][DEBUG][c.f.s.f.SearchGuardFilter] [hostname] PrivEvalResponse [allowed=true, missingPrivileges=[], allowedFlsFields=null, maskedFields=null, queries=null]
[2019-08-13T09:10:52,442][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] Check authdomain for rest internal/4 or 2 in total
[2019-08-13T09:10:52,442][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] Rest user 'User [name=logstash, backend_roles=[logstash], requestedTenant=null]' is authenticated
[2019-08-13T09:10:52,442][DEBUG][c.f.s.a.BackendRegistry  ] [hostname] sgtenant 'null'
[2019-08-13T09:10:52,442][DEBUG][c.f.s.s.ConfigModelV7    ] [hostname] Pass backendroles from User [name=logstash, backend_roles=[logstash], requestedTenant=null]
[2019-08-13T09:10:52,443][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] ### evaluate permissions for User [name=logstash, backend_roles=[logstash], requestedTenant=null] on hostname
[2019-08-13T09:10:52,443][DEBUG][c.f.s.p.PrivilegesEvaluator] [hostname] action: indices:data/write/bulk (BulkRequest)
[2019-08-13T09:10:52,443][DEBUG][c.f.s.r.IndexResolverReplacer] [hostname] Resolve aliases, indices and types from BulkRequest
[2019-08-13T09:10:52,443][DEBUG][c.f.s.r.IndexResolverReplacer] [hostname] Resolved pattern [client-env-2019.08.13] to [client-env-2019.08.13]

Should I open a bug report or a support ticket?

If you have access to our support portal, we recommend filing an issue there, because the support forum offers only best-effort support.

Sorry for jumping in here, but the issue is not 100% clear to me at the moment.

@mattsdevop just to clarify:

From your kibana.yml I can see that you are not using the Kibana multi-tenancy feature, correct?

Multi-tenancy is disabled by default, and I don’t see a config setting that enables it. So for the moment I assume you are not using it, and have not used it in your 6.x setup. Please correct me if I am wrong.

This line here in the log file also makes me believe you are not using multi-tenancy:

[2019-08-06T09:40:03,243][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [hostname] raw requestedTenant: 'null'

Which means that when Kibana talks to Elasticsearch, no tenant was selected by the user (Elk-Admins in your case).

If you are not using multi-tenancy, then the default (and only) tenant is GLOBAL, and consequently you would not have to deal with any tenant settings at all.
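For context, a sketch of the relevant kibana.yml setting (the setting name is assumed from the standard Search Guard Kibana plugin documentation and is not taken from the poster's config):

```yaml
# Search Guard Kibana plugin (sketch; setting name assumed from the
# standard plugin docs): multi-tenancy stays off unless explicitly enabled.
searchguard.multitenancy.enabled: false
```

With this left at its default, Kibana never sends a tenant selection to Elasticsearch, which matches the raw requestedTenant: 'null' lines in the logs.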

However, from the authinfo call I can see that the user does not have write permissions to the global tenant:

 "sg_tenants" : {
    "LDAP User" : true,
    "SGS_GLOBAL_TENANT" : false,
    "admin_tenant" : true

(“false” means read-only access).

If I am correct with my assumptions so far, then it seems the migration from 6.x to 7.x was faulty. There are two errors that I can spot:

Kibana user roles mapping

The (dynamic) sg_kibana_user role has been replaced by the (static) SGS_KIBANA_USER role, so you need to map all Kibana users to that static role. In your roles_mapping.yml, please add:

SGS_KIBANA_USER:
  backend_roles:
  - "kibanauser"

Tenants configuration

If you do not use multi-tenancy, please remove all tenant_permissions settings from all your role definitions.

If I am wrong about your multi-tenancy usage please let me know. In any case, please open an issue on our support portal or GitHub. This is a migration issue we need to examine more closely. Thx!

Thank you for looking into this @jkressin. First, you are correct: we were not using multi-tenancy prior to the upgrade. I tried your suggestions and added the following to roles_mapping.yml:

SGS_KIBANA_USER:
  backend_roles:
  - "kibanauser"

I also removed all tenant_permissions settings from the role definitions. Whether or not the new mapping is present in roles_mapping.yml, I still receive the same error. After reloading the configuration with sgadmin, this is what appears when accessing the Kibana dashboard URL:

{"message":"no permissions for [indices:data/read/search] and User [name=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]: [security_exception] no permissions for [indices:data/read/search] and User [name=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]","statusCode":403,"error":"Forbidden"}

If I log in with the local admin account, Kibana loads as it should, and I am also able to save settings in the Advanced section without issue. If I log in with my LDAP user, I am presented with a white page containing the text above.
I’ve reported the migration issue from 6.7 → 7.x on GitHub here: https://github.com/floragunncom/search-guard/issues/725
I was posting here in hopes that anything learned would be publicly available for others running into this issue in the future.

So to recap: After you made the suggested changes and applied the changes via sgadmin, you do not see the tenant permission error anymore. Instead, when logging in with the LDAP user, you see:

{"message":"no permissions for [indices:data/read/search] and User [name=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]: [security_exception] no permissions for [indices:data/read/search] and User [name=LDAP User, backend_roles=[Elk-Admins, Web-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]","statusCode":403,"error":"Forbidden"}

If this happens, you should also see an error in the Elasticsearch logfile. The messages in the ES logfile are a bit more verbose than the Kibana messages. Can you please check for this error in the ES logs and post it here?

Another thing to check is the authinfo HTTP endpoint, which returns information about the current user.

Can you please visit this endpoint with the LDAP user and post the results here? Since I suspect some kind of role-mapping error, we need to check the user’s SG roles. You can use curl like this:

curl -k -u ldapuser:ldappass -XGET "https://mycluster.com:9200/_searchguard/authinfo?pretty"

Also thanks for reporting the issue on GitHub, we will check what went wrong during the migration in your case.

It appears that this is still a 403 permission issue. After logging in with the LDAP user under the new configuration (all tenant_permissions settings removed, the kibanauser roles mapping added, and the configuration reloaded with sgadmin), the white page with the Kibana permission error appears, and a similar tenant permission error is logged in the Elasticsearch logs at the same time:

[2019-08-21T08:46:21,197][WARN ][c.f.s.c.PrivilegesInterceptorImpl] [hostname] Tenant SGS_GLOBAL_TENANT is not allowed for user CN=LDAP User
[2019-08-21T08:46:21,220][INFO ][o.e.c.m.MetaDataMappingService] [hostname] [sg7-auditlog-2019.08.21/5OvXBw0YQZuuctqonVjcOw] update_mapping [_doc]

This time, instead of the Elasticsearch error being “Tenant SGS_GLOBAL_TENANT is not allowed to write”, it is simply “Tenant SGS_GLOBAL_TENANT is not allowed for user”.

Checking the SG roles, this is the result:

{
  "user" : "User [name=LDAP User, backend_roles=[Elk-Admins, Other-Admins, Slack-Users, Operations, Employees, All, CustomerSupport, Other-Admins, Other-Users], requestedTenant=null]",
  "user_name" : "LDAP User",
  "user_requested_tenant" : null,
  "remote_address" : "host1:59112",
  "backend_roles" : [
    "Elk-Admins",
    "Other-Admins",
    "Slack-Users",
    "Operations",
    "Employees",
    "All",
    "CustomerSupport",
    "Web-Admins",
    "Other-Users"
  ],
  "custom_attribute_names" : [
    "attr.ldap.primaryGroupID",
    "attr.ldap.lastLogon",
    "attr.ldap.logonCount",
    "attr.ldap.proxyAddresses",
    "attr.ldap.badPwdCount",
    "attr.ldap.objectGUID",
    "attr.ldap.userAccountControl",
    "attr.ldap.company",
    "attr.ldap.whenCreated",
    "ldap.original.username",
    "attr.ldap.physicalDeliveryOfficeName",
    "attr.ldap.lastLogoff",
    "attr.ldap.countryCode",
    "attr.ldap.instanceType",
    "attr.ldap.sAMAccountName",
    "attr.ldap.userPrincipalName",
    "attr.ldap.whenChanged",
    "attr.ldap.objectClass",
    "attr.ldap.givenName",
    "ldap.dn",
    "attr.ldap.sAMAccountType",
    "attr.ldap.cn",
    "attr.ldap.accountExpires",
    "attr.ldap.description",
    "attr.ldap.dSCorePropagationData",
    "attr.ldap.lockoutTime",
    "attr.ldap.name",
    "attr.ldap.uSNCreated",
    "attr.ldap.uSNChanged",
    "attr.ldap.displayName",
    "attr.ldap.objectSid",
    "attr.ldap.codePage",
    "attr.ldap.pwdLastSet",
    "attr.ldap.sn",
    "attr.ldap.mobile",
    "attr.ldap.mail",
    "attr.ldap.lastLogonTimestamp"
  ],
  "sg_roles" : [
    "All",
    "Other-Admins",
    "CustomerSupport",
    "Elk-Admins",
    "Jenkins-Admins",
    "Operations",
    "Other-Users",
    "Employees",
    "Slack-Users",
    "sg_own_index"
  ],
  "sg_tenants" : {
    "LDAP User" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}
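As an aside, a quick way to sanity-check an authinfo response like the one above, using the field semantics jkressin described earlier (true = read/write, false = read-only), is a small script along these lines (illustrative only; not part of Search Guard):

```python
import json

def global_tenant_access(authinfo_json: str) -> str:
    """Classify the user's SGS_GLOBAL_TENANT access from an authinfo response.

    Per the note earlier in the thread: true = read/write, false = read-only;
    a missing key means the global tenant is not available to the user at all.
    """
    tenants = json.loads(authinfo_json).get("sg_tenants", {})
    if "SGS_GLOBAL_TENANT" not in tenants:
        return "missing"
    return "read/write" if tenants["SGS_GLOBAL_TENANT"] else "read-only"

# The sg_tenants block from the authinfo output above:
print(global_tenant_access('{"sg_tenants": {"LDAP User": true}}'))  # missing
```

Run against the pasted authinfo output, this reports that SGS_GLOBAL_TENANT is missing entirely, which is exactly the symptom being discussed.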

Any other ideas or things to try?

So it seems to me that, for one reason or another (which we will investigate in the GitHub issue you created, thanks!), none of the roles you use has the SGS_GLOBAL_TENANT available. Usually the migration process should handle that, but in your case it did not work.

So, the next thing to try is to add the tenant manually. If there is a Search Guard role that all Kibana users share, I recommend adding the tenant to that role. I see an SG role “All”; maybe that is the right one to choose.

The goal here is that all users that use Kibana have permission to use the SGS_GLOBAL_TENANT.

SGS_GLOBAL_TENANT already ships with Search Guard, so there is no need to create it manually.

Please first try to grant permissions to all tenants by adding the following snippet to the Kibana user role, possibly the “All” role:

  tenant_permissions:
    - tenant_patterns:
        - "*"
      allowed_actions:
        - "SGS_KIBANA_ALL_WRITE"

So your role definition would look something like this:

All:
  cluster_permissions:
    - ...
  index_permissions:
    - index_patterns:
        - ...
      allowed_actions:
        - ...
  tenant_permissions:
    - tenant_patterns:
        - "*"
      allowed_actions:
        - "SGS_KIBANA_ALL_WRITE"

Thank you. The solution that worked for me was adding the lines you provided to the Elk-Admins and Elk-Users roles in sg_roles.yml, then reloading the configuration with sgadmin.
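For anyone landing here with the same problem, the resulting role definition in sg_roles.yml would look roughly like this (a sketch; the cluster and index permissions are placeholders, since they depend on your setup):

```yaml
Elk-Admins:
  cluster_permissions:
    - ...
  index_permissions:
    - index_patterns:
        - ...
      allowed_actions:
        - ...
  tenant_permissions:
    - tenant_patterns:
        - "*"
      allowed_actions:
        - "SGS_KIBANA_ALL_WRITE"
```

The same tenant_permissions block goes into the Elk-Users role, followed by a configuration reload with sgadmin.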

Cool, sorry for the inconvenience and thanks for your patience. We’ll dig into the root cause now.


Just for the record: I spent the last four days trying to fix this problem, and jkressin’s answer indeed solves it.

Thank you.
