Global_tenant is not allowed for user

Elasticsearch version:
elasticsearch-oss, 7.6.1

Kibana version (if relevant):
opendistroforelasticsearch-kibana/stable, 1.7.0

Describe the issue:
I can create a user, backend-role, security-role to access: _cat/health
I can create a user, backend-role, security-role to access: _cluster/health
I am unable to create a user, backend-role, security-role to access: _cluster/health/*kibana*

Steps to reproduce:

  1. Add the user to internal_users.yml
health_user:
  hash: "$2y$12$iMa6xPxkuBMKBWSghI61auv3UG2dtfNxTfmt2hBnC/ulcghWQJ/Qi"
  reserved: true
  backend_roles:
  - "health_backend_role"
  description: "health user
  2. Add a security role to roles.yml
health_security_role:
  reserved: true
  index_permissions:
    - index_patterns:
      - "_cluster/health/*"
      - "_cat/indices/*kibana*"
  cluster_permissions:
    - cluster:*
    - cluster:*/*
    - cluster:*/*/*
    - cluster:_health/*
    - indices:*
  3. Map the backend_role to the security role in roles_mapping.yml

health_security_role:
  reserved: true
  backend_roles:
  - "health_backend_role"
  description: "Add health_security policy to kibana_readonly__backend"

My configurations:
elasticsearch.yml

> path.logs: /var/log/elasticsearch
> path.data: /var/lib/elasticsearch
> 
> ######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
> opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
> opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
> opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
> opendistro_security.ssl.transport.enforce_hostname_verification: false
> opendistro_security.ssl.http.enabled: true
> opendistro_security.ssl.http.pemcert_filepath: esnode.pem
> opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
> opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
> opendistro_security.allow_unsafe_democertificates: true
> opendistro_security.allow_default_init_securityindex: true
> opendistro_security.authcz.admin_dn:
>   - CN=kirk,OU=client,O=client,L=test, C=de
> 
> opendistro_security.audit.type: internal_elasticsearch
> opendistro_security.enable_snapshot_restore_privilege: true
> opendistro_security.check_snapshot_restore_write_privileges: true
> opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
> 
> cluster.routing.allocation.disk.threshold_enabled: false
> node.max_local_storage_nodes: 3
> ######## End OpenDistro for Elasticsearch Security Demo Configuration ########

kibana.yml

opendistro_security.multitenancy.enabled: false
opendistro_security.multitenancy.tenants.enable_global: true
opendistro_security.multitenancy.tenants.enable_private: false

opendistro_security.readonly_mode.roles: ["kibana_read_only"]
opendistro_security.cookie.ttl: 86400000
opendistro_security.session.ttl: 86400000
opendistro_security.session.keepalive: true

Expected behavior:

Should be able to do the following:

  1. _cat/health

% curl -k -u health_user:health_user https://localhost:9200/_cat/health

600186711 16:18:31 elasticsearch yellow 1 1 4 4 0 0 2 0 - 66.7%

That worked as expected.

  2. _cluster/health

curl -k -u health_user:health_user https://0.0.0.0:9200/_cluster/health

{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":4,"active_shards":4,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":66.66666666666666}

That worked as expected.

  3. _cluster/health/*kibana*

curl -k -u health_user:health_user https://0.0.0.0:9200/_cluster/health/*kibana*

{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for and User [name=health_user, backend_roles=[health_backend_role], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for and User [name=health_user, backend_roles=[health_backend_role], requestedTenant=null]"},"status":403}

This one did not work as expected.
The error message in elasticsearch.log is

[2020-09-15T11:51:04,566][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [es-master] Tenant global_tenant is not allowed for user health_user

  4. Note that the last command does work as expected using an admin user:

curl -k -u admin:xxx https://0.0.0.0:9200/_cluster/health/*kibana*

{"cluster_name":"elasticsearch","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}

and there is no error message about the global_tenant when I run as admin.

Seems to work with this change to roles.yml

_meta:
  type: "roles"
  config_version: 2

security_rest_api_access:
  reserved: true

health_security_role:
  reserved: true
  cluster_permissions:
    - cluster:*
    - cluster:_health/*
  tenant_permissions:
    - tenant_patterns:
      - "global_tenant"

Hello Alex,

One thing you have to keep in mind when configuring Search Guard is that the names of ES permissions are not the same as the paths of the REST API. So, for example, _cluster/health/* is only a REST API path; it is neither an index nor the name of a permission. ES permission names actually map to the names of ES transport actions. Search Guard provides pre-defined action groups for the most common cases; these action groups are documented in the Search Guard documentation.

So, for the health-related actions, you'd need the action group SGS_CLUSTER_MONITOR in your cluster_permissions.
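For illustration, the role could then look roughly like this in roles.yml (a sketch only; SGS_CLUSTER_MONITOR is the Search Guard action group named above, and the equivalent built-in group in Open Distro is, to my understanding, cluster_monitor):

health_security_role:
  reserved: true
  cluster_permissions:
    # Action group covering the cluster monitoring actions (health, stats, ...).
    # Use the name that matches your distribution: SGS_CLUSTER_MONITOR for
    # Search Guard, cluster_monitor for Open Distro (assumption, check your docs).
    - SGS_CLUSTER_MONITOR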

SGS_CLUSTER_MONITOR provides cluster-level permissions, so right now it is not possible to restrict monitor actions to a single index.

Regarding the tenant_permissions: the error message you are seeing might be caused by something else; I don't think the tenant_permissions are related to it. Also keep in mind that tenant permissions and index permissions always consist of patterns plus a list of allowed actions. Without a list of allowed actions, these permissions have no effect.
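For reference, a tenant_permissions entry only takes effect when it carries both tenant_patterns and allowed_actions. A sketch, assuming the Open Distro read-only Kibana action group kibana_all_read purely as an example of an allowed action:

health_security_role:
  reserved: true
  cluster_permissions:
    - SGS_CLUSTER_MONITOR
  tenant_permissions:
    - tenant_patterns:
        - "global_tenant"
      # Without allowed_actions this entry has no effect.
      # kibana_all_read is only an illustrative action group here.
      allowed_actions:
        - "kibana_all_read"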


Thank you Cliff for your reply. I know this forum is “best effort”.

You can see from my previous post that I had chopped down the cluster permissions and removed the index permissions. I think we are in agreement there.

The only time I see the message about “global tenant is not allowed for user” is when my request is rejected as unauthorized. The only solution I found was to add the “tenant_permissions” section above. If I remove that section, I get the error message and an unauthorized response. If I put the tenant_permissions section back in, my request is authorized and the error message goes away.

Since I’m not running multi-tenancy, and other entries in the demo version of roles.yml don’t have a tenant_permissions section, I think it is likely that my hack should not be needed, but it does work around the issue.

If you search the web for “global tenant is not allowed for user”, you will find several folks having similar issues. I hope that my solution will be useful to at least some of them. 🙂

Well, regarding the config from your previous post: the entry cluster:_health/* also has no effect. It seems to mimic a REST API path, which is not evaluated here. I'd recommend considering the recommendations I made above regarding the permissions. We strongly recommend using action groups, as these are maintained by Search Guard. New releases of ES might require different permissions, and the action groups will provide them.

Regarding the “global tenant” error message: I just noticed that you are testing with the kibana index. The kibana index gets a “special treatment” if multi tenancy is enabled, so this might be the reason for the error messages. If you don't use multi tenancy, you should disable it by editing sg_config.yml and setting multitenancy_enabled to false. Afterwards, use the sgconfig tool to update the configuration of the cluster.
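For illustration, the relevant part of that file would look roughly like this (a sketch, assuming the usual dynamic-configuration layout; in Open Distro the equivalent file is config.yml with a top-level config: key instead of sg_config:):

sg_config:
  dynamic:
    kibana:
      # Disable Kibana multi tenancy on the cluster side; upload the changed
      # file with the admin tool afterwards, as described above.
      multitenancy_enabled: false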
