**Elasticsearch version:**
elasticsearch-oss, 7.6.1

**Kibana version (if relevant):**
opendistroforelasticsearch-kibana/stable, 1.7.0
**Describe the issue:**
I can create a user, backend role, and security role that can access `_cat/health`.
I can create a user, backend role, and security role that can access `_cluster/health`.
I am unable to create a user, backend role, and security role that can access `_cluster/health/kibana`.
**Steps to reproduce:**

- Add to `internal_users.yml`:

```yaml
health_user:
  hash: "$2y$12$iMa6xPxkuBMKBWSghI61auv3UG2dtfNxTfmt2hBnC/ulcghWQJ/Qi"
  reserved: true
  backend_roles:
    - "health_backend_role"
  description: "health user"
```
- add a security role to roles.yml
health_security_role: reserved: true index_permissions: - index_patterns: - "_cluster/health/*" - "_cat/indices/*kibana*" cluster_permissions: - cluster:* - cluster:*/* - cluster:*/*/* - cluster:_health/* - indices:*
- Map the backend role to the security role in `roles_mapping.yml`:

```yaml
health_security_role:
  reserved: true
  backend_roles:
    - "health_backend_role"
  description: "Add health_security policy to kibana_readonly__backend"
```
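After editing these files, the security index has to be reloaded before the changes take effect. A sketch of the reload step with `securityadmin.sh`, assuming the default demo install paths and the demo `kirk` admin certificate (the exact paths are not stated in this report and may differ on other installs):

```shell
# Reload the security configuration from the edited YAML files.
# Paths assume the default package install of the OpenDistro security plugin.
cd /usr/share/elasticsearch/plugins/opendistro_security/tools
./securityadmin.sh \
  -cd ../securityconfig/ \
  -icl -nhnv \
  -cacert /etc/elasticsearch/root-ca.pem \
  -cert /etc/elasticsearch/kirk.pem \
  -key /etc/elasticsearch/kirk-key.pem
```

`-icl` ignores the cluster name and `-nhnv` disables hostname verification, matching the demo-certificate setup in `elasticsearch.yml` below.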
**My configurations:**

`elasticsearch.yml`:
```yaml
path.logs: /var/log/elasticsearch
path.data: /var/lib/elasticsearch

######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
opendistro_security.ssl.transport.pemcert_filepath: esnode.pem
opendistro_security.ssl.transport.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.transport.pemtrustedcas_filepath: root-ca.pem
opendistro_security.ssl.transport.enforce_hostname_verification: false
opendistro_security.ssl.http.enabled: true
opendistro_security.ssl.http.pemcert_filepath: esnode.pem
opendistro_security.ssl.http.pemkey_filepath: esnode-key.pem
opendistro_security.ssl.http.pemtrustedcas_filepath: root-ca.pem
opendistro_security.allow_unsafe_democertificates: true
opendistro_security.allow_default_init_securityindex: true
opendistro_security.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de

opendistro_security.audit.type: internal_elasticsearch
opendistro_security.enable_snapshot_restore_privilege: true
opendistro_security.check_snapshot_restore_write_privileges: true
opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]

cluster.routing.allocation.disk.threshold_enabled: false
node.max_local_storage_nodes: 3
######## End OpenDistro for Elasticsearch Security Demo Configuration ########
```
`kibana.yml`:

```yaml
opendistro_security.multitenancy.enabled: false
opendistro_security.multitenancy.tenants.enable_global: true
opendistro_security.multitenancy.tenants.enable_private: false
opendistro_security.readonly_mode.roles: ["kibana_read_only"]
opendistro_security.cookie.ttl: 86400000
opendistro_security.session.ttl: 86400000
opendistro_security.session.keepalive: true
```
**Expected behavior:**
I should be able to do all three of the following.

`_cat/health`:

```
% curl -k -u health_user:health_user https://localhost:9200/_cat/health
600186711 16:18:31 elasticsearch yellow 1 1 4 4 0 0 2 0 - 66.7%
```
That worked as expected.
`_cluster/health`:

```
curl -k -u health_user:health_user https://0.0.0.0:9200/_cluster/health
{"cluster_name":"elasticsearch","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":4,"active_shards":4,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":2,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":66.66666666666666}
```
That worked as expected.
`_cluster/health/*kibana*`:

```
curl -k -u health_user:health_user https://0.0.0.0:9200/_cluster/health/*kibana*
{"error":{"root_cause":[{"type":"security_exception","reason":"no permissions for and User [name=health_user, backend_roles=[health_backend_role], requestedTenant=null]"}],"type":"security_exception","reason":"no permissions for and User [name=health_user, backend_roles=[health_backend_role], requestedTenant=null]"},"status":403}
```
This one did not work as expected.
The error message in elasticsearch.log is:

```
[2020-09-15T11:51:04,566][WARN ][c.a.o.s.c.PrivilegesInterceptorImpl] [es-master] Tenant global_tenant is not allowed for user health_user
```
- Note that the last command does work as expected using an admin user:
```
curl -k -u admin:xxx https://0.0.0.0:9200/_cluster/health/*kibana*
{"cluster_name":"elasticsearch","status":"green","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":0,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":100.0}
```
There is also no error message about `global_tenant` in the log when I run the command as admin.