Username shown in Kibana is always 'user'

Elasticsearch version:
7.10.2
Server OS version:
ubuntu-18.04
Kibana version (if relevant):
7.10.2
Browser version (if relevant):
Chrome 89.0.4389.82

Search Guard plugin version:
49.0.0

Describe the issue:

In Kibana 7.10.2 we see a 'u' avatar, and clicking it shows 'user'.
Before, we used 6.8.4, and the real username passed via the x-proxy-user header was shown in the bottom-left corner of Kibana.
Everything else works fine.

Here is the _searchguard/authinfo response from 6.8.4 (the username passed via the header is 'admin'):

{
  "user" : "User [name=admin, roles=[offline_access, admin, uma_authorization, user], requestedTenant=null]",
  "user_name" : "admin",
  "user_requested_tenant" : null,
  "remote_address" : "127.0.0.1:52974",
  "backend_roles" : [
    "offline_access",
    "admin",
    "uma_authorization",
    "user"
  ],
  "custom_attribute_names" : [ ],
  "sg_roles" : [
    "sg_all_access",
    "sg_kibana_user",
    "sg_own_index"
  ],
  "sg_tenants" : {
    "admin_tenant" : true,
    "admin" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
} 

And for 7.10.2:

{
  "user" : "User [name=admin, backend_roles=[offline_access, admin, uma_authorization, user], requestedTenant=null]",
  "user_name" : "admin",
  "user_requested_tenant" : null,
  "remote_address" : "127.0.0.1:33528",
  "backend_roles" : [
    "offline_access",
    "admin",
    "uma_authorization",
    "user"
  ],
  "custom_attribute_names" : [ ],
  "attribute_names" : [ ],
  "sg_roles" : [
    "SGS_ALL_ACCESS",
    "SGS_KIBANA_USER"
  ],
  "sg_tenants" : {
    "admin" : true,
    "SGS_GLOBAL_TENANT" : true
  },
  "principal" : null,
  "peer_certificates" : "0",
  "sso_logout_url" : null
}

Expected behavior:

The user should be able to see the username passed to Kibana via the header, as in 6.8.4.

Provide configuration:
Here is the 7.10.2 config.

elasticsearch/config/elasticsearch.yml

network.host: 0.0.0.0
cluster.name: elasticsearch
cluster.initial_master_nodes:
- elasticsearch-master-0
discovery.seed_hosts: elasticsearch-discovery.project
node.name: ${HOSTNAME}
node.processors: 2
searchguard.enterprise_modules_enabled: false
searchguard.ssl.http.clientauth_mode: OPTIONAL
searchguard.ssl.transport.pemcert_filepath: certificates/node.pem
searchguard.ssl.transport.pemkey_filepath: certificates/node.key
searchguard.ssl.transport.pemtrustedcas_filepath: certificates/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: certificates/node_http.pem
searchguard.ssl.http.pemkey_filepath: certificates/node_http.key
searchguard.ssl.http.pemtrustedcas_filepath: certificates/root-ca.pem
searchguard.nodes_dn:
- CN=node,O=project
searchguard.authcz.admin_dn:
- CN=sgadmin,O=project
action.auto_create_index: true
xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:true}
xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}

elasticsearch/plugins/search-guard-7/sgconfig/sg_config.yml

_sg_meta:
  type: "config"
  config_version: 2
sg_config:
  dynamic:
    filtered_alias_mode: "warn"
    disable_rest_auth: false
    disable_intertransport_auth: false
    respect_request_indices_options: false
    license: null
    auth_failure_listeners: {}
    do_not_fail_on_forbidden: true
    multi_rolespan_enabled: false
    hosts_resolver_mode: "ip-only"
    transport_userrname_attribute: null
    do_not_fail_on_forbidden_empty: true
    field_anonymization_salt2: null
    kibana:
      multitenancy_enabled: false
      server_username: "system.kibanaserver"
      index: ".kibana"
      rbac_enabled: false
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: true
        internalProxies: ".*"
        remoteIpHeader: "x-forwarded-for"
    authc:
      proxy_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          challenge: false
          type: "proxy"
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: "noop"
          config: {}
        skip_users: []
      clientcert_auth_domain:
        http_enabled: true
        transport_enabled: false
        order: 0
        http_authenticator:
          challenge: false
          type: "clientcert"
          config:
            username_attribute: "cn"
        authentication_backend:
          type: "noop"
          config: {}
        skip_users: []
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          challenge: true
          type: "basic"
          config: {}
        authentication_backend:
          type: "intern"
          config: {}
        skip_users: []
    authz:
      roles_from_another_ldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config: {}
        skipped_users: []
      roles_from_myldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
            - "localhost:8389"
            bind_dn: null
            password: null
            rolebase: "ou=groups,dc=example,dc=com"
            rolesearch: "(member={0})"
            userroleattribute: null
            userrolename: "disabled"
            rolename: "cn"
            resolve_nested_roles: true
            userbase: "ou=people,dc=example,dc=com"
            usersearch: "(uid={0})"
        skipped_users: []


Hi. Please share your kibana.yml

Hello.
Here is the Kibana config file:

server.host: "0.0.0.0"

server.rewriteBasePath: false


status.allowAnonymous: true

logging.quiet: false

# Default: 30000. Time in milliseconds to wait for responses from the back end or Elasticsearch. This value must be a positive integer.

elasticsearch.requestTimeout: 300000

# Default: 30000. Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.

elasticsearch.shardTimeout: 0

server.basePath: "/kibana"

elasticsearch.hosts: "https://elasticsearch-client.ns:9200"

elasticsearch.username: "system.kibanaserver"

elasticsearch.password: "${KIBANA_PASSWORD}"


elasticsearch.ssl.verificationMode: "certificate"

elasticsearch.ssl.certificateAuthorities: "/usr/share/kibana/config/certificates/root-ca.pem"

elasticsearch.ssl.certificate: "/usr/share/kibana/config/certificates/kibana.pem"

elasticsearch.ssl.key: "/usr/share/kibana/config/certificates/kibana.key"

searchguard.allow_client_certificates: true


searchguard.auth.type: "proxycache"

searchguard.proxycache.proxy_header_ip: "127.0.0.1"


# The header that identifies the user - (required, no default)

searchguard.proxycache.user_header: x-proxy-user


# The header that identifies the user's role(s) - (required, no default)

searchguard.proxycache.roles_header: x-proxy-roles


elasticsearch.requestHeadersWhitelist: [ "Authorization", "sgtenant", "x-forwarded-for", "x-forwarded-by", "x-proxy-user", "x-proxy-roles" ]

Hi. I see the problem. In 7.x, the username is read from the REST API, but proxycache stores the username in the cookie, so SG needs to read the user from the cookie too. I’ve filed the issue: "Proxycache: The username is not visible in the Kibana user menu" (#352) in the Search Guard Kibana Plugin GitLab repository.

Why did you choose proxycache and not proxy?

Hello @trautw, I actually have one further question regarding this:

The proxycache auth type is actually quite a peculiar one. It is for situations where you can only rely on the presence of the x-proxy-user header during the first request to Kibana. Afterwards, the user data is cached (hence the name proxycache).

In most situations, however, the x-proxy-user header is present on all requests that reach Kibana. In these cases, you can just use the proxy auth type.

The Kibana config will be similar; you just need to replace the keyword proxycache with proxy in all config options.

If your environment always provides an x-proxy-user header, I would recommend trying the proxy auth type instead of proxycache and checking whether this fixes the problem you are observing.

Hello.
Thanks for the note about proxycache.
I replaced it with proxy. Could you please clarify "for all config options"? I tried replacing searchguard.proxycache.* with searchguard.proxy.*, but Kibana failed because validation of [searchguard].proxy failed.
I have now removed the searchguard.proxycache.* parameters.
Kibana still shows "user" instead of "admin".
The _searchguard/authinfo request returns:

{
  "user" : "User [name=admin, backend_roles=[offline_access, admin, uma_authorization, user], requestedTenant=null]",
  "user_name" : "admin",
      ...

Regarding the source code @srgbnd pointed out in the created issue: it seems the username should be provided via restapiinfo.user_name. In the browser log I can see the /kibana/api/v1 requests failing:

{"statusCode":500,"error":"Internal Server Error","message":"security_exception","attributes":{"body":{"error":{"root_cause":[{"type":"security_exception","reason":"Unexpected exception indices:data/read/get"}],"type":"security_exception","reason":"Unexpected exception indices:data/read/get"},"status":500}}}

When I request only /api/v1/restapiinfo, Elasticsearch logs the following:

elasticsearch {"type": "server", "timestamp": "2021-03-31T11:40:40,057Z", "level": "ERROR", "component": "c.f.s.f.SearchGuardFilter", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-client-84b49f5c8b-759nr", "message": "Unexpected exception [_searchguard] InvalidIndexNameException[Invalid index name [_searchguard], must not start with '_'.]", "cluster.uuid": "4_5qVLmuSzirXadBLEOP1g", "node.id": "jn1rwlxfSt2WLgAbZ3bIzg" ,
...
elasticsearch {"type": "server", "timestamp": "2021-03-31T11:40:40,062Z", "level": "WARN", "component": "r.suppressed", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-client-84b49f5c8b-759nr", "message": "path: /_searchguard/api/permissionsinfo, params: {index=_searchguard, id=permissionsinfo, type=api}", "cluster.uuid": "4_5qVLmuSzirXadBLEOP1g", "node.id": "jn1rwlxfSt2WLgAbZ3bIzg" ,
elasticsearch "stacktrace": ["org.elasticsearch.ElasticsearchSecurityException: Unexpected exception indices:data/read/get",
elasticsearch "at com.floragunn.searchguard.filter.SearchGuardFilter.apply0(SearchGuardFilter.java:323) [search-guard-suite-security-7.10.2-49.0.0.jar:7.10.2-49.0.0]",
I already reported it here: "Invalid index name [_searchguard], must not start with '_'."
Perhaps these problems are related to each other.

I’m sorry, I was wrong there. When using searchguard.auth.type: "proxy", the configuration in sg_config.yml is actually sufficient. So, if you have the proper config in sg_config.yml, just deleting the other options in config/kibana.yml - as you did - is exactly the right thing.
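To summarize, here is a minimal sketch of the relevant kibana.yml lines for the proxy auth type in this setup (the header whitelist is copied from the config posted earlier in this thread; treat this as an illustration rather than an authoritative template):

```yaml
# Switch to the proxy auth type. The header names (x-proxy-user,
# x-proxy-roles) are configured on the Elasticsearch side in
# sg_config.yml (proxy_auth_domain), so no searchguard.proxy.*
# options are needed (or accepted) in kibana.yml.
searchguard.auth.type: "proxy"

# The proxy headers still need to be whitelisted so that Kibana
# forwards them to Elasticsearch.
elasticsearch.requestHeadersWhitelist: [ "Authorization", "sgtenant", "x-forwarded-for", "x-forwarded-by", "x-proxy-user", "x-proxy-roles" ]
```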

Regarding the original problem: We had a closer look at the problem and now know the root cause. Unfortunately, there is no real workaround for the problem. We are going to release a fix for this ASAP.

Indeed, these are related. The error message is triggered when a Search Guard API is called that is only available in the SG Enterprise Edition.

Thanks again for the report! We will update you as soon as we have news here.

Thank you for the research.
For certain reasons we have frozen the ELK stack at 7.10.2 and are using Kibana Plugin v49.1.0. Could you please look into the possibility of backporting the fix to v49?

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.