Access forbidden on kibana space after upgrade to SG 7.16.3

Hello,

ELK 7.16.3
RHEL 7.9

Since we upgraded our ELK suite and Search Guard to 7.16.3, our clients with read-only access get an “Access Forbidden” error when they try to view any dashboard located in a Kibana Space.

Steps to reproduce:

  1. Create a TESTTENANT tenant in Kibana
  2. Create a TESTSPACE space in this tenant
  3. Create a TESTDASHBOARD dashboard in that space/tenant
  4. In SG, create a TESTROLE role with the SGS_KIBANA_ALL_READ tenant permission on TESTTENANT
  5. Run the following _bulk_resolve call (this is what Kibana does):

curl -XPOST -k -H "X-PROXY-USER: testuser" -H "X-PROXY-ROLES: TESTROLE" -H "kbn-xsrf: reporting" -H "X-Forwarded-For: 127.0.0.1" -H "sgtenant: TESTTENANT" -H 'Content-Type: application/json' http://TESTURL:TESTPORT/s/testspace/api/saved_objects/_bulk_resolve -d '[{"id":"TESTDASHBOARD_ID","type":"dashboard"}]'

Expected behavior
The saved object is returned.

What we get
{"statusCode":403,"error":"Forbidden","message":"no permissions for [] and User TESTUSER<proxy> [backend_roles=[TESTROLE] requestedTenant=TESTTENANT]: security_exception: [security_exception] Reason: no permissions for [] and User TESTUSER<proxy> [backend_roles=[TESTROLE] requestedTenant=TESTTENANT"}

Strangely, it works if you use _bulk_get instead of _bulk_resolve.
_bulk_resolve: Bulk resolve objects API | Kibana Guide [8.4] | Elastic
_bulk_get: Bulk get objects API | Kibana Guide [8.4] | Elastic
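For comparison, this is the _bulk_get variant of the same request, which does work in our setup. It is identical to the failing _bulk_resolve call above except for the endpoint path (same placeholder host, user, tenant and dashboard ID as in the repro steps):

```shell
# Same headers and body as the failing _bulk_resolve call above;
# only the endpoint differs (_bulk_get instead of _bulk_resolve).
# TESTURL, TESTPORT, testspace and TESTDASHBOARD_ID are the
# placeholders from the repro steps, not real values.
curl -XPOST -k \
  -H "X-PROXY-USER: testuser" \
  -H "X-PROXY-ROLES: TESTROLE" \
  -H "kbn-xsrf: reporting" \
  -H "X-Forwarded-For: 127.0.0.1" \
  -H "sgtenant: TESTTENANT" \
  -H 'Content-Type: application/json' \
  http://TESTURL:TESTPORT/s/testspace/api/saved_objects/_bulk_get \
  -d '[{"id":"TESTDASHBOARD_ID","type":"dashboard"}]'
```

With the same user, role and tenant headers, this call returns the saved object instead of a 403.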

Thank you for support.

@mcarlin What was your original ELK version? What is the version of the SG plugin?

Hello Pablo,

Current searchguard_version: 7.16.3-52.6.0
The original ELK version was 7.10.1.

Mickaël

@mcarlin Latest supported SG version for ELK 7.16.3 is 53.0.0. Would you mind trying that?

Hello Pablo,

Can you confirm it is possible to roll back from 53.0.0 to 52.6.0 if we experience any trouble? We don’t want to add instability to our platform. This access-forbidden issue is already generating a lot of complaints from our clients.

Mickaël

Hello Pablo,

We managed to upgrade another ES cluster of ours to SG 53.0.0 (both the Kibana and ES plugins).

We see the same behaviour: as this screenshot shows, the _bulk_resolve API call is still failing. It really seems to be a bug in SG.

Mickaël

Hello,

Let me know if you need additional information or action on my side.
We are still having the issue.

Thank you

@mcarlin I couldn’t repro your scenario. Could you share your sg_config.yml file?

Did you create different spaces? Indeed, we only have the issue when a dashboard is located in both a Kibana space and a tenant.

Here is our sg_config.yml:

sg_config:
  dynamic:
    filtered_alias_mode: "warn"
    disable_rest_auth: false
    disable_intertransport_auth: false
    respect_request_indices_options: false
    kibana:
      multitenancy_enabled: true
      server_username: "kibanaserver"
      index: ".kibana"
      rbac_enabled: false
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: true
        internalProxies: ".*"
        remoteIpHeader: "X-Forwarded-For"
    authc:
      openid_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          challenge: false
          type: "openid"
          config:
            roles_key: "profile"
            openid_connect_url: "https://our_url/.well-known/openid-configuration"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Authenticate via OpenId Connect"
        skip_users: []
      jwt_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 4
        http_authenticator:
          challenge: false
          type: "jwt"
          config:
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            jwt_header: "Authorization"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Authenticate via Json Web Token"
        skip_users: []
      ldap:
        http_enabled: false
        transport_enabled: false
        order: 5
        http_authenticator:
          challenge: false
          type: "basic"
          config: {}
        authentication_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
            - "localhost:8389"
            userbase: "ou=people,dc=example,dc=com"
            usersearch: "(sAMAccountName={0})"
        description: "Authenticate via LDAP or Active Directory"
        skip_users: []
      basic_internal_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          challenge: false
          type: "basic"
          config: {}
        authentication_backend:
          type: "intern"
          config: {}
        description: "Authenticate via HTTP Basic against internal users database"
        skip_users: []
      proxy_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 0
        http_authenticator:
          challenge: false
          type: "proxy"
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Authenticate via proxy"
        skip_users: []
      clientcert_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          challenge: false
          type: "clientcert"
          config:
            username_attribute: "cn"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Authenticate via SSL client certificates"
        skip_users: []
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          challenge: true
          type: "kerberos"
          config:
            krb_debug: false
            strip_realm_from_principal: true
        authentication_backend:
          type: "noop"
          config: {}
        skip_users: []
    authz:
      roles_from_another_ldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config: {}
        description: "Authorize via another Active Directory"
        skipped_users: []
      roles_from_myldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
            - "localhost:8389"
            rolebase: "ou=groups,dc=example,dc=com"
            rolesearch: "(member={0})"
            userrolename: "disabled"
            rolename: "cn"
            resolve_nested_roles: true
            userbase: "ou=people,dc=example,dc=com"
            usersearch: "(uid={0})"
        description: "Authorize via LDAP or Active Directory"
        skipped_users: []
    auth_failure_listeners: {}
    do_not_fail_on_forbidden: true
    multi_rolespan_enabled: true
    hosts_resolver_mode: "ip-only"
    do_not_fail_on_forbidden_empty: false
_sg_meta:
  type: "config"
  config_version: 2

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.