Multitenancy ELK 7.8

Elasticsearch version:
7.8.0 + Search Guard plugin 43.0.0
Server OS version:
CentOS 7
Kibana version (if relevant):
Kibana 7.8.0 + Kibana Search Guard plugin 43.0.0

There is a problem with creating tenants, no matter which method I use: the tenants API, sgadmin, or the GUI.
When I create a new tenant, it is always visible in Kibana, but sometimes I can't switch to it. Sometimes it works as expected, sometimes not. When I can't switch to a tenant, I am moved to the Global tenant instead.
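For example, creating a tenant from Dev Tools via the Search Guard REST API (the tenant name and description here are just placeholders):

    PUT _searchguard/api/tenants/test1
    {
      "description": "my test tenant"
    }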

My sg_config.yml (excerpt):

    filtered_alias_mode: "warn"
    do_not_fail_on_forbidden: true
    do_not_fail_on_forbidden_empty: false
    kibana:
        multitenancy_enabled: true
        server_username: "kibanaserver"
        index: ".kibana"


My kibana.yml (excerpt):

    searchguard.accountinfo.enabled: false
    searchguard.cookie:
        #password: "<some_string>"
        #secure: true
        #ttl: 0
    searchguard.session:
        #ttl: 0
        keepalive: true
    searchguard.multitenancy:
        enabled: true
        tenants:
            enable_global: true
            enable_private: false
            # preferred:
            #     - "*"
            #     - private
    searchguard.basicauth:
        forbidden_usernames: ["kibanaro", "kibanaserver", "logstash", "readall", "remote_monitoring", "snapshotrestore"]
        login:
            #brandimage: <some_string>


Sometimes it works as expected, sometimes not.

Did you see any error in the Elasticsearch or Kibana log when this happened? Any error in the browser console log? How many tenants do you have?

Hi,
I have around 110 tenants. I didn't find anything special in the logs, but I will take a closer look and try to send some. There are no errors in the browser console.
It seems that the problem appears when I use the tenant API from sgadmin or Dev Tools.
Sometimes I saw in the Kibana logs that a user tried to switch to SGS_GLOBAL_TENANT even though they don't have privileges for the Global tenant.
Try creating, for example, 50 tenants with sgadmin and switching between them as a user with the SGS_ALL_ACCESS role.
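To reproduce in bulk, I run something along these lines against the Search Guard REST API (a sketch; the host, port, and admin credentials are placeholders for my environment):

    # create 50 test tenants via the Search Guard REST API
    for i in $(seq 1 50); do
      curl -sk -u admin:admin \
        -H 'Content-Type: application/json' \
        -X PUT "https://localhost:9200/_searchguard/api/tenants/test$i" \
        -d '{"description": "test tenant"}'
    done

Then, in Kibana, log in as a user with the SGS_ALL_ACCESS role and switch through the new tenants from the tenant menu.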

I have another suspicion: something is wrong with switching between tenants in the Kibana GUI.
For example, when I create two new tenants, test1 and test2, I can't switch between them.
I am in tenant test1 and try to switch to test2, but I am still in tenant test1. When I select tenant test2 and then log out and back in, the selected tenant is test2.

Hi. I tested more and saw similar behaviour. We have to look into it further. I'll get back to you soon. Thanks for reporting this.

Ok, thanks a lot.
It's a really important feature for me :slight_smile:

Hi, any update on this issue? It's the main blocker for our migration from version 6 to version 7.

Hi @Patryk. It is in the high priority queue. I’ll text you when it is ready.

Hi @srgbnd,
any update on this issue?

Hi @marwojt. It is in the next release (October) queue. If the fix is ready sooner, we will do a build for you before the release.

What minor version do you use, 7.8.1?

We use version 7.8.0. Thanks a lot @srgbnd

Hi @marwojt,

Would it be possible for you to test with the following setting in kibana.yml?

    searchguard.auth.disable_authinfo_cache: true

Does that make any difference?

Best Regards

Hi @Mike,
I have tested this setting and it works. I can change tenants now. Thanks a lot.

But I think it's only a temporary solution. Any info about the date of this release?

Hi @Mike, @srgbnd,
I've seen that you released a new SG version for Elastic 7.9.x.
Will you make a new package for Elastic 7.8.x?

@marwojt Hi. This is a different minor version. Kibana usually introduces breaking changes in the new minor versions. Thus porting back the patch might not be straightforward.

Why do you want to port the cache fix back? Didn’t the option searchguard.auth.disable_authinfo_cache: true work for you?

Hi @srgbnd,
it works, but I am worried about performance. We have a lot of users.

Do you experience any performance issues now?

Hi @Patryk and @marwojt,

Actually, this will be the default and only behaviour from now on (i.e., starting with the latest release). We removed the caching on the Kibana side completely, so the setting you used will be deprecated.

The caching didn’t really make much sense at this point, and it led to a bunch of negative side effects.
In short: no need to worry about this being a temporary solution, it is permanent.

But as @srgbnd mentioned - please let us know if this somehow results in performance issues!

Best regards,