Kibana cookie max length

Hello!

I hope you can help us. We are using Kibana multitenant access with Search Guard + LDAP auth, and we found a rare issue with tenant selection.
Some users can't change tenants: Kibana says the new tenant is selected, but they need to log out and log in again to actually see it.
Our guess is that it could be a problem when a user has a lot of groups in LDAP: Search Guard tries to put them into the user cookie (not sure, maybe the searchguard_storage cookie?), and if the cookie gets too big the browser can't handle it?

Does Search Guard already handle the case where the cookie size exceeds 4096 bytes (that is usually the limit, as far as I know), maybe by splitting the data into multiple cookies or something else?

Also, how can I be sure that this is the root cause? Is there a method to test it, or something in the logs to check?

We got this tenant change problem after upgrading to Elasticsearch/Kibana version 7.7.1 and Search Guard 42.0.0, but it could be unrelated to the upgrade, since only a few users are affected; maybe they just noticed it now.

Thank you very much!

I can confirm that the problem is the number of groups Search Guard collects for a user. I changed the LDAP authz rolesearch to collect only Kibana-related groups for the user (most users are in other groups too), and even with that modification it only works if the user has no more than 35-40 Kibana groups (tenants). The same user with the same Kibana groups, but without the filter that keeps only the Kibana groups, doesn't work, because with the extra groups the data no longer fits in the cookie. Of course the number of groups that still works depends on the group name lengths.
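For reference, this is roughly the shape of the rolesearch change in sg_config.yml. The host, bind DN, base DNs and the kibana_* group name prefix are placeholders standing in for our environment, not the exact values:

sg_config.yml

authz:
  roles_from_ldap:
    http_enabled: true
    transport_enabled: true
    authorization_backend:
      type: ldap
      config:
        hosts:
          - ldap.example.com:389
        bind_dn: 'cn=sg-bind,ou=service,dc=example,dc=com'
        password: 'changeit'
        rolebase: 'ou=groups,dc=example,dc=com'
        # collect only Kibana-related groups instead of every
        # group the user is a member of ({0} is the user DN)
        rolesearch: '(&(cn=kibana_*)(member={0}))'
        rolename: cn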

Please consider what the correct way to handle this limitation would be. I'm no expert on cookies and not sure why the cookie stores all groups instead of only the selected one, but maybe split it into separate cookies if it gets too big?

Thanks!

SG can’t split the cookie.

What is the max number of tenants per user that you have?
You have one tenant per LDAP group, don't you?
What is the use case for multitenancy in your setup? Do you need it? I mean, SG LDAP authentication and authorization depend on the fetched LDAP groups that you map to users, and you can do this without multitenancy.

Currently the max number of tenants for one user is 43.
We have 2 groups per tenant, one for read-only and the other for read-write. Users have one of them, not both, so the user who has 43 tenants is also in 43 groups.
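Roughly, a tenant's two roles and their LDAP group mappings look like this (the names are made up, the layout follows the standard Search Guard 7 roles format):

sg_roles.yml

app1_tenant_rw:
  tenant_permissions:
    - tenant_patterns:
        - 'app1'
      allowed_actions:
        - 'SGS_KIBANA_ALL_WRITE'

app1_tenant_ro:
  tenant_permissions:
    - tenant_patterns:
        - 'app1'
      allowed_actions:
        - 'SGS_KIBANA_ALL_READ'

sg_roles_mapping.yml

app1_tenant_rw:
  backend_roles:
    - 'kibana_app1_rw'

app1_tenant_ro:
  backend_roles:
    - 'kibana_app1_ro'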

We have around 150 tenants and 300+ users, and a lot of them have access to multiple tenants, because we store logs in Elasticsearch from multiple sources. Log owners have the rw role, since they know best how to filter and display their logs, and the others with ro access can read them.
To debug a complex problem that affects multiple applications with multiple log sources, they need access to multiple tenants (with read-only or read-write rights).

Ok, I see. We have 2 problems here:

  1. The cookie size.
  2. The cookie cache. That's why your users have to log out and log in again to see the tenant.

We need to investigate. I’ll create a bug for this.


Hi. We added an option to disable the cookie cache in the latest release. Could you please try it?

kibana.yml

searchguard.auth.disable_authinfo_cache: true

We just tested that parameter; it seems to work even when a user is in 150 tenants.

What negative impact could this cause? I guess there will be more LDAP requests when a user logs in or tries to change tenant, which could cause some extra load and slower auth, but is there anything else we should consider?

Thank you very much!

Very nice, thanks for testing!

Yes, instead of returning that information from the browser cookie, we request it from the backend when needed. This happens a few times, unfortunately not only on login or when changing tenants, so there will be more requests. Admittedly, I haven’t tested with your setup, but we’ve tested with making 10-15 times as many requests per page load, and it worked fine.

Other than that, I don’t think there’s anything else you need to consider, but please let us know if you notice any negative effects of this setting. We have some more improvements in the backlog, but I didn’t want to rush anything…

Best Regards
Mike

I forgot to mention - there’s also a cache on the backend side, so I think you should be good :+1:

Thanks for the intel and the quick fix.
We tested it separately, but it seems good, so we're hoping for the best.
If we have any problem with it, we will reply here.

Thank you very much!
