Proxy authentication with 6.8.2

The big picture:

Nine months ago we were granted an academic license for Search Guard and began a pilot project. This has gone well and the academic organization providing funding recently noticed that there is a market for what we do. The pilot system is going to be split into the existing academic effort and a commercial effort that will require paid licensing.

The system is a mix of Ubuntu 18.04 and Debian 9.9 that is going to converge on Debian 10 as time allows. We have OpenJDK 8 (8u222-b10-1ubuntu1~18.04.1) on two systems and OpenJDK 11 (11.0.4+11-1~deb10u1) on the rest. This will converge on OpenJDK 11 over the next month. We employ basic authentication and that is unlikely to change in the next year.

Our system is behind the Cloudflare CDN and this has worked well for us. We formerly had an Apache reverse proxy and someone familiar with that system. That person got busy and all the examples seem to be nginx, so we switched to that a few days ago.

Topology-wise we have the following:

nginx proxy at 192.168.18.123 that handles all incoming traffic on 80.
kibana instance on 192.168.18.62 that is production.
kibana instance on 192.168.18.63 that can be used for testing.

nginx config is like so:

 server {
            listen 80;
            server_name ls.netwarsystem.com;
            access_log   /var/log/nginx/ls.access.log;
            location / {
                    proxy_pass      http://192.168.18.62:5601;
                    }
    }
    server {
            listen 80;
            server_name zeno.netwarsystem.com;
            access_log   /var/log/nginx/zeno.access.log;
            location / {
                    proxy_pass      http://192.168.18.63:5601;
                    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                    proxy_set_header x-proxy-user admin;
                    proxy_set_header x-proxy-roles admin;
                    }
    }

Current config:

sg_config.yml

http:
  anonymous_auth_enabled: true
  xff:
    enabled: true
    # being generous here, should be just 123 on the list, right?
    internalProxies: '192.168.18.123|192.168.18.62|192.168.18.63|127.0.0.1'
    #internalProxies: '.*' # trust all internal proxies, regex pattern
    remoteIpHeader: 'x-forwarded-for'
    proxiesHeader: 'x-forwarded-by'
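
For the proxy headers to actually authenticate anyone, sg_config.yml also needs an authentication domain of type `proxy`. The fragment below is a sketch based on the Search Guard documentation, not taken from the poster's config — the domain name `proxy_auth_domain` is arbitrary, and the header names are matched to the ones nginx sets above:

```yaml
authc:
  proxy_auth_domain:
    http_enabled: true
    transport_enabled: true
    order: 0
    http_authenticator:
      type: proxy
      challenge: false
      config:
        # must match the headers injected by the nginx proxy
        user_header: "x-proxy-user"
        roles_header: "x-proxy-roles"
    authentication_backend:
      # the proxy is trusted, so no further credential check
      type: noop
```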

kibana.yml

searchguard.auth.anonymous_auth_enabled: true
elasticsearch.requestHeadersWhitelist: ["Authorization", "sgtenant", "x-proxy-user", "x-proxy-roles"]

The problem(s):

And after that, things get a bit fuzzy. There are subtle differences between the 6.x configurations and 7.x - some attributes in 7.x have an underscore in their names, but this was not the case in 6.x. The goal here is that multiple remote web sites can embed a visualization in an iframe and have it accessible. The nginx proxy will have to add whatever is needed for Search Guard to determine appropriate permissions and serve up the content. This seems to be a bit different than how many users employ proxy auth.

I spent some time on this last spring, finally giving up and moving on to other tasks. Now we are at a point where this has to be solved. I am going to give it one more focused try, and if I can't knock it off in the next week, I am going to be looking for someone who can consult an hour or two and make this do what we need.

There have been changes to the SG configuration syntax, yes, but sg_config.yml in particular stayed much the same.

The only new property that has an underscore would be:

_sg_meta:
  type: "config"
  config_version: 2

Which is really just an identifier regarding what type of config this file contains.

You can also have a look at the upgrade guide from 6 to 7. When you upgrade to 7, the configs are migrated to the new format automatically under the hood. There is also a new feature in sgadmin where you can migrate your existing config files from 6 to 7.
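
A sketch of what that sgadmin migration invocation looks like — the paths and certificate filenames here are placeholders for your own deployment, not known values from this thread:

```shell
# Migrate v6 config files in ./sgconfig to the v7 format
# (certificate/keystore arguments are whatever your cluster already uses)
./sgadmin.sh -migrate ./sgconfig \
  -ts truststore.jks -ks kirk-keystore.jks \
  -icl -nhnv
```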

So, basically the upgrade should be really straightforward.

Regarding the iframe question: the question here is, on what data should the permissions be applied? Is it the URL of the website that embeds the dashboard? Some kind of security token like a JWT?

One idea is that nginx sets the proxy-user and proxy-roles dynamically, based on some rules that apply to your use-case. Another option is that nginx dynamically adds HTTP Basic Authentication headers. Since you already have proxy authentication up and running, the first option seems to be the easiest.
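
To illustrate the first option: nginx can derive the role headers from the request itself, for example from the `Referer` of the embedding site. This is a hypothetical sketch — the `map` keys, the `embed` user, and the `embed_viewer` role are invented for illustration and would need matching entries in Search Guard's role configuration:

```nginx
# Map the embedding site (via Referer) to a Search Guard role;
# unknown referers get no role and thus no access.
map $http_referer $embed_roles {
    default                       "";
    "~nationalobserver\.com"      "embed_viewer";
}

server {
    listen 80;
    server_name zeno.netwarsystem.com;
    location / {
        proxy_pass http://192.168.18.63:5601;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header x-proxy-user  embed;
        proxy_set_header x-proxy-roles $embed_roles;
    }
}
```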

If you could elaborate a bit more on your concrete use-case and on what basis the requests should be authenticated, I can probably suggest some more specific configuration settings and hints.

For licensing and consulting, please drop us an email via info@search-guard.com. It will be forwarded to the right people on the team accordingly.

Search Guard does fancy things with load balancing and the Elasticsearch port 9200 service itself. All I want to proxy is Kibana on 5601, and all that is working is a basic nginx reverse proxy. Those using the system right now must provide a username/password to get into it.

Understanding the use case might make this easier to explain - we stream Twitter content for federal level legislators in the FVEYS countries, some of the larger NATO members, and Mexico. We have a dozen analysts that have various specialties who work in a horizontal fashion across the various indices. Our transition from academic to commercial involves exposing both vertical aspects, the activity for a single country, and horizontal aspects, such as influence operations or hate speech.

A concrete example - National Observer is a Canadian publication with a single person who has access to our system. They would like to take the things they have uncovered with the help of our analysts and embed them in their web site to enrich an article. There is only one account at work here - their single staffer using Kibana, and that staff account’s credentials being passed by proxy would do what we need.

I can see where we might need more complex authorization schemes in the future, but for right now if I could permit three or four web sites to embed content by using the same credentials as the Kibana account that created the visualizations, that would be an acceptable proof of concept for our funders.

Built myself an Elasticsearch/Kibana 7.3.1 VM to isolate the authorization problems from everything else. Feels like this is a bit closer to working, but still no joy. I wrote it up and posted to Elastic’s forum.

I dropped you an email, let’s take the consulting and licensing discussion off channel.
