I have 3 master-only nodes sitting behind a load balancer in AWS.
If I run:

```
curl -k -E admin.pem --key admin.key https://<aws_public_ip>:9200/_cluster/health?pretty
```

I get the normal health response. If I run:

```
curl -k -E admin.pem --key admin.key https://<route53_cname>/_cluster/health?pretty
```

I get:

```
503 Service Temporarily Unavailable
```
```
username_attribute: cn # optional; if omitted the DN becomes the username
```
I have read the docs for each of the config sections, but I find it very hard to relate that to my specific system, so I’m mostly using trial and error.
What I want is for SG to manage basic auth for me. I want everything (curl, Kibana and Logstash) to talk to the load balancer, which then just forwards the requests to one of the master nodes.
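For reference, this is my current guess at the `authc` block for that setup, pieced together from the docs — the domain name `basic_internal_auth_domain` and the comments are my own assumptions, not a tested config:

```yaml
searchguard:
  dynamic:
    authc:
      basic_internal_auth_domain:   # name is arbitrary, chosen by me
        http_enabled: true
        order: 1
        http_authenticator:
          type: basic               # HTTP Basic auth, as sent by curl/Kibana/Logstash
          challenge: true
        authentication_backend:
          type: intern              # users looked up in the internal user database
```

Is that roughly the shape of what a load-balanced basic-auth setup should look like?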
I am finding the docs lacking in examples.
All these authc declarations were set in the default file, but I don't understand which one applies to which use case, so I left them all in.
- Examples of the calls that would be serviced by each section may help
I don’t think I need an authz section for my case, but maybe I do, and if so I have no idea what the config would be.
Is `intern` the correct type to use? They are all `noop` by default.
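For completeness, this is the sort of `authz` block I imagine, if one is needed at all — again only a guess from the docs, with the domain name invented by me:

```yaml
authz:
  roles_from_internal:              # name is my own invention
    http_enabled: true
    authorization_backend:
      type: intern                  # roles from the internal user database; noop presumably skips this step
```

If `noop` is fine when roles come from the internal users file, that would be useful to know.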
I am finding it very difficult to get this going and from reading many of the forum posts I don’t think I’m the only one.
I think the simplest option would be a debug mode where SG acted as though it wasn't installed, but returned the full path a request takes through its config alongside the result you would get.