Kibana cannot connect to Elasticsearch: "No trusted proxies" exception

I'm really confused by the proxy settings. Here's the exception I get in Kibana:

Fatal Error

Kibana: Unknown error while connecting to Elasticsearch

    Error: UnknownHostException[No trusted proxies]
at respond (https://127.0.0.1:9506/index.js?_b=5930:81566:15)
at checkRespForFailure (https://127.0.0.1:9506/index.js?_b=5930:81534:7)
at https://127.0.0.1:9506/index.js?_b=5930:80203:7
at wrappedErrback (https://127.0.0.1:9506/index.js?_b=5930:20882:78)
at wrappedErrback (https://127.0.0.1:9506/index.js?_b=5930:20882:78)
at wrappedErrback (https://127.0.0.1:9506/index.js?_b=5930:20882:78)
at https://127.0.0.1:9506/index.js?_b=5930:21015:76
at Scope.$eval (https://127.0.0.1:9506/index.js?_b=5930:22002:28)
at Scope.$digest (https://127.0.0.1:9506/index.js?_b=5930:21814:31)
at Scope.$apply (https://127.0.0.1:9506/index.js?_b=5930:22106:24)

Kibana and Elasticsearch are in different Docker containers.

~$ sudo docker ps
CONTAINER ID   IMAGE                      COMMAND                CREATED         STATUS         PORTS                                                                                            NAMES
2ebe74412fff   data_manager/elastic:v2    "/usr/bin/supervisor   7 minutes ago   Up 7 minutes   0.0.0.0:9504->9504/tcp, 0.0.0.0:9505->9505/tcp                                                   config_elastic0_1
6c1b0ae19fd6   data_manager/nginx:v1      "nginx -g 'daemon of   7 minutes ago   Up 7 minutes   80/tcp, 443/tcp, 0.0.0.0:9506->9550/tcp                                                          config_gatewayNginx_1
0444d4152255   data_manager/kafka:v1      "/usr/bin/supervisor   7 minutes ago   Up 7 minutes   0.0.0.0:9500->9500/tcp, 0.0.0.0:9501->9501/tcp, 0.0.0.0:9502->9502/tcp, 0.0.0.0:9503->9503/tcp   config_kafka0_1
39c628f175a1   data_manager/auth:v1       "/usr/share/google_a   7 minutes ago   Up 7 minutes                                                                                                    config_gatewayAuth_1
3d61751a00f4   data_manager/kibana:v1     "/usr/share/kibana/k   8 minutes ago   Up 8 minutes                                                                                                    config_gatewayKibana_1
ceb53bffbc57   data_manager/logstash:v1   "/usr/share/logstash   8 minutes ago   Up 8 minutes                                                                                                    config_gatewayLogstash_1

Kibana's IP and ports are not exposed because users have to access it via the nginx and auth proxies. The nginx proxy applies TLS, and the auth proxy uses google_auth_proxy, which authenticates via OAuth2 and sets a cookie. After that the user can access Kibana. Kibana uses a username/password defined in kibana.yml to connect to Elasticsearch.

There isn't any proxy between Kibana and Elasticsearch, so what am I doing wrong? What does "No trusted proxies" mean? Which proxy is it referring to? Or what extra settings do I have to set in Elasticsearch to allow Kibana's connection?

Can you please share your Search Guard config (after stripping sensitive parts)?


Here's the Search Guard setting:

#############################################################################################
# SEARCH GUARD
#############################################################################################
searchguard.enabled: true
searchguard.key_path: /tmp/dldm/elasticsearchConfig/searchguard_node.key
searchguard.check_for_root: false
searchguard.allow_all_from_loopback: true
searchguard.authentication.authentication_backend.impl: com.floragunn.searchguard.authentication.backend.simple.SettingsBasedAuthenticationBackend
searchguard.authentication.authentication_backend.cache.enable: true
searchguard.authentication.authorizer.impl: com.floragunn.searchguard.authorization.simple.SettingsBasedAuthorizator
searchguard.authentication.authorizer.cache.enable: true
searchguard.authentication.http_authenticator.impl: com.floragunn.searchguard.authentication.http.basic.HTTPBasicAuthenticator
searchguard.authentication.settingsdb.user.admin: password
searchguard.authentication.settingsdb.user.marketing: password
searchguard.authentication.authorization.settingsdb.roles.admin: ["admin"]
searchguard.authentication.authorization.settingsdb.roles.marketing: ["marketing"]
searchguard.flsfilter.names: ["marketing"]
searchguard.flsfilter.marketig.source_excludes: ["username","email"]

Here's the request that caused the problem, where https://127.0.0.1:9506 is the access point for Kibana:

General
    Remote Address: 127.0.0.1:9506
    Request URL: https://127.0.0.1:9506/elasticsearch/
    Request Method: GET
    Status Code: 500 Internal Server Error

Response Headers
    Connection: keep-alive
    Content-Length: 65
    Content-Type: application/json; charset=UTF-8
    Date: Fri, 05 Jun 2015 10:30:48 GMT
    Server: nginx/1.9.0
    X-App-Name: kibana

Request Headers
    Accept: application/json, text/plain, */*
    Accept-Encoding: gzip, deflate, sdch
    Accept-Language: en-US,en;q=0.8
    Authorization: Basic YWRtaW46dGVjaF9kdW5nZW9u
    Connection: keep-alive
    Cookie: _oauthproxy=bGluZ3hpYW8ueGlhQGRyYWdvbmxhdy5jb20uaGs=|1433500084|r5f4GjGLA-hwTTm-iO1Z5I6QEOw=
    Host: 127.0.0.1:9506
    Referer: https://127.0.0.1:9506/
    User-Agent: Mozilla/5.0 (X11; Linux i686 (x86_64)) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.81 Safari/537.36


It seems there is an X-Forwarded-For header present (set by your proxy). Look here:

#############################################################################################
# X-Forwarded-For (XFF) header
#############################################################################################
# If you have a http proxy in front of elasticsearch you have to configure this options to handle XFF properly
#searchguard.http.xforwardedfor.header: X-Forwarded-For
#searchguard.http.xforwardedfor.trustedproxies: null
#searchguard.http.xforwardedfor.enforce: false

Either set searchguard.http.xforwardedfor.trustedproxies, like ["192.168.1.1","192.168.1.2","193.54.55.21"], or disable XFF by setting searchguard.http.xforwardedfor.header: null (it's enabled by default).

See also http://en.wikipedia.org/wiki/X-Forwarded-For
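For a Docker setup like the one in this thread, the two alternatives might look like this in elasticsearch.yml. This is only a sketch: the IP addresses below are placeholders, and with Docker you need the source addresses from which proxy connections actually reach Elasticsearch (usually the container or bridge addresses, not the `0.0.0.0` published host ports).

```yaml
# Option 1: keep XFF handling and whitelist the proxies.
# The IPs are placeholders; use the addresses Elasticsearch actually sees
# as the source of proxy connections (check its logs to find them).
searchguard.http.xforwardedfor.header: X-Forwarded-For
searchguard.http.xforwardedfor.trustedproxies: ["172.17.0.2","172.17.0.3"]
searchguard.http.xforwardedfor.enforce: false

# Option 2: disable XFF handling entirely instead.
#searchguard.http.xforwardedfor.header: null
```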


I'm really confused by the proxy settings. If I try to disable XFF like this:

searchguard.http.xforwardedfor.header: null
#searchguard.http.xforwardedfor.trustedproxies: null
#searchguard.http.xforwardedfor.enforce: true

I get this exception: {"error":"UnknownHostException[No trusted proxies]","status":500}

I also tested other options:

searchguard.http.xforwardedfor.header: X-Forwarded-For
searchguard.http.xforwardedfor.trustedproxies: ["10.0.0.171","10.0.0.172","10.0.0.173"]
searchguard.http.xforwardedfor.enforce: false

The result is always the same: {"error":"UnknownHostException[Not all proxies are trusted]","status":500}

Logical scheme:
Client request from browser -> proxy_nginx1 (10.0.0.172:80) -> ES (10.0.0.171:9200).
Definitions:
elastic.somedomain.com = proxy_nginx1 = 10.0.0.172

My configs:
Kibana config.js:
...
    elasticsearch: {server: "http://elastic.somedomain.com:80", withCredentials: true},
...
_config_end

ElasticSearch config:

http.cors.enabled: true
http.cors.allow-origin: "/.*/"
http.cors.allow-credentials: true
searchguard.enabled: true
searchguard.key_path: /usr/share/elasticsearch/plugins/search-guard/
searchguard.rewrite_get_as_search: true
searchguard.config_index_name: searchguard
searchguard.auditlog.enabled: true
searchguard.check_for_root: true
searchguard.allow_all_from_loopback: false
searchguard.http.xforwardedfor.header: X-Forwarded-For
searchguard.http.xforwardedfor.trustedproxies: ["10.0.0.171","10.0.0.172","10.0.0.173"]
searchguard.authentication.authentication_backend.impl: com.floragunn.searchguard.authentication.backend.ldap.LDAPAuthenticationBackend
searchguard.authentication.authorizer.impl: com.floragunn.searchguard.authorization.ldap.LDAPAuthorizator
searchguard.authentication.ldap.host: ["ldap.somedomain.com:636"]
searchguard.authentication.ldap.ldaps.ssl.enabled: true
searchguard.authentication.ldap.ldaps.starttls.enabled: true
searchguard.authentication.ldap.ldaps.truststore_type: JKS
searchguard.authentication.ldap.ldaps.truststore_filepath: /etc/elasticsearch/logs_master.jks
searchguard.authentication.ldap.ldaps.truststore_password: XXXXXX
searchguard.authentication.ldap.bind_dn: cn=readuser,dc=somedomain,dc=com
searchguard.authentication.ldap.password: XXXXXYYYYZZZZ
searchguard.authentication.ldap.userbase: "ou=people,dc=somedomain,dc=com"
searchguard.authentication.ldap.usersearch: (uid={0})
searchguard.authentication.ldap.username_attribute: uid
searchguard.authentication.authorization.ldap.rolebase: "ou=groups,dc=somedomain,dc=com"
searchguard.authentication.authorization.ldap.rolesearch: (member={0})
searchguard.authentication.authorization.ldap.userroleattribute: memberuid
searchguard.authentication.authorization.ldap.userrolename: memberOf
searchguard.authentication.authorization.ldap.rolename: cn
searchguard.authentication.authorization.ldap.resolve_nested_roles: true
searchguard.actionrequestfilter.names: ["readonly"]
searchguard.actionrequestfilter.readonly.allowed_actions: ["indices:data/read/*",
searchguard.actionrequestfilter.readonly.forbidden_actions: ["cluster:admin*", "indices:data/write*"]

My ACL rules:
{
    "acl": [
    {
        "__Comment__": "By default no filters are executed and no filters are by-passed. In such a case an exception is thrown and access will be denied.",
        "filters_bypass": [],
        "filters_execute": []
    },
    {
        "__Comment__": "For the elastic_admin role all filters are bypassed (so none will be executed) for all indices. This means unrestricted access for this role.",
        "roles": ["elastic_admin"],
        "indices": ["*"],
        "filters_bypass": ["*"],
        "filters_execute": []
    },
    {
        "__Comment__": "For authenticated users with role 'elastic_user' who access the index '.kibana' and 'logstash-*', the filter readonly will be executed.",
        "roles": ["elastic_user"],
        "indices": ["*kibana*","logstash*"],
        "filters_bypass": [],
        "filters_execute": ["*"]
    },
    {
        "__Comment__": "For authenticated users with role 'elastic_kibana' who access the index '.kibana' and 'logstash-*', all filters are bypassed (so none will be executed).",
        "roles": ["elastic_kibana"],
        "indices": ["*kibana*","logstash*"],
        "filters_bypass": ["*"],
        "filters_execute": []
    },
    {
        "__Comment__": "For authenticated users with role 'elastic_fluentd' who access the index 'logstash-*', all filters are bypassed (so none will be executed).",
        "roles": ["elastic_fluentd"],
        "indices": ["logstash*"],
        "filters_bypass": ["*"],
        "filters_execute": []
    }
    ]
}

Kibana version: 3.1.2
ElasticSearch version: 1.5.2


Thanks for any kind assistance.

Please try

searchguard.http.xforwardedfor.header: DUMMY

to disable XFF (normally setting it to null should do the trick, but that also seems not to be working for me).
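As a sketch, the workaround simply points Search Guard at a header name that no client or proxy ever sends, so the XFF resolution code never triggers; "DUMMY" is an arbitrary name, not a special value.

```yaml
# Workaround sketch: "DUMMY" is an arbitrary header name that never occurs
# in real requests, so X-Forwarded-For handling is effectively disabled.
searchguard.http.xforwardedfor.header: DUMMY
# Leave the other XFF settings at their defaults:
#searchguard.http.xforwardedfor.trustedproxies: null
#searchguard.http.xforwardedfor.enforce: false
```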


--
You received this message because you are subscribed to the Google Groups "Search Guard" group.