Environment:
- CentOS Linux release 7.6.1810 (Core)
- Elasticsearch: elasticsearch-oss:6.7.1 (docker image)
- Filebeat: filebeat-6.7.1-x86_64.rpm
- Search Guard: 6.7.1-24.3
I am running the OSS flavor of ES as a Docker image (no X-Pack installed). The Search Guard plugin installed successfully, and I am using the example sgconfig from the search-guard-6 project, which loaded without errors via the sgadmin.sh script. I have also installed Filebeat from the RPM and configured it to use SSL and the 'logstash' user/password from the loaded sgconfig.
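For reference, the relevant part of my filebeat.yml looks roughly like this (the CA path and host are placeholders, not my actual values):

```yaml
output.elasticsearch:
  hosts: ["https://xxxxxxxxx:9200"]
  protocol: "https"
  username: "logstash"
  password: "logstash"
  # CA that signed the Elasticsearch node certificate (illustrative path)
  ssl.certificate_authorities: ["/etc/filebeat/root-ca.pem"]
```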
Filebeat pings Elasticsearch successfully, however, it fails when checking Elasticsearch license information:
2019-04-09T17:25:30.027Z DEBUG [elasticsearch] elasticsearch/client.go:715 ES Ping(url=https://xxxxxxxxx:9200)
2019-04-09T17:25:30.103Z DEBUG [elasticsearch] elasticsearch/client.go:738 Ping status code: 200
2019-04-09T17:25:30.103Z INFO elasticsearch/client.go:739 Attempting to connect to Elasticsearch version 6.7.1
2019-04-09T17:25:30.103Z DEBUG [elasticsearch] elasticsearch/client.go:757 GET https://xxxxxxxxx:9200/_xpack?human=false <nil>
2019-04-09T17:25:31.830Z ERROR pipeline/output.go:100 Failed to connect to backoff(elasticsearch(https://xxxxxxxxx:9200)): Connection marked as failed because the onConnect callback failed: cannot retrieve the elasticsearch license: error from server, response code: 500
This error appears to keep any of the bulk requests from being sent. From what I can tell, Filebeat checks for licensing and falls back to the OSS license if none is found. I verified the same behavior using curl as the 'logstash' user:
# curl -kv -XGET -u logstash:logstash "https://localhost:9200/_xpack?human=false&include_type_name=true"
* About to connect() to localhost port 9200 (#0)
* Trying ::1...
* Connected to localhost (::1) port 9200 (#0)
* Initializing NSS with certpath: sql:/etc/pki/nssdb
* skipping SSL peer certificate verification
* NSS: client certificate not found (nickname not specified)
* SSL connection using TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
* Server certificate:
* subject: CN= xxxxxxxxx,OU= xxxxxxxxx,O= xxxxxxxxx,L= xxxxxxxxx,ST= xxxxxxxxx,C=US
* start date: Apr 09 14:52:53 2019 GMT
* expire date: Apr 06 14:52:53 2029 GMT
* common name: xxxxxxxxx
* issuer: CN= xxxxxxxxx,OU=n xxxxxxxxx BA,O= xxxxxxxxx,L= xxxxxxxxx,ST= xxxxxxxxx,C=US
* Server auth using Basic with user 'logstash'
> GET /_xpack?human=false&include_type_name=true HTTP/1.1
> Authorization: Basic bG9nc3Rhc2g6bG9nc3Rhc2g=
> User-Agent: curl/7.29.0
> Host: localhost:9200
> Accept: */*
>
< HTTP/1.1 500 Internal Server Error
< content-type: application/json; charset=UTF-8
< content-length: 197
<
* Connection #0 to host localhost left intact
{"error":{"root_cause":[{"type":"security_exception","reason":"Unexpected exception indices:admin/get"}],"type":"security_exception","reason":"Unexpected exception indices:admin/get"},"status":500}
After seeing the response above, I added 'indices:admin/get' for the _xpack index to the logstash role, but I get the same errors with that configuration as well.
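Roughly, the change I made to sg_roles.yml was along these lines (paraphrased from memory; the action group names come from the Search Guard 6 example config, and the exact role name may differ in your setup):

```yaml
sg_logstash:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS
  indices:
    'logstash-*':
      '*':
        - CRUD
        - CREATE_INDEX
    # added in an attempt to satisfy the failing permission check
    '_xpack':
      '*':
        - indices:admin/get
```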
ES log shows:
2019-04-09 17:24:30.453Z ERROR [elasticsearch[esnode-xxxxxxxxx][http_server_worker][T#2]] com.floragunn.searchguard.filter.SearchGuardFilter - Unexpected exception [_xpack] InvalidIndexNameException[Invalid index name [_xpack], must not start with '_'.]
org.elasticsearch.indices.InvalidIndexNameException: Invalid index name [_xpack], must not start with '_'.
…
2019-04-09 17:24:30.463Z WARN [elasticsearch[esnode-xxxxxxxxx][http_server_worker][T#2]] rest.suppressed - path: /_xpack, params: {index=_xpack, human=false}
org.elasticsearch.ElasticsearchSecurityException: Unexpected exception indices:admin/get
Log is attached. Any ideas on what may be causing this and how to resolve the issue?
Thanks,
James
es.log (38.9 KB)