The error is here:
Aug 15 12:57:11 ELKD01 logstash[4638]: [2018-08-15T12:57:11,422][WARN ][logstash.outputs.elasticsearch] Error while performing sniffing {:error_message=>"Host name '172.31.53.24' does not match the certificate subject provided by the peer (CN=node-0.example.com, OU=node, O=node, L=test, DC=de)", :class=>"Manticore::
Host name '172.31.53.24' does not match the certificate subject provided by the peer (CN=node-0.example.com, OU=node, O=node, L=test, DC=de)"
This means that the host name you use to connect to your Elasticsearch node (here, the IP 172.31.53.24) does not match the hostname in the certificate. This check is called hostname verification and is part of TLS.
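To see which names a certificate is actually valid for, you can inspect it with openssl. The following is only a sketch: it builds a throwaway self-signed certificate that mimics the demo CN/SAN (it assumes openssl 1.1.1 or newer for the -addext and -ext options, and the /tmp file paths are made up):

```shell
# Create a throwaway self-signed certificate with a CN and SAN like the demo cert
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem \
  -subj "/CN=node-0.example.com" \
  -addext "subjectAltName=DNS:node-0.example.com,IP:127.0.0.1"

# Hostname verification compares the connection host against these fields
openssl x509 -in /tmp/demo-cert.pem -noout -subject -ext subjectAltName

# -checkhost / -checkip answer the exact question the TLS client asks
openssl x509 -in /tmp/demo-cert.pem -noout -checkhost node-0.example.com  # matches
openssl x509 -in /tmp/demo-cert.pem -noout -checkip 172.31.53.24          # does NOT match
```

Running the last command against a certificate that lacks the IP in its SAN reproduces exactly the kind of mismatch reported in the log above.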
Certificate validation and hostname verification are a recurring issue with Logstash. There are several tickets about this, some dating back to 2015, e.g.:
https://github.com/logstash-plugins/logstash-output-http/issues/21
“During SSL certification validation, logstash is doing a strict host name check.”
So if you want to validate the certificate against the root CA (which is what you should do), it seems Logstash always performs hostname verification as well.
That leaves you with the following options:

- Disable certificate validation (not recommended)
- Add the hostname and/or the IP of your Elasticsearch node to the certificate
- From the CN (node-0.example.com) I deduce that you are using the sample script to generate the certificates. If this is the case, and Elasticsearch and Logstash are running on the same machine (as your configuration implies), you can also use:

```
hosts => ["https://127.0.0.1:9200"]
```
This would work because the demo certificates contain 127.0.0.1 in the SAN section.
- If it is a local PoC, you might also consider adding an entry to your /etc/hosts that matches the hostname in the certificate:

```
127.0.0.1 node-0.example.com
```

And then use this in your Logstash configuration:

```
hosts => ["https://node-0.example.com:9200"]
```
This rather odd Logstash behavior affects all plugins that add TLS on the REST layer.
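Putting this together, a minimal output sketch for the demo certificate setup might look like the following (paths and credentials are the demo defaults quoted later in this thread; adjust them to your environment):

```
output {
  elasticsearch {
    hosts => ["https://127.0.0.1:9200"]
    ssl => true
    ssl_certificate_verification => true
    cacert => "/etc/logstash/root-ca.pem"
    user => logstash
    password => logstash
  }
}
```

Note that with sniffing enabled, Logstash may rediscover nodes under IP addresses that are not in the certificate (the log above shows sniffing replacing localhost with 172.31.53.24), so disabling sniffing is worth considering in such a setup.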
On Wednesday, August 15, 2018 at 8:00:44 PM UTC+2, Tyler Voll wrote:
I would like more information on doing this, because even after adding my cacert, I still get the same issue and it won’t let me through…
Error I’m receiving:
Aug 15 12:57:10 ELKD01 logstash[4638]: [2018-08-15T12:57:10,324][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"http://localhost:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [[http://localhost:9200/]Manticore::ClientProtocolException] localhost:9200 failed to respond"}
Aug 15 12:57:11 ELKD01 logstash[4638]: [2018-08-15T12:57:11,365][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[https://logstash:xxxxxx@localhost:9200/], :added=>[https://logstash:xxxxxx@172.31.53.24:9200/]}}
Aug 15 12:57:11 ELKD01 logstash[4638]: [2018-08-15T12:57:11,370][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://logstash:xxxxxx@172.31.53.24:9200/, :path=>"/"}
Aug 15 12:57:11 ELKD01 logstash[4638]: [2018-08-15T12:57:11,422][WARN ][logstash.outputs.elasticsearch] Error while performing sniffing {:error_message=>"Host name '172.31.53.24' does not match the certificate subject provided by the peer (CN=node-0.example.com, OU=node, O=node, L=test, DC=de)", :class=>"Manticore::UnknownException", :backtrace=>["/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:37:in `block in initialize'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/manticore-0.6.4-java/lib/manticore/response.rb:79:in `call'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/manticore_adapter.rb:74:in `perform_request'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:291:in `perform_request_to_url'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:245:in `block in healthcheck!'", "org/jruby/RubyHash.java:1343:in `each'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:241:in `healthcheck!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:341:in `update_urls'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:157:in `sniff!'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:146:in `block in start_sniffer'",
"/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-output-elasticsearch-9.2.0-java/lib/logstash/outputs/elasticsearch/http_client/pool.rb:128:in `until_stopped'", "/usr/share/logstash/vendor/bundle/jrub
My Logstash Output:
output {
  elasticsearch {
    user => logstash
    password => logstash
    ssl => true
    ssl_certificate_verification => true
    cacert => '/etc/logstash/root-ca.pem'
    hosts => ["https://localhost:9200"]
    sniffing => true
    manage_template => false
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
    document_type => "%{[@metadata][type]}"
  }
}
This was done using the demo as well.
On Tuesday, January 9, 2018 at 10:27:59 AM UTC-6, Search Guard wrote:
see http://docs.search-guard.com/latest/logstash#using-logstash-with-search-guard
It seems you are missing either the truststore or the cacert setting in the Logstash config.
On Tuesday, 9 January 2018 17:21:49 UTC+1, Chadwick Wendling wrote:
Hello,
I am new to Search Guard. I have completed the demo installation with the Elasticsearch plugin. The installation placed the certificate. Search Guard is up and running and accessible through Kibana. The error in the log is shown below.
[2018-01-09T09:52:32,620][INFO ][logstash.outputs.elasticsearch] Running health check to see if an Elasticsearch connection is working {:healthcheck_url=>https://logstash:xxxxxx@10.114.11.12:9200/, :path=>"/"}
[2018-01-09T09:52:32,654][WARN ][logstash.outputs.elasticsearch] Attempted to resurrect connection to dead ES instance, but got an error. {:url=>"https://[username]:[password]@[ES-IP]:9200/", :error_type=>LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError, :error=>"Elasticsearch Unreachable: [https://[username]:[password]@[ES-IP]:9200/][Manticore::ClientProtocolException] PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target"}
Elasticsearch version: 6.1.1
Search Guard Demo version, installed modules:

- Audit Logging: 6.1.1-20.0 (true)
- Document- and Field-Level Security: 6.0-20.0 (true)
- HTTP Basic Authenticator: 6.1.1-20.1 (false)
- Internal users authorization backend: 6.1.1-20.1 (false)
- Kibana Multitenancy: 6.0-20.0 (true)
- REST Management API: 6.0-20.0 (true)
Any assistance would be appreciated. Please let me know if I need to provide more information.