Fluentd Can not reach Elasticsearch using searchguard

Problem

I am trying to send traffic from fluentd to Elasticsearch. My Elasticsearch cluster uses Search Guard, so in the fluentd configuration I use:

<match apache.access>
  @type elasticsearch
  host monitoring-elasticsearch-sg-....net
  scheme https
  ssl_verify false
  user fluentd
  password changeme
  port 80
  index_name fluent.web-01
  type_name _doc # For ES 7
  include_timestamp true
  utc_index true
  flush_interval 1s
  buffer_chunk_limit 1M
  buffer_queue_limit 512
  <buffer>
    flush_interval 1s
    buffer_chunk_limit 1M
    buffer_queue_limit 512
  </buffer>
</match>

But I get this error message:

2018-10-22 14:20:16 +0200 [warn]: #0 failed to flush the buffer. retry_time=10 next_retry_seconds=2018-10-22 14:20:15 +0200 chunk="578d009d721014c4d22a5b76f701c617" error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>"elasticsearch", :port=>9200, :scheme=>"https", :user=>"fluentd", :password=>"obfuscated"})!"

In sg_roles.yml:

fluentd_user:
  cluster:
    - ES_INPUT
  indices:
    "fluentd-":
      "*":
        - INDEX_OWNER

Any help, please?

Hi,

when fluentd is trying to connect to Elasticsearch, do you see any entries in the ES log files?

Can you also please try to visit the _searchguard/authinfo endpoint, log in with the fluentd username and password, and post the results here?


With https I don't get any entries in the ES logs, but when I try with http I get: 'Someone (/10.131.0.1:56350) speaks http plaintext instead of ssl, will close the channel'. That seems logical, since I am not allowed to communicate with ES over http.

But using curl I get:

curl -XGET "https://xxxxxx" -k -u fluentd:changeme
{
  "name" : "monitoring-elasticsearch-sg",
  "cluster_name" : "monitoring-elasticsearch-sg-cluster",
  "cluster_uuid" : "6ZTt80RaRcWYR61TcMRMyQ",
  "version" : {
    "number" : "6.2.2",
    "build_hash" : "10b1edd",
    "build_date" : "2018-02-16T19:01:30.685723Z",
    "build_snapshot" : false,
    "lucene_version" : "7.2.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
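Since curl works while fluentd does not, it can also help to raise fluentd's own log level while testing. A minimal sketch for fluent.conf (alternatively, start fluentd with -vv):

<system>
  # log every connection attempt and retry in detail
  log_level debug
</system>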

And I still have the error:

2018-10-23 12:22:38 +0200 [error]: #0 [Faraday::ConnectionFailed] No route to host - connect(2) for 172.16.1.201:9200 (Errno::EHOSTUNREACH) {:host=>"xxxx", :port=>9200, :scheme=>"https", :user=>"fluentd", :password=>"changeme", :protocol=>"https"}
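As a side note, Errno::EHOSTUNREACH points to a network/routing problem rather than an authentication one. A quick check, assuming the IP and port from the error message, is to call the node directly from the fluentd host:

# should return the cluster banner if the node is reachable over HTTPS
curl -kv -u fluentd:changeme "https://172.16.1.201:9200/"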


In my previous post, I meant to access the _searchguard/authinfo endpoint. This will print the information about your fluentd user as JSON, including roles and permissions:

curl -XGET "https://eshost:9200/_searchguard/authinfo?pretty" -k -u fluentd:changeme


What is the output of this call?


Also, what do you see in the ES log files when fluentd tries to log events?


Seems there are some errors in your configuration. First, the index name in your fluentd configuration is:

index_name fluent.web-01


But for the fluentd_user role you use:

indices:
  "fluentd-":


This does not match; you need to use something like:

indices:
  "fluent*":


Also, in the fluentd config you use port 80:

port 80

while the standard ES HTTP port is 9200.

Also, what are the permissions in the ES_INPUT and INDEX_OWNER action groups?

A working role definition would be:

sg_fluentd:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS_RO
  indices:
    'fluent*':
      '*':
        - UNLIMITED
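For completeness: a Search Guard role only takes effect once it is mapped to the user and the configuration is pushed to the cluster. A minimal sketch, assuming the role above and the demo certificates (file names and paths are assumptions, adjust them to your setup):

# sg_roles_mapping.yml — assign the fluentd user to the role
sg_fluentd:
  users:
    - fluentd

# reload the Search Guard configuration with sgadmin
./sgadmin.sh -cd ../sgconfig -icl -nhnv \
  -cacert root-ca.pem -cert kirk.pem -key kirk-key.pem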


I use the default admin user, and it still does not work.

@type elasticsearch
host 127.0.0.1
port 9200
logstash_format true
logstash_prefix newf1
logstash_dateformat %Y%m%d
include_tag_key true
type_name access_log
tag_key @log_name
flush_interval 1s
scheme https
ssl_verify false
user admin
password admin

2018-11-05 10:15:09 +0000 [warn]: #0 failed to flush the buffer. retry_time=9 next_retry_seconds=2018-11-05 10:15:08 +0000 chunk="579e8094725ed8c8ef35e891a1d10f95" error_class=Fluent::Plugin::ElasticsearchOutput::ConnectionFailure error="Can not reach Elasticsearch cluster ({:host=>"127.0.0.1", :port=>9200, :scheme=>"https", :user=>"admin", :password=>"obfuscated"})!"

Please set the SG log level to debug and then post the ES logs here. In the fluentd logs you can't see what the underlying problem is: https://docs.search-guard.com/latest/troubleshooting-setting-log-level
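One way to do this without restarting the node, assuming the Search Guard logger name com.floragunn.searchguard and admin credentials, is the cluster settings API:

# temporarily raise the Search Guard log level to debug
curl -k -u admin:admin -XPUT "https://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"transient": {"logger.com.floragunn.searchguard": "debug"}}'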

Here’s a working configuration:

fluentd configuration:

@type elasticsearch
host sgssl-0.example.com
scheme https
ssl_verify false
user fluentd
password admin
port 9200
index_name fluent.web-01
type_name _doc # For ES 7
include_timestamp true
utc_index true
flush_interval 1s
buffer_chunk_limit 1M
buffer_queue_limit 512
<buffer>
  flush_interval 1s
  buffer_chunk_limit 1M
  buffer_queue_limit 512
</buffer>


Corresponding SG role:

sg_fluentd:
  cluster:
    - CLUSTER_MONITOR
    - CLUSTER_COMPOSITE_OPS_RO
  indices:
    'fluent*':
      '*':
        - UNLIMITED
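With that role in place, a quick way to verify the credentials and index permissions outside of fluentd is a manual test write (the host name is the placeholder from the config above):

# index a single test document as the fluentd user; expect a 201 response
curl -k -u fluentd:admin -XPOST "https://sgssl-0.example.com:9200/fluent.web-01/_doc?pretty" \
  -H 'Content-Type: application/json' \
  -d '{"message": "permission test"}'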


Here are the Search Guard logs:


[2018-11-06T07:30:19,682][ERROR][c.f.s.h.SearchGuardHttpServerTransport] [EPY-RFh] SSL Problem Client requested protocol TLSv1 not enabled or not supported

javax.net.ssl.SSLHandshakeException: Client requested protocol TLSv1 not enabled or not supported

at sun.security.ssl.Handshaker.checkThrown(Handshaker.java:1529) ~[?:?]

at sun.security.ssl.SSLEngineImpl.checkTaskThrown(SSLEngineImpl.java:535) ~[?:?]

at sun.security.ssl.SSLEngineImpl.readNetRecord(SSLEngineImpl.java:813) ~[?:?]

at sun.security.ssl.SSLEngineImpl.unwrap(SSLEngineImpl.java:781) ~[?:?]

at javax.net.ssl.SSLEngine.unwrap(SSLEngine.java:624) ~[?:1.8.0_181]

at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap(SslHandler.java:281) ~[netty-handler-4.1.16.Final.jar:4.1.16.Final]

at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1215) ~[netty-handler-4.1.16.Final.jar:4.1.16.Final]

at io.netty.handler.ssl.SslHandler.decodeJdkCompatible(SslHandler.java:1127) ~[netty-handler-4.1.16.Final.jar:4.1.16.Final]

at io.netty.handler.ssl.SslHandler.decode(SslHandler.java:1162) ~[netty-handler-4.1.16.Final.jar:4.1.16.Final]

at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]

at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]

at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265) ~[netty-codec-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1359) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:935) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:645) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:545) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:499) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:459) [netty-transport-4.1.16.Final.jar:4.1.16.Final]

at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858) [netty-common-4.1.16.Final.jar:4.1.16.Final]

at java.lang.Thread.run(Thread.java:748) [?:1.8.0_181]

Caused by: javax.net.ssl.SSLHandshakeException: Client requested protocol TLSv1 not enabled or not supported

at sun.security.ssl.Alerts.getSSLException(Alerts.java:192) ~[?:?]

at sun.security.ssl.SSLEngineImpl.fatal(SSLEngineImpl.java:1666) ~[?:?]

at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:330) ~[?:?]

at sun.security.ssl.Handshaker.fatalSE(Handshaker.java:318) ~[?:?]

at sun.security.ssl.ServerHandshaker.clientHello(ServerHandshaker.java:548) ~[?:?]

at sun.security.ssl.ServerHandshaker.processMessage(ServerHandshaker.java:228) ~[?:?]

at sun.security.ssl.Handshaker.processLoop(Handshaker.java:1052) ~[?:?]

at sun.security.ssl.Handshaker$1.run(Handshaker.java:992) ~[?:?]

at sun.security.ssl.Handshaker$1.run(Handshaker.java:989) ~[?:?]

at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_181]

at sun.security.ssl.Handshaker$DelegatedTask.run(Handshaker.java:1467) ~[?:?]

at io.netty.handler.ssl.SslHandler.runDelegatedTasks(SslHandler.java:1364) ~[?:?]

at io.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1272) ~[?:?]

... 19 more

Note that Kibana works fine with Search Guard.

Here is my elasticsearch.yaml:

searchguard.ssl.transport.pemcert_filepath: esnode.pem
searchguard.ssl.transport.pemkey_filepath: esnode-key.pem
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: esnode.pem
searchguard.ssl.http.pemkey_filepath: esnode-key.pem
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
searchguard.allow_unsafe_democertificates: true
searchguard.allow_default_init_sgindex: true
searchguard.authcz.admin_dn:
  - CN=kirk,OU=client,O=client,L=test, C=de
searchguard.audit.type: internal_elasticsearch
searchguard.enable_snapshot_restore_privilege: true
searchguard.check_snapshot_restore_write_privileges: true
searchguard.restapi.roles_enabled: ["sg_all_access"]
cluster.routing.allocation.disk.threshold_enabled: false
cluster.name: searchguard_demo
network.host: 0.0.0.0
discovery.zen.minimum_master_nodes: 1
node.max_local_storage_nodes: 3
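The "Client requested protocol TLSv1 not enabled or not supported" error suggests that the client offers an older TLS version than the node accepts. One way to check which protocol versions the HTTPS endpoint accepts, assuming the node runs on 127.0.0.1:9200 as in the config above:

# handshake should fail if TLSv1 is disabled on the node
openssl s_client -connect 127.0.0.1:9200 -tls1
# handshake should succeed with TLSv1.2
openssl s_client -connect 127.0.0.1:9200 -tls1_2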

Thanks, Jochen Kressin, for the debugging suggestion.
I found the root cause: my fluentd was trying TLSv1, which is apparently not supported by Search Guard.

I found the solution: add "ssl_version TLSv1_2" to /etc/fluent/fluent.conf.

scheme https
ssl_verify false
user admin
password admin
ssl_version TLSv1_2

Now it works completely!
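Putting it together, the relevant part of the fluentd elasticsearch output would then look roughly like this (values taken from the snippets above, so treat it as a sketch rather than a drop-in config):

@type elasticsearch
host 127.0.0.1
port 9200
scheme https
ssl_verify false
ssl_version TLSv1_2
user admin
password admin
logstash_format true
logstash_prefix newf1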

Problem solved: I needed an https route to my ES service.
Thank you for your help.
