TLS Tool 1.8 - updating certs

**Elasticsearch version:** 6.6.1

**Search Guard version:** 6.6.1-24.1

**Server OS version:** Ubuntu 18.04

**Kibana version (if relevant):** 6.6.1

**Logstash version:** 6.6.1-1

Describe the issue:
I had a cluster of 5 nodes that was working. The security certificates expired (they were from a previous installation, years ago) and we don’t have access to the previous signing authority, so I wanted to replace them using the most recent process.

I used the offline TLS tool, version 1.8.
I made node certificates (with separate HTTP certificates), a new signing CA, and some users.
I shut down all the nodes.
I copied and pasted the snippet given by the tool into elasticsearch.yml, replacing the previous “truststore.jks” entries from the TLS tool of about 4 years ago.

All nodes were changed over at the same time and there are no other nodes.

I restarted the nodes, and I think Elasticsearch has come up: I could run sgadmin, and the nodes found a master node.

However, I have a couple of problems.

  1. The Elasticsearch logs constantly spam:
    "SSL Problem Received fatal alert: unknown_ca Received fatal alert: unknown_ca"
    (log entry at the bottom)
    It still spams this even if I turn off all clients (Kibana, Cerebro, Logstash, and the 2 command-line scripts we have).

  2. I don’t know how to get Logstash to connect. The documentation from here:
    uses the “truststore.jks” that the previous tool created. The TLS tool doesn’t make this file anymore, and I haven’t been able to find the right config.

  3. I think Elasticsearch is running, since I could run sgadmin against it (though the index is red), but I can’t get Kibana, Logstash, Cerebro, or the scripts to connect to it.
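On the Logstash point: the `logstash-output-elasticsearch` plugin can trust a PEM CA directly via its `cacert` option, so no JKS truststore is needed. A minimal sketch, assuming HTTPS with basic auth; the host, user, and paths below are placeholders, not values from this thread:

```
output {
  elasticsearch {
    hosts    => ["https://node1:9200"]
    ssl      => true
    cacert   => "/etc/logstash/root-ca.pem"  # root CA produced by the TLS tool
    user     => "logstash"                   # placeholder Search Guard user
    password => "changeme"                   # placeholder
  }
}
```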

Steps to reproduce:

I can’t go back; the previous certificates expired.

Expected behavior:

Provide configuration:

#disable x-pack (it's built in to 6.4.2) false

searchguard.enterprise_modules_enabled: true
searchguard.restapi.roles_enabled: ["sg_all_access"]

searchguard.ssl.transport.pemcert_filepath: /etc/elasticsearch/node1.pem
searchguard.ssl.transport.pemkey_filepath: /etc/elasticsearch/node1.key
searchguard.ssl.transport.pemkey_password: <password>
searchguard.ssl.transport.pemtrustedcas_filepath: /etc/elasticsearch/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: /etc/elasticsearch/node1_http.pem
searchguard.ssl.http.pemkey_filepath: /etc/elasticsearch/node1_http.key
searchguard.ssl.http.pemtrustedcas_filepath: /etc/elasticsearch/root-ca.pem
<admin user - sgadmin connects with it>

Provide logs:

][ERROR][c.f.s.s.h.n.SearchGuardSSLNettyHttpServerTransport] [log1-op] SSL Problem Received fatal alert: unknown_ca Received fatal alert: unknown_ca
        at <elided stack frames> ~[?:?]
        at io.netty.handler.ssl.SslHandler$SslEngineType$3.unwrap( ~[netty-handler-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.ssl.SslHandler.unwrap( ~[netty-handler-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.ssl.SslHandler.decodeJdkCompatible( ~[netty-handler-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.ssl.SslHandler.decode( ~[netty-handler-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection( ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.callDecode( ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed( ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInputClosed( ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.codec.ByteToMessageDecoder.channelInactive( ~[netty-codec-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.handler.ssl.SslHandler.channelInactive( ~[netty-handler-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at$HeadContext.channelInactive( [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at$AbstractUnsafe$ [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.util.concurrent.AbstractEventExecutor.safeExecute( [netty-common-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks( [netty-common-4.1.32.Final.jar:4.1.32.Final]
        at [netty-transport-4.1.32.Final.jar:4.1.32.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$ [netty-common-4.1.32.Final.jar:4.1.32.Final]
        at [?:1.8.0_265]


WARNING: JAVA_HOME not set, will use /usr/bin/java
Search Guard Admin v6
Will connect to <node1>:<port> ... done
Elasticsearch Version: 6.6.1
Search Guard Version: 6.6.1-24.1
Connected as <admin user>
Diagnostic trace written to: <filename>
Contacting elasticsearch cluster 'elasticsearch' ...
Clustername: <clustername>
Clusterstate: RED
Number of nodes: 5
Number of data nodes: 4
searchguard index already exists, so we do not need to create one.
ERR: searchguard index state is RED.
Populate config from <config location>
Will update 'sg/config' with <location>/sgconfig/sg_config.yml
FAIL: Configuration for 'config' failed because of UnavailableShardsException[[searchguard][0] primary shard is not active Timeout: [1m], request: [BulkShardRequest [[searchguard][0]] containing [index {[searchguard][sg][config], source[n/a, actual length: [5.1kb], max length: 2kb]}] and a refresh]]

First, the TLS certificates and keys must be in the /etc/elasticsearch/config folder.

I don’t see the values of searchguard.nodes_dn and searchguard.authcz.admin_dn; make sure you have the correct values there.

Check each node’s TLS certificate to make sure the IP address indicated in the certificate matches the IP address of the network interface used for cluster communication. You may have problems if you have multiple interfaces or run dual-stack IPv4 + IPv6. You can view the certificate content using the openssl tool, for example openssl x509 -text -in search-guard-tlstool/out/node2.pem.

If you still have problems, look at the TLS troubleshooting guide.
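An unknown_ca alert usually means the client and the server disagree about which root CA to trust. One sketch of a sanity check (paths are illustrative; the example generates a throwaway CA so it can run anywhere): fingerprint the CA file each side points at and make sure the outputs match.

```shell
# Illustrative self-contained run: generate a demo root CA, then fingerprint it.
# On your systems, fingerprint the actual root-ca.pem on the Elasticsearch side
# and the copy Kibana/Logstash/Cerebro use; the fingerprints must be identical.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demo-root-ca" -keyout /tmp/ca.key -out /tmp/ca.pem 2>/dev/null

openssl x509 -noout -fingerprint -sha256 -in /tmp/ca.pem
```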

Are you sure the certs have to go in /etc/elasticsearch/config?

The nodes start up, find the master and I can connect with sgadmin.
As I write this, the cluster has gone to a yellow state as well, so sgadmin actually runs to completion now too.
The unknown_ca error still spams in the logs though.
I followed everything here:

Exactly as written. I also followed the blog listed about making production-level certificates.

The values in nodes_dn and admin_dn are correct (or at least exactly as copied from the snippet created by the tool). I just removed them to sanitize the information.

The IP addresses do match properly. I used the tool provided with tls-tool-1.8 as well.
node1 has several other interfaces, but they shouldn’t be bound to Elasticsearch.

As I said, Elasticsearch seems to be running and sgadmin can connect; I just can’t get the other clients (Logstash, Kibana, etc.) to connect. And the Search Guard documentation doesn’t seem to match the certificates the TLS tool provides.

Are you sure the certs have to go in /etc/elasticsearch/config?

Yes, I’m sure. In the doc I see this:

searchguard.ssl.transport.pemkey_filepath Path to the certificates key file (PKCS #8), which must be under the config/ directory, specified using a relative path (mandatory)

searchguard.ssl.http.pemcert_filepath Path to the X.509 node certificate chain (PEM format), which must be under the config/ directory, specified using a relative path (mandatory)

Plus, you don’t have to provide the full path to a TLS certificate, for example:

searchguard.ssl.transport.pemcert_filepath: node1.pem
searchguard.ssl.transport.pemkey_filepath: node1.key
searchguard.ssl.transport.pemkey_password: PCXTEFqgjDF5
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: node1_http.pem
searchguard.ssl.http.pemkey_filepath: node1_http.key
searchguard.ssl.http.pemkey_password: krD946iuSi5G
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
-,OU=Ops,O=Example Com\, Inc.,DC=example,DC=com
-,OU=Ops,O=Example Com\, Inc.,DC=example,DC=com
-,OU=Ops,O=Example Com\, Inc.,DC=example,DC=com
-,OU=Ops,O=Example Com\, Inc.,DC=example,DC=com

I am fairly positive that by saying “config/ directory” it doesn’t literally mean they should be in /etc/elasticsearch/config,
but that they should be under the Elasticsearch config folder, which in this case is /etc/elasticsearch, because yes, if I try to put them somewhere else, it doesn’t work.

That being said, for the sake of getting this to work, I have complied with this restriction and redone the certificates; nothing has changed.
I tried generating new certificates with the tool and then copying and pasting the snippet again to all the nodes. It is behaving the same. Actually, the nodes except for node1 are now generating unknown_certificate errors in addition to the unknown_ca errors. I don’t know.

So I deleted all the data and certificates, started from a vanilla elasticsearch.yml again, made them all master nodes with identical configs except for names, regenerated the certificates, and started it up.

Same thing (as this morning, before I even posted).
The nodes start up fine (as seen in the logs) and choose a master (node2, in this case, right now).
sgadmin runs and applies our SG config.
But I can’t connect via Kibana, Logstash, or anything else; Cerebro won’t connect either.
And the logs still have the errors I mentioned.
If there is anything else I can do to help you help me troubleshoot I will do my best.
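One thing worth double-checking on the Kibana side: it has to be pointed at the new root CA as well, or it will refuse the HTTPS connection. A sketch of the relevant kibana.yml settings for 6.x (the host, credentials, and paths below are placeholders):

```
elasticsearch.url: "https://node1:9200"
elasticsearch.username: "kibanaserver"   # placeholder Search Guard user
elasticsearch.password: "changeme"       # placeholder
elasticsearch.ssl.certificateAuthorities: [ "/etc/kibana/root-ca.pem" ]
elasticsearch.ssl.verificationMode: certificate
```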


Hi. I created a three-node cluster to reproduce your issue and saw no problems. Look at the settings, instructions, and files below and compare them with what you have. Ignore the *.example files.

Node 0

Node 1

Node 2

You can see which files are copied to which locations in this Dockerfile.

You can see how I created the TLS certificates and keys in this bash script.

If you want to run my Docker lab on your machine, run the Docker app and then execute the following instructions:

git clone
cd search-guard-labs
git checkout forum/1999_updating_tls_certs_sg_v6
docker-compose -f docker-compose-basicauth.yml up

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.