How to curl elasticsearch without insecure flag

SG Version: 5:5.6.15-19.3

In the examples and the docs in searchguard I see that we test the installation using
curl --insecure -u admin:admin 'https://localhost:9200/_searchguard/authinfo?pretty'

I was wondering what the process would be to ensure that I only provide the username/password and don't use the --insecure flag. I have the root/client/admin certs on my Elasticsearch node, but I don't know how they come into play. I know that the admin cert is used to make changes to the SG index, but I would like to know how to use the client/root keys/certs. How do I send requests to the Elasticsearch API with both the username/password and a certificate? Thanks in advance!

This is pretty straightforward. The reason for the --insecure/-k flag is that you are using a self-signed certificate which curl is unable to validate, since it knows nothing about the root CA that was used to sign it.

You can use the following curl switch to configure the root CA:

--cacert /path/to/root-ca.pem


Thanks for the response! So what exactly is the purpose of the node/client cert if we are supposed to be using the root cert when accessing Elasticsearch? Since the node cert is signed by the root cert, is it possible to use that instead?

It’s the other way round:

When you connect with curl to the HTTP(S) port of Elasticsearch, a TLS handshake is initiated. This basically means that Elasticsearch will send the configured TLS certificate to curl. Now curl needs to validate this certificate. It does so by checking whether the certificate was signed by a known and trusted root CA. Since the root CA you are using is self-generated, this will fail unless you make the root CA known to curl with the switch I posted. Curl does not send the root CA to Elasticsearch; it uses it to validate the certificate it receives from Elasticsearch.
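You can reproduce this validation step locally with openssl. This is just a sketch using throwaway, self-generated certificates; the file names are placeholders, not your actual Search Guard files:

```shell
set -e
tmp=$(mktemp -d)

# 1. Create a self-signed root CA (stand-in for root-ca.pem)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/ca.key" -out "$tmp/root-ca.pem" -subj "/CN=demo-root-ca"

# 2. Create a node key + CSR and sign the CSR with the root CA
#    (stand-in for the certificate Elasticsearch presents on its HTTPS port)
openssl req -newkey rsa:2048 -nodes \
  -keyout "$tmp/node.key" -out "$tmp/node.csr" -subj "/CN=localhost"
openssl x509 -req -days 1 -in "$tmp/node.csr" \
  -CA "$tmp/root-ca.pem" -CAkey "$tmp/ca.key" -CAcreateserial \
  -out "$tmp/node.pem"

# 3. This is essentially the check curl performs during the handshake:
openssl verify -CAfile "$tmp/root-ca.pem" "$tmp/node.pem"

# 4. Without the CA, validation fails -- the reason you needed --insecure
openssl verify "$tmp/node.pem" || echo "validation failed, as expected"
```

Step 3 succeeds only because the CA that signed the node cert was passed in explicitly, which is exactly what --cacert does for curl.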

Just think about it as if you were connecting with a browser to a webserver. The webserver (Elasticsearch) sends its TLS certificate, and the browser (curl) validates it against the installed known and trusted root CAs.

So in that sense the certificates are there to validate the connection and encrypt the transmissions. They have nothing to do with authentication and authorization; that's where the HTTP Basic Auth header comes into play.
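For completeness: the Basic Auth header that curl builds from `-u admin:admin` (or from `admin:admin@` in the URL) is nothing more than the base64-encoded `user:password` pair, which is why it must only ever travel over TLS:

```shell
# curl -u admin:admin sends this header on every request:
#   Authorization: Basic <base64 of "user:password">
printf 'admin:admin' | base64   # -> YWRtaW46YWRtaW4=
```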


That makes sense, thanks! So if I wanted another application to communicate with Elasticsearch with SG enabled, I would need to have the root-ca.pem file in that container and add it to that application's keystore, since it would be sending HTTP requests and I would not be able to pass the cert in through a flag like with curl?

keytool -keystore ${MY_JRE_KEYSTORE} -storepass password -importcert -alias elasticsearch_cert -file /path/to/root-ca.pem -noprompt

Also, for some reason, when I try to curl by passing the root-ca.pem I get an error with both localhost and the Elasticsearch container IP.

bash-4.4# curl --cacert config/root-ca.pem https://admin:admin@localhost:9200/_cat/health?v
curl: (60) SSL: no alternative certificate subject name matches target host name 'localhost'
More details here:

bash-4.4# curl --cacert config/root-ca.pem https://admin:admin@
curl: (60) SSL: no alternative certificate subject name matches target host name ''
More details here:

The insecure curl still works, but neither of these do. I'm guessing it has to do with how the target hostname is set by the Search Guard TLS tool, or with how I set up my TLS config to create the root cert.

      dn:,OU=CA,O=My Es,DC=myes-onprem,DC=com
      keysize: 2048
      pkPassword: none
      validityDays: 3650
      file: root-ca.pem


bash-4.4# cat elasticsearch.yml

# Uncomment the following lines for a production cluster deployment
#discovery.zen.minimum_master_nodes: 1
# This is a configuration snippet for the node esnode
# This snippet needs to be inserted into the file config/elasticsearch.yml of the respective node.
# If the config file already contains SearchGuard configuration, this needs to be replaced.
# Furthermore, you need to copy the files referenced below into the same directory.
# Please refer to for further configuration of your installation.

searchguard.ssl.transport.pemcert_filepath: esnode.pem
searchguard.ssl.transport.pemkey_filepath: esnode.key
searchguard.ssl.transport.pemtrustedcas_filepath: root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: esnode_http.pem
searchguard.ssl.http.pemkey_filepath: esnode_http.key
searchguard.ssl.http.pemtrustedcas_filepath: root-ca.pem
-,OU=Ops,O=My Es,DC=myes-onprem,DC=com
  - "TLSv1.2"

The error you are seeing occurs because your node certificate does not contain a valid hostname or IP address. When the --insecure flag is omitted, curl first validates the certificate against the root CA (this works in your case) and then validates that the target hostname matches the certificate.

Both the hostname(s) and the IP address(es) can be specified in the SAN (Subject Alternative Names) section of your node certificate. If you use the TLS tool, you can specify them like:

  - name: node1
    dn:,OU=Ops,O=Example Com\, Inc.,DC=example,DC=com
    dns:
      - "localhost"
      - "*"
    ip:
      - ""
      - ""

Note that using IP addresses is usually discouraged; working with hostnames is preferred. For the hostnames you can use wildcards; for the IP section you can't, according to the spec.
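If you want to check which SAN entries an existing node certificate actually contains (i.e. why curl rejects 'localhost'), you can inspect it with openssl. A sketch using a throwaway certificate (the -addext/-ext options require OpenSSL 1.1.1 or later):

```shell
set -e
tmp=$(mktemp -d)

# Generate a throwaway cert with DNS and IP SAN entries
# (stand-in for the cert Elasticsearch serves on its HTTPS port)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/node.key" -out "$tmp/node.pem" -subj "/CN=node1" \
  -addext "subjectAltName=DNS:localhost,DNS:*.example.com,IP:127.0.0.1"

# Print the SAN section -- curl must find the target host name here
openssl x509 -in "$tmp/node.pem" -noout -ext subjectAltName
```

Running the same `openssl x509 ... -ext subjectAltName` command against your real esnode_http.pem shows at a glance whether 'localhost' or the service name is covered.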

Thanks, I was able to resolve the issue by adding localhost/the Elasticsearch service name to the TLS config. For the Elasticsearch nodes, I'm able to generate the node certs through my Docker entrypoint. But what should I do for other pods/containers that want to access Elasticsearch? Would I need to generate a separate cert for them, or would it be fine if I just passed the root cert to each pod and then did

curl --cacert ./config/root-ca.pem https://admin:admin@service-elasticsearch:9200/_searchguard/authinfo?pretty

This seems to work, but I just wanted to confirm that it is the right approach.

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.