Elasticsearch nodes behind nginx, for use with Logstash

Elasticsearch 6.1.3, SG 6.1.3-20

Is it possible to put the Elasticsearch nodes behind nginx so that Logstash can talk to them via one URL, rather than having to list five addresses? The same question stands for Kibana: at the moment our Logstash talks to all five Elasticsearch nodes and Kibana talks to one master, but if that master goes away the cluster stops functioning.

Is it possible to load balance the Elasticsearch nodes via nginx?

I have been attempting this all morning and keep running into “Unrecognized SSL message” errors:

```
[2018-02-16T10:52:53,864][ERROR][logstash.agent ] Pipeline aborted due to error {:exception=>#<Manticore::UnknownException: Unrecognized SSL message, plaintext connection?>,
```

A snippet from the Logstash output config:

```
output {
  if [fields][log_type] == "syslog" or [fields][log_type] == "dpkg_log" or [fields][log_type] == "kern_log" {
    elasticsearch {
      hosts => ["x:9200"]
      index => "system-logs-%{+YYYY.MM.dd}"
      user => "x"
      password => "x"
      ssl => true
      ssl_certificate_verification => true
      cacert => "/etc/pki/logstash/ca-bundle.pem"
    }
  }
}
```

I am using the following nginx config:

```
upstream elasticsearch {
    server 172.71.0.251:9200;
    server 172.71.0.175:9200;
    server 172.71.0.235:9200;
    server 172.71.0.149:9200;
    server 172.71.0.233:9200;
}

server {
    listen 172.71.0.20:9200;
    server_name x;

    error_log elasticsearch-errors.log;
    access_log elasticsearch.log;

    location / {
        # Deny access to Cluster API
        if ($request_filename ~ "_cluster") {
            return 403;
            break;
        }

        # Pass requests to ElasticSearch
        proxy_pass https://elasticsearch;

        proxy_ssl_verify              on;
        proxy_ssl_trusted_certificate /etc/ssl/certs/ca-bundle.pem;
        proxy_ssl_session_reuse       on;
        proxy_ssl_verify_depth        2;
        proxy_ssl_protocols           TLSv1 TLSv1.1 TLSv1.2;

        proxy_http_version 1.1;
        proxy_set_header  Connection "Keep-Alive";
        proxy_set_header  Proxy-Connection "Keep-Alive";
        proxy_set_header  X-Real-IP $remote_addr;
        proxy_set_header  X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header  Host $http_host;

        # Authorize access
        auth_basic           "Username and Password Required";
        auth_basic_user_file /etc/nginx/.htpasswd;

        # Route all requests to the root index
        rewrite ^(.*)$ $1 break;
        rewrite_log on;

        return 403;
    }
}
```

Is this even possible?

···

On Friday, February 16, 2018 at 12:02:45 PM UTC+1, anthony.cleaves@actual-experience.com wrote:

Worth noting, I have allowed this:

http://docs.search-guard.com/latest/search-guard-proxy

···

On Friday, 16 February 2018 11:35:11 UTC, Jochen Kressin wrote:

That depends on your TLS settings, actually. In your case, the TLS session is terminated on nginx, and the request is then load balanced to one of your configured upstream servers in plain HTTP. That’s why you see the “plaintext connection” message.

You have two options:

  1. Terminate TLS on nginx, disable HTTPS in SG

In this scenario you can use nginx as normal, with URI-based load balancing and added HTTP headers. However, traffic between nginx and ES/SG is unencrypted in this case, which makes it less secure (a sketch of this setup follows after this list).

  2. Use TLS pass-through

In this scenario TLS is not terminated by nginx but passed through as-is to the upstream servers. However, you are then limited to TCP-based load balancing, because without terminating the TLS session nginx cannot see URIs or HTTP headers.
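
For illustration, a minimal sketch of option 1, assuming HTTPS has been disabled in Search Guard and that nginx presents its own server certificate; the certificate paths below are hypothetical placeholders, not from this thread:

```
# Option 1 (sketch): nginx terminates TLS and balances plain HTTP to the nodes.
# Assumes HTTPS is disabled in Search Guard; cert paths are hypothetical.
upstream elasticsearch {
    server 172.71.0.251:9200;
    server 172.71.0.175:9200;
    server 172.71.0.235:9200;
    server 172.71.0.149:9200;
    server 172.71.0.233:9200;
}

server {
    # Note the 'ssl' flag: clients such as Logstash connect with ssl => true.
    listen 172.71.0.20:9200 ssl;

    ssl_certificate     /etc/nginx/ssl/es-proxy.crt;  # hypothetical path
    ssl_certificate_key /etc/nginx/ssl/es-proxy.key;  # hypothetical path

    location / {
        proxy_pass http://elasticsearch;  # plain HTTP once SG HTTPS is off
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
    }
}
```

With a setup like this, Logstash’s `cacert` would need to trust whatever CA signed the nginx certificate, since that is now the certificate Logstash sees.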

···

On Friday, February 16, 2018 at 12:38:42 PM UTC+1, anthony.cleaves@actual-experience.com wrote:

TLS pass-through seems sane; I will take a look. Are there any docs for this?

···

On Friday, 16 February 2018 11:50:28 UTC, Jochen Kressin wrote:

Well, not from our side, since the setup differs from proxy to proxy. We’re more into HAProxy than nginx :wink: but here’s what I found:

https://www.cyberciti.biz/faq/configure-nginx-ssltls-passthru-with-tcp-load-balancing/

You’re looking for TCP load balancing:

https://www.nginx.com/resources/admin-guide/tcp-load-balancing/
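
To make that concrete, here is a minimal sketch of the pass-through variant using nginx’s stream module, reusing the upstream addresses from earlier in this thread. The `stream` block sits at the top level of nginx.conf (not inside `http`), and nginx must be built with stream support:

```
# TLS pass-through (sketch): nginx balances raw TCP; the ES nodes keep HTTPS.
# Requires nginx compiled --with-stream (ngx_stream_core_module).
stream {
    upstream elasticsearch {
        server 172.71.0.251:9200;
        server 172.71.0.175:9200;
        server 172.71.0.235:9200;
        server 172.71.0.149:9200;
        server 172.71.0.233:9200;
    }

    server {
        listen 172.71.0.20:9200;
        proxy_pass elasticsearch;
    }
}
```

Because TLS is not terminated here, Logstash validates the certificate presented by whichever node answers, so the node certificates need to cover the hostname Logstash uses for the proxy (or hostname verification has to be relaxed).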

···

Working as expected, thank you.
