Unable to connect to Elasticsearch from Logstash

I was able to set up Elasticsearch and Kibana with Search Guard 2. However, I was not able to do the same with Logstash.

Here is the testing configuration that I use on Logstash: 'input { stdin { } } output { elasticsearch { hosts => "127.0.0.1:9200" user => "admin" password => "admin" ssl => true cacert => "****/root-ca.pem" } }'

I set up Elasticsearch to respond to elasticsearch.elk.test through nginx, so I'm not sure what I should put in hosts (127.0.0.1 or elasticsearch.elk.test).
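For reference, here is the same test pipeline written out with consistent quoting and the nginx hostname. This is only a sketch: it assumes elasticsearch.elk.test resolves from the Logstash host, and the cacert path is a placeholder standing in for the redacted one above.

```
input {
  stdin { }
}
output {
  elasticsearch {
    hosts    => "elasticsearch.elk.test:9200"
    user     => "admin"
    password => "admin"
    ssl      => true
    # Placeholder path: point this at the actual root CA that signed
    # the certificate presented on port 9200.
    cacert   => "/path/to/root-ca.pem"
  }
}
```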

If I put 127.0.0.1 as hosts, then I encounter this error:

SSL peer shut down incorrectly {:class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :level=>:warn}

Attempted to send a bulk request to Elasticsearch configured at '["https://127.0.0.1:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"SSL peer shut down incorrectly", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :client_config=>{:hosts=>["https://127.0.0.1:9200/"], :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :headers=>{"Authorization"=>"Basic YWRtaW46YWRtaW4="}, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}

When I use elasticsearch.elk.test, this error shows up:

elasticsearch.elk.test {:class=>"Manticore::ResolutionFailure", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :level=>:warn}
Attempted to send a bulk request to Elasticsearch configured at '["https://elasticsearch.elk.test:9200/"]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"elasticsearch.elk.test", :error_class=>"Manticore::ResolutionFailure", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :client_config=>{:hosts=>["https://elasticsearch.elk.test:9200/"], :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :headers=>{"Authorization"=>"Basic YWRtaW46YWRtaW4="}, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}

Thank you very much for your help.

Can you try

openssl s_client -connect 127.0.0.1:9200
openssl s_client -connect elasticsearch.elk.test:9200

and also try opening https://elasticsearch.elk.test:9200 or https://127.0.0.1:9200 in a browser?
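The Manticore::ResolutionFailure in the second trace means the Logstash host simply cannot resolve elasticsearch.elk.test. If that name exists only in your nginx configuration and not in DNS, one quick workaround (a sketch, assuming Logstash and nginx run on the same machine) is an /etc/hosts entry:

```
# /etc/hosts on the Logstash machine
127.0.0.1   elasticsearch.elk.test
```

With that in place, hosts => "elasticsearch.elk.test:9200" should at least get past name resolution; the TLS handshake still has to succeed against whatever certificate is presented on port 9200.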

It would also be great if you could post the Elasticsearch logs at DEBUG level (captured while you hit the error mentioned above).
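For Elasticsearch 2.x, DEBUG logging is typically enabled in config/logging.yml. A minimal sketch (the key below is the one used in the default 2.x logging.yml; adjust to your installation):

```
# config/logging.yml (Elasticsearch 2.x): raise the root logger level
es.logger.level: DEBUG
```

Restart the node afterwards and reproduce the failing bulk request so the relevant handshake details are logged.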


On 28.06.2016 at 17:14, my@apptegy.com wrote:

I was able to setup ElasticSearch and Kibana with Search Guard 2. However I was not able to do the same with logstash.

Here is the testing configuration that I use on Logstash : 'input { stdin { } } output { elasticsearch { hosts => "127.0.0.1:9200" user => "admin" password => "admin" ssl => true cacert => ****/root-ca.pem"}}'
I setup elasticsearch to respond to elasticsearch.elk.test through nginx, so I'm not sure what I should put in hosts ( 127.0.0.1 or elasticsearch.elk.test)

If I put 127.0.0.1 as hosts, then I encounter this error:
SSL peer shut down incorrectly {:class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in 
`safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :level=>:warn}
Attempted to send a bulk request to Elasticsearch configured at '["https://127.0.0.1:9200/"\]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"SSL peer shut down incorrectly", :error_class=>"Manticore::ClientProtocolException", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in
`output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :client_config=>{:hosts=>["https://127.0.0.1:9200/"\], :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :headers=>{"Authorization"=>"Basic YWRtaW46YWRtaW4="}, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}

When I use elasticsearch.elk.test, this error shows up
elasticsearch.elk.test {:class=>"Manticore::ResolutionFailure", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/$ubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in
`multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :level=>:warn}
Attempted to send a bulk request to Elasticsearch configured at '["https://elasticsearch.elk.test:9200/"\]', but an error occurred and it failed! Are you sure you can reach elasticsearch from this machine using the configuration provided? {:error_message=>"elasticsearch.elk.test", :error_class=>"Manticore::ResolutionFailure", :backtrace=>["/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:37:in `initialize'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:79:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:256:in `call_once'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/manticore-0.5.5-java/lib/manticore/response.rb:153:in `code'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:71:in `perform_request'", "org/jruby/RubyProc.java:281:in `call'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/base.rb:201:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/transport/http/manticore.rb:54:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-transport-1.0.15/lib/elasticsearch/transport/client.rb:125:in `perform_request'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:87:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:53:in `non_threadsafe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "org/jruby/ext/thread/Mutex.java:149:in `synchronize'", 
"/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/http_client.rb:38:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:163:in `safe_bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:101:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:86:in `retrying_submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:29:in `multi_receive'", "org/jruby/RubyArray.java:1653:in `each_slice'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-2.5.5-java/lib/logstash/outputs/elasticsearch/common.rb:28:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:130:in `worker_multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/output_delegator.rb:114:in `multi_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:293:in `output_batch'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:224:in `worker_loop'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-2.2.4-java/lib/logstash/pipeline.rb:193:in `start_workers'"], :client_config=>{:hosts=>["https://elasticsearch.elk.test:9200/"\], :ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}, :transport_options=>{:socket_timeout=>0, :request_timeout=>0, :proxy=>nil, 
:ssl=>{:enabled=>true, :ca_file=>"/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"}}, :transport_class=>Elasticsearch::Transport::Transport::HTTP::Manticore, :headers=>{"Authorization"=>"Basic YWRtaW46YWRtaW4="}, :logger=>nil, :tracer=>nil, :reload_connections=>false, :retry_on_failure=>false, :reload_on_failure=>false, :randomize_hosts=>false}, :level=>:error}
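For reference, here is the output section I am testing, reformatted (the hostname and CA path are the ones visible in the trace above). As I understand it, whatever goes in hosts must both resolve from the Logstash machine — the ResolutionFailure above suggests elasticsearch.elk.test does not, e.g. it may need an /etc/hosts entry — and match a name in the server certificate:

```
output {
  elasticsearch {
    hosts    => ["elasticsearch.elk.test:9200"]
    user     => "admin"
    password => "admin"
    ssl      => true
    cacert   => "/home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem"
  }
}
```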

Thank you very much for your help

--
You received this message because you are subscribed to the Google Groups "Search Guard" group.
To unsubscribe from this group and stop receiving emails from it, send an email to search-guard+unsubscribe@googlegroups.com.
To post to this group, send email to search-guard@googlegroups.com.
To view this discussion on the web visit https://groups.google.com/d/msgid/search-guard/8a796720-2a08-4424-b62e-926e5d9661d4%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.

I ran openssl s_client -connect 127.0.0.1:9200, and this is the output:

CONNECTED(00000003)

Certificate chain
0 s:/C=US/L=thrillshare/O=Apptegy/OU=ELK/CN=52.72.23.236
i:/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare
1 s:/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare
i:/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare
2 s:/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare
i:/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare
3 s:/C=US/L=thrillshare/O=Apptegy/OU=ELK/CN=52.72.23.236
i:/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare

Server certificate
-----BEGIN CERTIFICATE-----
MIID2jCCAsKgAwIBAgIBATANBgkqhkiG9w0BAQsFADBkMRMwEQYKCZImiZPyLGQB
GRYDbmV0MRcwFQYKCZImiZPyLGQBGRYHYXBwdGVneTEQMA4GA1UECgwHQXBwdGVn
eTEMMAoGA1UECwwDRUxLMRQwEgYDVQQDDAtUaHJpbGxzaGFyZTAeFw0xNjA2Mjkx
NzM3MTlaFw0xODA2MjkxNzM3MTlaMFoxCzAJBgNVBAYTAlVTMRQwEgYDVQQHEwt0
aHJpbGxzaGFyZTEQMA4GA1UEChMHQXBwdGVneTEMMAoGA1UECxMDRUxLMRUwEwYD
VQQDEww1Mi43Mi4yMy4yMzYwggEiMA0GCSqGSIb3DQEBAQUAA4IBDwAwggEKAoIB
AQCDlL155EbDbhbA0VbBiqoc98tG+tXId9yA8AClPdcsTsU85bpd2v5My7Y3i4nm
HXGrFHE9NHv+6LtwOXQSquQXckFubQtAzge7eJv/SzLjCbkho0kObTYRoexUIpCU
GUgdLcc86uWF2jVa0RD/2h5XOW5ytEAcess7DE3EOQxliN3FOdAjE9FtCiE66bO+
t/DvxOwBVr+jndiHDGjE4X242XIuqzJgtvqHuPULzVLFXfVzeLGvLdrzkghTjuht
H63/Dtq6ai6JYD9jTrbAVElWdps3y77RuO35bPpsrjPlULc57R53wNpBVDoNmJbh
3lf6Iny51SE/G12RlV5iyAjxAgMBAAGjgaAwgZ0wDgYDVR0PAQH/BAQDAgWgMAkG
A1UdEwQCMAAwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMB0GA1UdDgQW
BBQVOf8xDSWcCVp8r8qiHaRPDadUAzAfBgNVHSMEGDAWgBTOZNZZZruD4xtXYjW1
QxVudZHYxTAhBgNVHREEGjAYgglsb2NhbGhvc3SHBH8AAAGIBSoDBAUFMA0GCSqG
SIb3DQEBCwUAA4IBAQBP6Z+2d06uMyHagh4EhOFPvlgEORmQU1FzzSUrVwYz9Nvs
ts78em+ZOGOMZ+y4/O1apYchE750+nLgNGxT6bhDzHMQ6g9xVwUE+y7Ki+qnwVZn
j85dmlqvVM+LrklEwjUhAOnUvng6uLDJYfqHYC861FOXJXYqwpYvL2EHNb3Tx4hC
unz3djVTZby0kldlmDYeeNaTfIEDV1GwO6HvW2wzB7iiEymII2y0XWFqfSIRN0Wb
CDBOCbyKlK5bey15mmrQfrIQh3N84VB5Mazg8TLxZlxbLt+gxURCfiTG4x5tjk8J
Ot04HRc5ojeqGat24JPNlLaSlq3lOoRJ1dx/Iaow
-----END CERTIFICATE-----
subject=/C=US/L=thrillshare/O=Apptegy/OU=ELK/CN=52.72.23.236
issuer=/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare
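To see which hostnames the server certificate actually covers, the PEM block above can be inspected with openssl x509 (the SAN extension in it appears to list localhost and 127.0.0.1, but not elasticsearch.elk.test). A self-contained illustration with a throwaway certificate carrying that kind of SAN list — all file names and the SAN values below are demo assumptions, not taken from the real deployment:

```shell
# Demo only: generate a throwaway self-signed cert with a SAN list,
# standing in for the server certificate printed above (which would be
# saved as server.pem and inspected the same way).
# Requires OpenSSL 1.1.1+ for -addext.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.pem -days 1 -subj "/CN=52.72.23.236" \
  -addext "subjectAltName=DNS:localhost,IP:127.0.0.1" 2>/dev/null

# Print the names/IPs the certificate is valid for:
openssl x509 -in /tmp/demo.pem -noout -text | grep -A1 "Subject Alternative Name"
```

If elasticsearch.elk.test does not appear in the real certificate's SAN list, hostname verification against that name will presumably fail no matter which CA file is configured.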

Acceptable client certificate CA names
/DC=net/DC=apptegy/O=Apptegy/OU=ELK/CN=Thrillshare

SSL handshake has read 4614 bytes and written 557 bytes

New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES256-SHA384
Server public key is 2048 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
    Protocol  : TLSv1.2
    Cipher    : ECDHE-RSA-AES256-SHA384
    Session-ID: 577426F62EE917A3CDA5CFA8E43BFBACA8D5C58D7ACA233639940284F7872837
    Session-ID-ctx:
    Master-Key: 6A9EEF97083F6901F8CF7699A4D7B2FCE43DA6ED4FFDC7D33CC58877E55E12B664D625E850CDA88BB78FB2EFA85F7A9C
    Key-Arg   : None
    PSK identity: None
    PSK identity hint: None
    SRP username: None
    Start Time: 1467229942
    Timeout   : 300 (sec)
    Verify return code: 19 (self signed certificate in certificate chain)
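If I understand it correctly, verify return code 19 ("self signed certificate in certificate chain") is expected here: s_client was run without telling openssl about the Search Guard root CA, so the private root at the top of the chain is untrusted by default. The same effect can be reproduced offline with a throwaway CA (the paths below are demo assumptions; the real check would pass the root-ca.pem from my config via -CAfile):

```shell
# Stand-in for the Search Guard root-ca.pem: a throwaway self-signed CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo-ca.key \
  -out /tmp/demo-ca.pem -days 1 -subj "/CN=Demo Root CA" 2>/dev/null

# With the CA supplied explicitly, verification succeeds:
openssl verify -CAfile /tmp/demo-ca.pem /tmp/demo-ca.pem

# Without it, openssl reports the same class of "self signed" error,
# because the private root is not in the default trust store:
openssl verify /tmp/demo-ca.pem || true
```

Against the live server the equivalent would be openssl s_client -connect 127.0.0.1:9200 -CAfile /home/ubuntu/search-guard-ssl/example-pki-scripts/ca/root-ca.pem, which I would expect to report verify return code 0.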

This is what I get when running Elasticsearch at debug level:

[2016-06-29 20:03:18,306][DEBUG][bootstrap ] Linux seccomp filter installation successful, threads: [all]

[2016-06-29 20:03:18,311][DEBUG][bootstrap ] java.class.path: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar:/usr/share/elasticsearch/lib/spatial4j-0.5.jar:/usr/share/elasticsearch/lib/joda-convert-1.2.jar:/usr/share/elasticsearch/lib/log4j-1.2.17.jar:/usr/share/elasticsearch/lib/commons-cli-1.3.1.jar:/usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar:/usr/share/elasticsearch/lib/jsr166e-1.1.0.jar:/usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar:/usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar:/usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar:/usr/share/elasticsearch/lib/hppc-0.7.1.jar:/usr/share/elasticsearch/lib/jackson-core-2.6.6.jar:/usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar:/usr/share/elasticsearch/lib/compiler-0.8.13.jar:/usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar:/usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar:/usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar:/usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar:/usr/share/elasticsearch/lib/jna-4.1.0.jar:/usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar:/usr/share/elasticsearch/lib/netty-3.10.5.Final.jar:/usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar:/usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar:/usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar:/usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar:/usr/share/elasticsearch/lib/snakeyaml-1.15.jar:/usr/share/elasticsearch/lib/jts-1.13.jar:/usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar:/usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar:/usr/share/elasticsearch/lib/t-digest-3.0.jar:/usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar:/usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar:/usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar:/usr/share/elasticsearch/lib/joda-time-2.8.2.jar:/usr/share/elasticsearch/lib/lucene-join-5.5.0.jar:/usr/share/elasticsearch/lib/lucene-core-5.5.0.jar:/usr/share/elasticsearch/lib/guava-18.
0.jar:/usr/share/elasticsearch/lib/securesm-1.0.jar

[2016-06-29 20:03:18,312][DEBUG][bootstrap ] sun.boot.class.path: /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/resources.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/rt.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/sunrsasign.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jsse.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jce.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/charsets.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/rhino.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/jfr.jar:/usr/lib/jvm/java-7-openjdk-amd64/jre/classes

[2016-06-29 20:03:18,312][DEBUG][bootstrap ] classloader urls: [file:/usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar, file:/usr/share/elasticsearch/lib/spatial4j-0.5.jar, file:/usr/share/elasticsearch/lib/joda-convert-1.2.jar, file:/usr/share/elasticsearch/lib/log4j-1.2.17.jar, file:/usr/share/elasticsearch/lib/commons-cli-1.3.1.jar, file:/usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar, file:/usr/share/elasticsearch/lib/jsr166e-1.1.0.jar, file:/usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar, file:/usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar, file:/usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar, file:/usr/share/elasticsearch/lib/hppc-0.7.1.jar, file:/usr/share/elasticsearch/lib/jackson-core-2.6.6.jar, file:/usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar, file:/usr/share/elasticsearch/lib/compiler-0.8.13.jar, file:/usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar, file:/usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar, file:/usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar, file:/usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar, file:/usr/share/elasticsearch/lib/jna-4.1.0.jar, file:/usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar, file:/usr/share/elasticsearch/lib/netty-3.10.5.Final.jar, file:/usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar, file:/usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar, file:/usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar, file:/usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar, file:/usr/share/elasticsearch/lib/snakeyaml-1.15.jar, file:/usr/share/elasticsearch/lib/jts-1.13.jar, file:/usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar, file:/usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar, file:/usr/share/elasticsearch/lib/t-digest-3.0.jar, file:/usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar, file:/usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar, 
file:/usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar, file:/usr/share/elasticsearch/lib/joda-time-2.8.2.jar, file:/usr/share/elasticsearch/lib/lucene-join-5.5.0.jar, file:/usr/share/elasticsearch/lib/lucene-core-5.5.0.jar, file:/usr/share/elasticsearch/lib/guava-18.0.jar, file:/usr/share/elasticsearch/lib/securesm-1.0.jar]

[2016-06-29 20:03:18,325][DEBUG][bootstrap ] java.home: /usr/lib/jvm/java-7-openjdk-amd64/jre

[2016-06-29 20:03:18,325][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:18,367][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/spatial4j-0.5.jar

[2016-06-29 20:03:18,368][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-convert-1.2.jar

[2016-06-29 20:03:18,369][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/log4j-1.2.17.jar

[2016-06-29 20:03:18,371][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/commons-cli-1.3.1.jar

[2016-06-29 20:03:18,377][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar

[2016-06-29 20:03:18,385][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jsr166e-1.1.0.jar

[2016-06-29 20:03:18,385][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar

[2016-06-29 20:03:18,391][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar

[2016-06-29 20:03:18,400][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar

[2016-06-29 20:03:18,412][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/hppc-0.7.1.jar

[2016-06-29 20:03:18,430][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-core-2.6.6.jar

[2016-06-29 20:03:18,433][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar

[2016-06-29 20:03:18,438][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compiler-0.8.13.jar

[2016-06-29 20:03:18,439][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar

[2016-06-29 20:03:18,441][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar

[2016-06-29 20:03:18,442][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar

[2016-06-29 20:03:18,451][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar

[2016-06-29 20:03:18,452][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jna-4.1.0.jar

[2016-06-29 20:03:18,453][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar

[2016-06-29 20:03:18,454][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/netty-3.10.5.Final.jar

[2016-06-29 20:03:18,464][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar

[2016-06-29 20:03:18,465][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar

[2016-06-29 20:03:18,465][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar

[2016-06-29 20:03:18,466][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar

[2016-06-29 20:03:18,479][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/snakeyaml-1.15.jar

[2016-06-29 20:03:18,480][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jts-1.13.jar

[2016-06-29 20:03:18,482][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar

[2016-06-29 20:03:18,482][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar

[2016-06-29 20:03:18,491][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/t-digest-3.0.jar

[2016-06-29 20:03:18,492][DEBUG][bootstrap ] excluding duplicate classpath element: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:18,492][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar

[2016-06-29 20:03:18,499][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar

[2016-06-29 20:03:18,500][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-time-2.8.2.jar

[2016-06-29 20:03:18,501][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-join-5.5.0.jar

[2016-06-29 20:03:18,502][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-core-5.5.0.jar

[2016-06-29 20:03:18,513][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/guava-18.0.jar

[2016-06-29 20:03:18,523][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/securesm-1.0.jar

[2016-06-29 20:03:18,857][INFO ][node ] [Jonathan Richards] version[2.3.3], pid[4904], build[218bdf1/2016-05-17T15:40:04Z]

[2016-06-29 20:03:18,857][INFO ][node ] [Jonathan Richards] initializing …

[2016-06-29 20:03:18,858][DEBUG][node ] [Jonathan Richards] using config [/etc/elasticsearch], data [[/var/lib/elasticsearch]], logs [/var/log/elasticsearch], plugins [/usr/share/elasticsearch/plugins]

[2016-06-29 20:03:18,883][DEBUG][bootstrap ] java.home: /usr/lib/jvm/java-7-openjdk-amd64/jre

[2016-06-29 20:03:18,883][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:18,893][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/spatial4j-0.5.jar

[2016-06-29 20:03:18,899][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-convert-1.2.jar

[2016-06-29 20:03:18,899][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/log4j-1.2.17.jar

[2016-06-29 20:03:18,900][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/commons-cli-1.3.1.jar

[2016-06-29 20:03:18,902][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar

[2016-06-29 20:03:18,903][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jsr166e-1.1.0.jar

[2016-06-29 20:03:18,903][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar

[2016-06-29 20:03:18,903][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar

[2016-06-29 20:03:18,911][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar

[2016-06-29 20:03:18,913][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/hppc-0.7.1.jar

[2016-06-29 20:03:18,914][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-core-2.6.6.jar

[2016-06-29 20:03:18,914][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar

[2016-06-29 20:03:18,919][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compiler-0.8.13.jar

[2016-06-29 20:03:18,919][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar

[2016-06-29 20:03:18,920][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar

[2016-06-29 20:03:18,920][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar

[2016-06-29 20:03:18,921][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar

[2016-06-29 20:03:18,921][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jna-4.1.0.jar

[2016-06-29 20:03:18,922][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar

[2016-06-29 20:03:18,922][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/netty-3.10.5.Final.jar

[2016-06-29 20:03:18,932][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar

[2016-06-29 20:03:18,932][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar

[2016-06-29 20:03:18,933][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar

[2016-06-29 20:03:18,933][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar

[2016-06-29 20:03:18,934][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/snakeyaml-1.15.jar

[2016-06-29 20:03:18,943][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jts-1.13.jar

[2016-06-29 20:03:18,943][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar

[2016-06-29 20:03:18,951][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar

[2016-06-29 20:03:18,952][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/t-digest-3.0.jar

[2016-06-29 20:03:18,952][DEBUG][bootstrap ] excluding duplicate classpath element: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:18,952][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar

[2016-06-29 20:03:18,953][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar

[2016-06-29 20:03:18,953][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-time-2.8.2.jar

[2016-06-29 20:03:18,954][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-join-5.5.0.jar

[2016-06-29 20:03:18,954][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-core-5.5.0.jar

[2016-06-29 20:03:18,963][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/guava-18.0.jar

[2016-06-29 20:03:18,965][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/securesm-1.0.jar

[2016-06-29 20:03:18,965][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/reindex/reindex-2.3.3.jar

[2016-06-29 20:03:19,216][DEBUG][bootstrap ] java.home: /usr/lib/jvm/java-7-openjdk-amd64/jre

[2016-06-29 20:03:19,216][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:19,228][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/spatial4j-0.5.jar

[2016-06-29 20:03:19,228][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-convert-1.2.jar

[2016-06-29 20:03:19,229][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/log4j-1.2.17.jar

[2016-06-29 20:03:19,229][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/commons-cli-1.3.1.jar

[2016-06-29 20:03:19,230][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar

[2016-06-29 20:03:19,237][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jsr166e-1.1.0.jar

[2016-06-29 20:03:19,237][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar

[2016-06-29 20:03:19,238][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar

[2016-06-29 20:03:19,238][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar

[2016-06-29 20:03:19,239][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/hppc-0.7.1.jar

[2016-06-29 20:03:19,240][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-core-2.6.6.jar

[2016-06-29 20:03:19,241][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar

[2016-06-29 20:03:19,241][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compiler-0.8.13.jar

[2016-06-29 20:03:19,241][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar

[2016-06-29 20:03:19,242][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar

[2016-06-29 20:03:19,242][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar

[2016-06-29 20:03:19,251][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar

[2016-06-29 20:03:19,251][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jna-4.1.0.jar

[2016-06-29 20:03:19,252][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar

[2016-06-29 20:03:19,252][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/netty-3.10.5.Final.jar

[2016-06-29 20:03:19,253][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar

[2016-06-29 20:03:19,254][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar

[2016-06-29 20:03:19,254][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar

[2016-06-29 20:03:19,256][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar

[2016-06-29 20:03:19,257][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/snakeyaml-1.15.jar

[2016-06-29 20:03:19,257][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jts-1.13.jar

[2016-06-29 20:03:19,258][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar

[2016-06-29 20:03:19,258][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar

[2016-06-29 20:03:19,259][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/t-digest-3.0.jar

[2016-06-29 20:03:19,259][DEBUG][bootstrap ] excluding duplicate classpath element: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:19,259][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar

[2016-06-29 20:03:19,260][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar

[2016-06-29 20:03:19,260][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-time-2.8.2.jar

[2016-06-29 20:03:19,261][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-join-5.5.0.jar

[2016-06-29 20:03:19,261][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-core-5.5.0.jar

[2016-06-29 20:03:19,263][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/guava-18.0.jar

[2016-06-29 20:03:19,264][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/securesm-1.0.jar

[2016-06-29 20:03:19,265][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-expression/asm-commons-5.0.4.jar

[2016-06-29 20:03:19,265][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-expression/lang-expression-2.3.3.jar

[2016-06-29 20:03:19,265][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-expression/asm-5.0.4.jar

[2016-06-29 20:03:19,266][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-expression/lucene-expressions-5.5.0.jar

[2016-06-29 20:03:19,266][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-expression/antlr4-runtime-4.5.1-1.jar

[2016-06-29 20:03:19,491][DEBUG][bootstrap ] java.home: /usr/lib/jvm/java-7-openjdk-amd64/jre

[2016-06-29 20:03:19,491][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:19,496][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/spatial4j-0.5.jar

[2016-06-29 20:03:19,496][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-convert-1.2.jar

[2016-06-29 20:03:19,496][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/log4j-1.2.17.jar

[2016-06-29 20:03:19,497][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/commons-cli-1.3.1.jar

[2016-06-29 20:03:19,497][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar

[2016-06-29 20:03:19,498][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jsr166e-1.1.0.jar

[2016-06-29 20:03:19,498][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar

[2016-06-29 20:03:19,499][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar

[2016-06-29 20:03:19,499][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar

[2016-06-29 20:03:19,500][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/hppc-0.7.1.jar

[2016-06-29 20:03:19,501][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-core-2.6.6.jar

[2016-06-29 20:03:19,501][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar

[2016-06-29 20:03:19,502][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compiler-0.8.13.jar

[2016-06-29 20:03:19,502][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar

[2016-06-29 20:03:19,503][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar

[2016-06-29 20:03:19,503][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar

[2016-06-29 20:03:19,504][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar

[2016-06-29 20:03:19,504][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jna-4.1.0.jar

[2016-06-29 20:03:19,504][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar

[2016-06-29 20:03:19,505][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/netty-3.10.5.Final.jar

[2016-06-29 20:03:19,506][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar

[2016-06-29 20:03:19,507][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar

[2016-06-29 20:03:19,507][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar

[2016-06-29 20:03:19,511][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar

[2016-06-29 20:03:19,512][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/snakeyaml-1.15.jar

[2016-06-29 20:03:19,512][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jts-1.13.jar

[2016-06-29 20:03:19,513][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar

[2016-06-29 20:03:19,513][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar

[2016-06-29 20:03:19,514][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/t-digest-3.0.jar

[2016-06-29 20:03:19,514][DEBUG][bootstrap ] excluding duplicate classpath element: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:19,514][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar

[2016-06-29 20:03:19,514][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar

[2016-06-29 20:03:19,519][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-time-2.8.2.jar

[2016-06-29 20:03:19,521][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-join-5.5.0.jar

[2016-06-29 20:03:19,522][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-core-5.5.0.jar

[2016-06-29 20:03:19,523][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/guava-18.0.jar

[2016-06-29 20:03:19,525][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/securesm-1.0.jar

[2016-06-29 20:03:19,525][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-groovy/groovy-2.4.6-indy.jar

[2016-06-29 20:03:19,531][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/modules/lang-groovy/lang-groovy-2.3.3.jar

[2016-06-29 20:03:19,546][DEBUG][bootstrap ] java.home: /usr/lib/jvm/java-7-openjdk-amd64/jre

[2016-06-29 20:03:19,546][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:19,551][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/spatial4j-0.5.jar

[2016-06-29 20:03:19,551][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-convert-1.2.jar

[2016-06-29 20:03:19,559][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/log4j-1.2.17.jar

[2016-06-29 20:03:19,559][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/commons-cli-1.3.1.jar

[2016-06-29 20:03:19,560][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/apache-log4j-extras-1.2.17.jar

[2016-06-29 20:03:19,560][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jsr166e-1.1.0.jar

[2016-06-29 20:03:19,561][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-misc-5.5.0.jar

[2016-06-29 20:03:19,561][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-yaml-2.6.6.jar

[2016-06-29 20:03:19,562][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-analyzers-common-5.5.0.jar

[2016-06-29 20:03:19,563][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/hppc-0.7.1.jar

[2016-06-29 20:03:19,564][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-core-2.6.6.jar

[2016-06-29 20:03:19,564][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-sandbox-5.5.0.jar

[2016-06-29 20:03:19,565][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compiler-0.8.13.jar

[2016-06-29 20:03:19,565][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queryparser-5.5.0.jar

[2016-06-29 20:03:19,566][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-suggest-5.5.0.jar

[2016-06-29 20:03:19,566][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-grouping-5.5.0.jar

[2016-06-29 20:03:19,566][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/compress-lzf-1.0.2.jar

[2016-06-29 20:03:19,568][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jna-4.1.0.jar

[2016-06-29 20:03:19,569][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial-5.5.0.jar

[2016-06-29 20:03:19,569][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/netty-3.10.5.Final.jar

[2016-06-29 20:03:19,571][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-smile-2.6.6.jar

[2016-06-29 20:03:19,571][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jackson-dataformat-cbor-2.6.6.jar

[2016-06-29 20:03:19,571][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-memory-5.5.0.jar

[2016-06-29 20:03:19,572][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-backward-codecs-5.5.0.jar

[2016-06-29 20:03:19,572][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/snakeyaml-1.15.jar

[2016-06-29 20:03:19,573][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/jts-1.13.jar

[2016-06-29 20:03:19,573][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/HdrHistogram-2.1.6.jar

[2016-06-29 20:03:19,576][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-highlighter-5.5.0.jar

[2016-06-29 20:03:19,576][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/t-digest-3.0.jar

[2016-06-29 20:03:19,576][DEBUG][bootstrap ] excluding duplicate classpath element: /usr/share/elasticsearch/lib/elasticsearch-2.3.3.jar

[2016-06-29 20:03:19,577][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-queries-5.5.0.jar

[2016-06-29 20:03:19,577][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-spatial3d-5.5.0.jar

[2016-06-29 20:03:19,577][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/joda-time-2.8.2.jar

[2016-06-29 20:03:19,578][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-join-5.5.0.jar

[2016-06-29 20:03:19,579][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/lucene-core-5.5.0.jar

[2016-06-29 20:03:19,580][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/guava-18.0.jar

[2016-06-29 20:03:19,582][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/lib/securesm-1.0.jar

[2016-06-29 20:03:19,582][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-ssl/netty-buffer-4.0.37.Final.jar

[2016-06-29 20:03:19,582][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-ssl/search-guard-ssl-2.3.3.13.jar

[2016-06-29 20:03:19,583][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-ssl/netty-handler-4.0.37.Final.jar

[2016-06-29 20:03:19,583][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-ssl/netty-transport-4.0.37.Final.jar

[2016-06-29 20:03:19,584][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-ssl/netty-common-4.0.37.Final.jar

[2016-06-29 20:03:19,585][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-ssl/netty-codec-4.0.37.Final.jar

[2016-06-29 20:03:19,585][DEBUG][bootstrap ] examining jar: /usr/share/elasticsearch/plugins/search-guard-2/search-guard-2-2.3.3.1.jar

[2016-06-29 20:03:19,633][DEBUG][io.netty.util.internal.logging.InternalLoggerFactory] Using Log4J as the default logging framework

[2016-06-29 20:03:19,639][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.Buffer.address: available

[2016-06-29 20:03:19,640][DEBUG][io.netty.util.internal.PlatformDependent0] sun.misc.Unsafe.theUnsafe: available

[2016-06-29 20:03:19,640][DEBUG][io.netty.util.internal.PlatformDependent0] sun.misc.Unsafe.copyMemory: available

[2016-06-29 20:03:19,641][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.Bits.unaligned: true

[2016-06-29 20:03:19,641][DEBUG][io.netty.util.internal.PlatformDependent0] java.nio.DirectByteBuffer.(long, int): available

[2016-06-29 20:03:19,644][DEBUG][io.netty.util.internal.PlatformDependent] Java version: 7

[2016-06-29 20:03:19,644][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.noUnsafe: false

[2016-06-29 20:03:19,644][DEBUG][io.netty.util.internal.PlatformDependent] sun.misc.Unsafe: available

[2016-06-29 20:03:19,644][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.noJavassist: false

[2016-06-29 20:03:19,645][DEBUG][io.netty.util.internal.PlatformDependent] Javassist: unavailable

[2016-06-29 20:03:19,645][DEBUG][io.netty.util.internal.PlatformDependent] You don't have Javassist in your class path or you don't have enough permission to load dynamically generated classes. Please check the configuration for better performance.

[2016-06-29 20:03:19,646][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.tmpdir: /tmp (java.io.tmpdir)

[2016-06-29 20:03:19,646][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.bitMode: 64 (sun.arch.data.model)

[2016-06-29 20:03:19,646][DEBUG][io.netty.util.internal.PlatformDependent] -Dio.netty.noPreferDirect: false

[2016-06-29 20:03:19,646][DEBUG][io.netty.util.internal.PlatformDependent] io.netty.maxDirectMemory: 1065025536 bytes

[2016-06-29 20:03:19,649][DEBUG][io.netty.handler.ssl.OpenSsl] netty-tcnative not in the classpath; OpenSslEngine will be unavailable.

[2016-06-29 20:03:19,665][INFO ][com.floragunn.searchguard.ssl.SearchGuardSSLPlugin] Search Guard 2 plugin also available

[2016-06-29 20:03:19,685][INFO ][plugins ] [Jonathan Richards] modules [lang-groovy, reindex, lang-expression], plugins [search-guard-2, search-guard-ssl], sites

[2016-06-29 20:03:19,739][DEBUG][env ] [Jonathan Richards] using node location [[NodePath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0, spins=false}]], local_node_id [0]

[2016-06-29 20:03:19,743][DEBUG][env ] [Jonathan Richards] node data locations details:

-> /var/lib/elasticsearch/thrillshare_cluster1/nodes/0, free_space [5.7gb], usable_space [5.2gb], total_space [7.7gb], spins? [no], mount [/ (/dev/xvda1)], type [ext4]

[2016-06-29 20:03:19,743][INFO ][env ] [Jonathan Richards] heap size [1015.6mb], compressed ordinary object pointers [true]

[2016-06-29 20:03:19,743][WARN ][env ] [Jonathan Richards] max file descriptors [65535] for elasticsearch process likely too low, consider increasing to at least [65536]

[2016-06-29 20:03:19,765][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [generic], type [cached], keep_alive [30s]

[2016-06-29 20:03:19,771][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [index], type [fixed], size [1], queue_size [200]

[2016-06-29 20:03:19,773][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [fetch_shard_store], type [scaling], min [1], size [2], keep_alive [5m]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [get], type [fixed], size [1], queue_size [1k]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [snapshot], type [scaling], min [1], size [1], keep_alive [5m]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [force_merge], type [fixed], size [1], queue_size [null]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [suggest], type [fixed], size [1], queue_size [1k]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [bulk], type [fixed], size [1], queue_size [50]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [warmer], type [scaling], min [1], size [1], keep_alive [5m]

[2016-06-29 20:03:19,774][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [flush], type [scaling], min [1], size [1], keep_alive [5m]

[2016-06-29 20:03:19,775][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [search], type [fixed], size [2], queue_size [1k]

[2016-06-29 20:03:19,775][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [fetch_shard_started], type [scaling], min [1], size [2], keep_alive [5m]

[2016-06-29 20:03:19,775][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [listener], type [fixed], size [1], queue_size [null]

[2016-06-29 20:03:19,775][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [percolate], type [fixed], size [1], queue_size [1k]

[2016-06-29 20:03:19,775][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [management], type [scaling], min [1], size [5], keep_alive [5m]

[2016-06-29 20:03:19,775][DEBUG][threadpool ] [Jonathan Richards] creating thread_pool [refresh], type [scaling], min [1], size [1], keep_alive [5m]

[2016-06-29 20:03:19,789][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Open SSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNotFoundException: org.apache.tomcat.jni.SSL

[2016-06-29 20:03:19,789][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Open SSL not available (this is not an error, we simply fallback to built-in JDK SSL) because of java.lang.ClassNotFoundException: org.apache.tomcat.jni.SSL

[2016-06-29 20:03:20,183][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Config directory is /etc/elasticsearch/, from there the key- and truststore files are resolved relatively

[2016-06-29 20:03:20,216][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] Alias thrillshare_1: is a certificate entry?false/is a key entry?true

[2016-06-29 20:03:20,216][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] No alias given, will trust all of the certificates in the store

[2016-06-29 20:03:20,228][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Transport keystore subject DN no. 0 CN=52.72.23.236, OU=ELK, O=Apptegy, L=thrillshare, C=US

[2016-06-29 20:03:20,228][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Transport keystore subject DN no. 1 CN=Thrillshare, OU=ELK, O=Apptegy, DC=apptegy, DC=net

[2016-06-29 20:03:20,228][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Transport keystore subject DN no. 2 CN=Thrillshare, OU=ELK, O=Apptegy, DC=apptegy, DC=net

[2016-06-29 20:03:20,228][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] Transport keystore subject DN no. 3 CN=52.72.23.236, OU=ELK, O=Apptegy, L=thrillshare, C=US

[2016-06-29 20:03:20,229][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] Alias root-ca-chain: is a certificate entry?true/is a key entry?false

[2016-06-29 20:03:20,229][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] No alias given, will trust all of the certificates in the store

[2016-06-29 20:03:20,278][DEBUG][io.netty.handler.ssl.JdkSslContext] Default protocols (JDK): [TLSv1.2, TLSv1.1, TLSv1]

[2016-06-29 20:03:20,279][DEBUG][io.netty.handler.ssl.JdkSslContext] Default cipher suites (JDK): [TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_RSA_WITH_AES_128_CBC_SHA, TLS_RSA_WITH_AES_256_CBC_SHA, SSL_RSA_WITH_3DES_EDE_CBC_SHA]

[2016-06-29 20:03:20,288][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] HTTPS client auth mode OPTIONAL

[2016-06-29 20:03:20,289][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] Alias thrillshare_1: is a certificate entry?false/is a key entry?true

[2016-06-29 20:03:20,289][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] No alias given, will trust all of the certificates in the store

[2016-06-29 20:03:20,289][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] HTTP keystore subject DN no. 0 CN=52.72.23.236, OU=ELK, O=Apptegy, L=thrillshare, C=US

[2016-06-29 20:03:20,289][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] HTTP keystore subject DN no. 1 CN=Thrillshare, OU=ELK, O=Apptegy, DC=apptegy, DC=net

[2016-06-29 20:03:20,289][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] HTTP keystore subject DN no. 2 CN=Thrillshare, OU=ELK, O=Apptegy, DC=apptegy, DC=net

[2016-06-29 20:03:20,289][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] HTTP keystore subject DN no. 3 CN=52.72.23.236, OU=ELK, O=Apptegy, L=thrillshare, C=US

[2016-06-29 20:03:20,290][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] Alias root-ca-chain: is a certificate entry?true/is a key entry?false

[2016-06-29 20:03:20,290][DEBUG][com.floragunn.searchguard.ssl.util.SSLCertificateHelper] No alias given, will trust all of the certificates in the store

[2016-06-29 20:03:20,291][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] sslTransportClientProvider:JDK with ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA]

[2016-06-29 20:03:20,291][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] sslTransportServerProvider:JDK with ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA]

[2016-06-29 20:03:20,292][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] sslHTTPProvider:JDK with ciphers [TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384, TLS_DHE_RSA_WITH_AES_256_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_RSA_WITH_AES_256_CBC_SHA, TLS_DHE_DSS_WITH_AES_256_CBC_SHA, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_RSA_WITH_AES_128_CBC_SHA256, TLS_DHE_DSS_WITH_AES_128_CBC_SHA256, TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA, TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA, TLS_DHE_RSA_WITH_AES_128_CBC_SHA]

[2016-06-29 20:03:20,292][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] sslTransport protocols [TLSv1.2, TLSv1.1]

[2016-06-29 20:03:20,292][INFO ][com.floragunn.searchguard.ssl.SearchGuardKeyStore] sslHTTP protocols [TLSv1.2, TLSv1.1]

[2016-06-29 20:03:20,525][INFO ][http ] [Jonathan Richards] Using [org.elasticsearch.http.netty.NettyHttpServerTransport] as http transport, overridden by [search-guard2]

[2016-06-29 20:03:20,621][INFO ][com.floragunn.searchguard.configuration.ConfigurationModule] FLS/DLS valve not bound (noop)

[2016-06-29 20:03:20,622][INFO ][com.floragunn.searchguard.auditlog.AuditLogModule] Auditlog not available

[2016-06-29 20:03:20,731][INFO ][transport ] [Jonathan Richards] Using [com.floragunn.searchguard.transport.SearchGuardTransportService] as transport service, overridden by [search-guard2]

[2016-06-29 20:03:20,731][INFO ][transport ] [Jonathan Richards] Using [com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] as transport, overridden by [search-guard-ssl]

[2016-06-29 20:03:22,081][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US

[2016-06-29 20:03:22,082][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 1 admin DN's [CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US]

[2016-06-29 20:03:22,082][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 0 impersonation DN's {}

[2016-06-29 20:03:22,092][DEBUG][common.network ] configuration:

lo

inet 127.0.0.1 netmask:255.255.255.255 scope:host

inet6 ::1 prefixlen:128 scope:host

UP LOOPBACK mtu:65536 index:1

eth0

inet 10.0.63.145 netmask:255.255.248.0 broadcast:10.0.63.255 scope:site

inet6 fe80::1008:d2ff:fe57:71e1 prefixlen:64 scope:link

hardware 12:08:D2:57:71:E1

UP MULTICAST mtu:9001 index:2

[2016-06-29 20:03:22,100][DEBUG][common.netty ] using gathering [true]

[2016-06-29 20:03:22,118][DEBUG][discovery.zen.elect ] [Jonathan Richards] using minimum_master_nodes [-1]

[2016-06-29 20:03:22,119][DEBUG][discovery.zen.ping.unicast] [Jonathan Richards] using initial hosts [127.0.0.1, [::1]], with concurrent_connects [10]

[2016-06-29 20:03:22,124][DEBUG][discovery.zen ] [Jonathan Richards] using ping.timeout [3s], join.timeout [1m], master_election.filter_client [true], master_election.filter_data [false]

[2016-06-29 20:03:22,125][DEBUG][discovery.zen.fd ] [Jonathan Richards] [master] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]

[2016-06-29 20:03:22,127][DEBUG][discovery.zen.fd ] [Jonathan Richards] [node ] uses ping_interval [1s], ping_timeout [30s], ping_retries [3]

[2016-06-29 20:03:22,184][DEBUG][script ] [Jonathan Richards] using script cache with max_size [100], expire [null]

[2016-06-29 20:03:22,264][DEBUG][monitor.jvm ] [Jonathan Richards] enabled [true], interval [1s], gc_threshold [{old=GcThreshold{name='old', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, default=GcThreshold{name='default', warnThreshold=10000, infoThreshold=5000, debugThreshold=2000}, young=GcThreshold{name='young', warnThreshold=1000, infoThreshold=700, debugThreshold=400}}]

[2016-06-29 20:03:22,266][DEBUG][monitor.os ] [Jonathan Richards] Using probe [org.elasticsearch.monitor.os.OsProbe@1108d796] with refresh_interval [1s]

[2016-06-29 20:03:22,268][DEBUG][monitor.process ] [Jonathan Richards] Using probe [org.elasticsearch.monitor.process.ProcessProbe@6ba42774] with refresh_interval [1s]

[2016-06-29 20:03:22,274][DEBUG][monitor.jvm ] [Jonathan Richards] Using refresh_interval [1s]

[2016-06-29 20:03:22,274][DEBUG][monitor.fs ] [Jonathan Richards] Using probe [org.elasticsearch.monitor.fs.FsProbe@6dde8fae] with refresh_interval [1s]

[2016-06-29 20:03:22,307][DEBUG][cluster.routing.allocation.decider] [Jonathan Richards] using node_concurrent_recoveries [2], node_initial_primaries_recoveries [4]

[2016-06-29 20:03:22,309][DEBUG][cluster.routing.allocation.decider] [Jonathan Richards] using [cluster_concurrent_rebalance] with [2]

[2016-06-29 20:03:22,344][DEBUG][cluster.routing.allocation.decider] [Jonathan Richards] using [cluster.routing.allocation.allow_rebalance] with [indices_all_active]

[2016-06-29 20:03:22,347][DEBUG][gateway ] [Jonathan Richards] using initial_shards [quorum]

[2016-06-29 20:03:22,429][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US

[2016-06-29 20:03:22,429][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 1 admin DN's [CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US]

[2016-06-29 20:03:22,429][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 0 impersonation DN's {}

[2016-06-29 20:03:22,430][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Add config listener class com.floragunn.searchguard.configuration.ActionGroupHolder

[2016-06-29 20:03:22,430][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Add config listener class com.floragunn.searchguard.configuration.PrivilegesEvaluator

[2016-06-29 20:03:22,430][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Add config listener class com.floragunn.searchguard.configuration.PrivilegesEvaluator

[2016-06-29 20:03:22,430][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Add config listener class com.floragunn.searchguard.auth.internal.InternalAuthenticationBackend

[2016-06-29 20:03:22,432][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US

[2016-06-29 20:03:22,432][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 1 admin DN's [CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US]

[2016-06-29 20:03:22,432][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 0 impersonation DN's {}

[2016-06-29 20:03:22,432][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Add config listener class com.floragunn.searchguard.http.XFFResolver

[2016-06-29 20:03:22,434][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Add config listener class com.floragunn.searchguard.auth.BackendRegistry

[2016-06-29 20:03:22,480][DEBUG][com.floragunn.searchguard.http.SearchGuardHttpServerTransport] [Jonathan Richards] using max_chunk_size[8kb], max_header_size[8kb], max_initial_line_length[4kb], max_content_length[100mb], receive_predictor[512kb->512kb], pipelining[true], pipelining_max_events[10000]

[2016-06-29 20:03:22,498][DEBUG][indices.recovery ] [Jonathan Richards] using max_bytes_per_sec[40mb], concurrent_streams [3], file_chunk_size [512kb], translog_size [512kb], translog_ops [1000], and compress [true]

[2016-06-29 20:03:22,509][DEBUG][indices.store ] [Jonathan Richards] using indices.store.throttle.type [NONE], with index.store.throttle.max_bytes_per_sec [10gb]

[2016-06-29 20:03:22,510][DEBUG][indices.memory ] [Jonathan Richards] using indexing buffer size [101.5mb], with indices.memory.min_shard_index_buffer_size [4mb], indices.memory.max_shard_index_buffer_size [512mb], indices.memory.shard_inactive_time [5m], indices.memory.interval [30s]

[2016-06-29 20:03:22,515][DEBUG][indices.cache.query ] [Jonathan Richards] using [node] query cache with size [10%], actual_size [101.5mb], max filter count [1000]

[2016-06-29 20:03:22,517][DEBUG][indices.fielddata.cache ] [Jonathan Richards] using size [-1] [-1b]

[2016-06-29 20:03:22,556][DEBUG][common.compress.lzf ] using decoder[VanillaChunkDecoder]

[2016-06-29 20:03:22,607][DEBUG][gateway ] [Jonathan Richards] took 18ms to load state

[2016-06-29 20:03:22,609][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US

[2016-06-29 20:03:22,609][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 1 admin DN's [CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US]

[2016-06-29 20:03:22,609][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 0 impersonation DN's {}

[2016-06-29 20:03:22,613][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US

[2016-06-29 20:03:22,614][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 1 admin DN's [CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US]

[2016-06-29 20:03:22,614][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 0 impersonation DN's {}

[2016-06-29 20:03:22,614][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US

[2016-06-29 20:03:22,614][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 1 admin DN's [CN=Admin, OU=ELK, O=Apptegy, L=LR, ST=AR, C=US]

[2016-06-29 20:03:22,614][DEBUG][com.floragunn.searchguard.configuration.AdminDNs] Loaded 0 impersonation DN's {}

[2016-06-29 20:03:22,621][INFO ][node ] [Jonathan Richards] initialized

[2016-06-29 20:03:22,621][INFO ][node ] [Jonathan Richards] starting ...

[2016-06-29 20:03:22,636][DEBUG][netty.channel.socket.nio.SelectorUtil] Using select timeout of 500

[2016-06-29 20:03:22,636][DEBUG][netty.channel.socket.nio.SelectorUtil] Epoll-bug workaround enabled = false

[2016-06-29 20:03:22,654][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] Node client configured for SSL

[2016-06-29 20:03:22,658][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] using profile[default], worker_count[2], port[9300-9400], bind_host[null], publish_host[null], compress[false], connect_timeout[30s], connections_per_node[2/3/6/1/1], receive_predictor[512kb->512kb]

[2016-06-29 20:03:22,662][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] Node server configured for SSL

[2016-06-29 20:03:22,663][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] binding server bootstrap to: 127.0.0.1

[2016-06-29 20:03:22,684][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] Bound profile [default] to address {127.0.0.1:9300}

[2016-06-29 20:03:22,684][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] Bound profile [default] to address {[::1]:9300}

[2016-06-29 20:03:22,685][INFO ][com.floragunn.searchguard.transport.SearchGuardTransportService] [Jonathan Richards] publish_address {127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}, {[::1]:9300}

[2016-06-29 20:03:22,694][INFO ][discovery ] [Jonathan Richards] thrillshare_cluster1/chHrisWBTUeQDsxygsdXeg

[2016-06-29 20:03:22,702][DEBUG][cluster.service ] [Jonathan Richards] processing [initial_join]: execute

[2016-06-29 20:03:22,710][DEBUG][cluster.service ] [Jonathan Richards] processing [initial_join]: took 6ms no change in cluster_state

[2016-06-29 20:03:22,715][DEBUG][action.admin.cluster.health] [Jonathan Richards] no known master node, scheduling a retry

[2016-06-29 20:03:22,771][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numHeapArenas: 2

[2016-06-29 20:03:22,771][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.numDirectArenas: 2

[2016-06-29 20:03:22,771][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.pageSize: 8192

[2016-06-29 20:03:22,771][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxOrder: 11

[2016-06-29 20:03:22,772][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.chunkSize: 16777216

[2016-06-29 20:03:22,772][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.tinyCacheSize: 512

[2016-06-29 20:03:22,772][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.smallCacheSize: 256

[2016-06-29 20:03:22,772][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.normalCacheSize: 64

[2016-06-29 20:03:22,772][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.maxCachedBufferCapacity: 32768

[2016-06-29 20:03:22,772][DEBUG][io.netty.buffer.PooledByteBufAllocator] -Dio.netty.allocator.cacheTrimInterval: 8192

[2016-06-29 20:03:22,868][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] connected to node [{#zen_unicast_6#}{::1}{[::1]:9300}]

[2016-06-29 20:03:23,272][DEBUG][netty.handler.ssl.SslHandler] [id: 0x87837445, /0:0:0:0:0:0:0:1:53339 => /0:0:0:0:0:0:0:1:9300] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

[2016-06-29 20:03:23,272][DEBUG][netty.handler.ssl.SslHandler] [id: 0x1035d4c8, /0:0:0:0:0:0:0:1:53339 => /0:0:0:0:0:0:0:1:9300] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384

[2016-06-29 20:03:25,822][DEBUG][discovery.zen ] [Jonathan Richards] filtered ping responses: (filter_client[true], filter_data[false]) {none}

[2016-06-29 20:03:25,822][DEBUG][discovery.zen ] [Jonathan Richards] elected as master, waiting for incoming joins ([0] needed)

[2016-06-29 20:03:25,823][DEBUG][cluster.service ] [Jonathan Richards] processing [zen-disco-join(elected_as_master, [0] joins received)]: execute

[2016-06-29 20:03:25,827][DEBUG][com.floragunn.searchguard.ssl.transport.SearchGuardSSLNettyTransport] [Jonathan Richards] disconnecting from [{#zen_unicast_6#}{::1}{[::1]:9300}] due to explicit disconnect call

[2016-06-29 20:03:25,833][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [1], source [zen-disco-join(elected_as_master, [0] joins received)]

[2016-06-29 20:03:25,834][INFO ][cluster.service ] [Jonathan Richards] new_master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)

[2016-06-29 20:03:25,834][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [1]

[2016-06-29 20:03:25,834][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 1

[2016-06-29 20:03:25,867][DEBUG][cluster.service ] [Jonathan Richards] processing [zen-disco-join(elected_as_master, [0] joins received)]: took 43ms done applying updated cluster_state (version: 1, uuid: 00TzEKKQQIqjnavesX9hQA)

[2016-06-29 20:03:25,880][DEBUG][com.floragunn.searchguard.http.SearchGuardHttpServerTransport] [Jonathan Richards] Bound http to address {127.0.0.1:9200}

[2016-06-29 20:03:25,883][DEBUG][com.floragunn.searchguard.http.SearchGuardHttpServerTransport] [Jonathan Richards] Bound http to address {[::1]:9200}

[2016-06-29 20:03:25,884][INFO ][http ] [Jonathan Richards] publish_address {127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}, {[::1]:9200}

[2016-06-29 20:03:25,884][INFO ][node ] [Jonathan Richards] started

[2016-06-29 20:03:25,888][DEBUG][cluster.service ] [Jonathan Richards] processing [local-gateway-elected-state]: execute

[2016-06-29 20:03:25,908][DEBUG][cluster.routing.allocation.allocator] [Jonathan Richards] skipping rebalance due to in-flight shard/store fetches

[2016-06-29 20:03:25,909][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [2], source [local-gateway-elected-state]

[2016-06-29 20:03:25,909][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [2]

[2016-06-29 20:03:25,909][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 2

[2016-06-29 20:03:25,909][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][0] shard state info found: [version [24], primary [true]]

[2016-06-29 20:03:25,912][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][1] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/1], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/1]

[2016-06-29 20:03:25,914][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][3] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/3], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/3]

[2016-06-29 20:03:25,979][INFO ][gateway ] [Jonathan Richards] recovered [3] indices into cluster_state

[2016-06-29 20:03:25,979][DEBUG][cluster.service ] [Jonathan Richards] processing [local-gateway-elected-state]: took 90ms done applying updated cluster_state (version: 2, uuid: GVzZrsTERluW-TIF3OhiZQ)

[2016-06-29 20:03:25,979][DEBUG][cluster.service ] [Jonathan Richards] processing [cluster_reroute(async_shard_fetch)]: execute

[2016-06-29 20:03:25,980][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][0] found 1 allocations of [filebeat-2016.06.29][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [24]

[2016-06-29 20:03:25,984][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][0]: allocating [[filebeat-2016.06.29][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:25,985][DEBUG][cluster.routing.allocation.allocator] [Jonathan Richards] skipping rebalance due to in-flight shard/store fetches

[2016-06-29 20:03:25,986][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [3], source [cluster_reroute(async_shard_fetch)]

[2016-06-29 20:03:25,986][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [3]

[2016-06-29 20:03:25,987][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 3

[2016-06-29 20:03:25,989][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][3] shard state info found: [version [24], primary [true]]

[2016-06-29 20:03:25,989][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][1] shard state info found: [version [24], primary [true]]

[2016-06-29 20:03:25,991][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][4] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/4], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/4]

[2016-06-29 20:03:25,994][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][2] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/2], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/2]

[2016-06-29 20:03:25,989][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29] creating index

[2016-06-29 20:03:25,999][DEBUG][indices ] [Jonathan Richards] creating Index [filebeat-2016.06.29], shards [5]/[1]

[2016-06-29 20:03:26,006][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][4] shard state info found: [version [24], primary [true]]

[2016-06-29 20:03:26,009][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][2] shard state info found: [version [24], primary [true]]

[2016-06-29 20:03:26,012][DEBUG][gateway ] [Jonathan Richards] [.kibana][0] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/.kibana/0], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/.kibana/0]

[2016-06-29 20:03:26,014][DEBUG][gateway ] [Jonathan Richards] [.kibana][0] shard state info found: [version [40], primary [true]]

[2016-06-29 20:03:26,058][DEBUG][gateway ] [Jonathan Richards] [searchguard][0] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/searchguard/0], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/searchguard/0]

[2016-06-29 20:03:26,064][DEBUG][gateway ] [Jonathan Richards] [searchguard][0] shard state info found: [version [44], primary [true]]

[2016-06-29 20:03:26,156][DEBUG][index.store ] [Jonathan Richards] [filebeat-2016.06.29] using index.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]

[2016-06-29 20:03:26,182][DEBUG][index.mapper ] [Jonathan Richards] [filebeat-2016.06.29] using dynamic[true]

[2016-06-29 20:03:26,208][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29] adding mapping [log], source [{"log":{"properties":{"@timestamp":{"type":"date","format":"strict_date_optional_time||epoch_millis"},"beat":{"properties":{"hostname":{"type":"string"},"name":{"type":"string"}}},"count":{"type":"long"},"input_type":{"type":"string"},"message":{"type":"string"},"offset":{"type":"long"},"source":{"type":"string"},"type":{"type":"string"}}}}]

[2016-06-29 20:03:26,288][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29][0] creating shard

[2016-06-29 20:03:26,290][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][0] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/0], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/0]

[2016-06-29 20:03:26,290][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][0] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/0, indexUUID='n3aFwaqZTCuHhvUsOrjMxg', shard=[filebeat-2016.06.29][0]}]

[2016-06-29 20:03:26,290][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] creating shard_id [filebeat-2016.06.29][0]

[2016-06-29 20:03:26,294][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:26,349][DEBUG][index.store ] [Jonathan Richards] [filebeat-2016.06.29][0] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:26,350][DEBUG][index.deletionpolicy ] [Jonathan Richards] [filebeat-2016.06.29][0] Using [keep_only_last] deletion policy

[2016-06-29 20:03:26,359][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:26,359][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] state: [CREATED]

[2016-06-29 20:03:26,364][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:26,368][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:26,369][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] starting recovery from shard_store ...

[2016-06-29 20:03:26,374][DEBUG][cluster.service ] [Jonathan Richards] processing [cluster_reroute(async_shard_fetch)]: took 395ms done applying updated cluster_state (version: 3, uuid: TpfGJ0QvTzKCAFdt0sX6eg)

[2016-06-29 20:03:26,374][DEBUG][cluster.service ] [Jonathan Richards] processing [cluster_reroute(async_shard_fetch)]: execute

[2016-06-29 20:03:26,375][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][4] found 1 allocations of [filebeat-2016.06.29][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [24]

[2016-06-29 20:03:26,375][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][4]: allocating [[filebeat-2016.06.29][4], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:26,375][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][3] found 1 allocations of [filebeat-2016.06.29][3], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [24]

[2016-06-29 20:03:26,376][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][3]: allocating [[filebeat-2016.06.29][3], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:26,376][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][1] found 1 allocations of [filebeat-2016.06.29][1], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [24]

[2016-06-29 20:03:26,376][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][1]: allocating [[filebeat-2016.06.29][1], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:26,376][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][2] found 1 allocations of [filebeat-2016.06.29][2], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [24]

[2016-06-29 20:03:26,383][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][2]: throttling allocation [[filebeat-2016.06.29][2], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [[{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}]] on primary allocation

[2016-06-29 20:03:26,383][DEBUG][gateway ] [Jonathan Richards] [.kibana][0] found 1 allocations of [.kibana][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [40]

[2016-06-29 20:03:26,384][DEBUG][gateway ] [Jonathan Richards] [.kibana][0]: throttling allocation [[.kibana][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [[{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}]] on primary allocation

[2016-06-29 20:03:26,384][DEBUG][gateway ] [Jonathan Richards] [searchguard][0] found 1 allocations of [searchguard][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]], highest version: [44]

[2016-06-29 20:03:26,384][DEBUG][gateway ] [Jonathan Richards] [searchguard][0]: throttling allocation [[searchguard][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]] to [[{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}]] on primary allocation

[2016-06-29 20:03:26,385][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [4], source [cluster_reroute(async_shard_fetch)]

[2016-06-29 20:03:26,385][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [4]

[2016-06-29 20:03:26,394][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 4

[2016-06-29 20:03:26,395][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29][1] creating shard

[2016-06-29 20:03:26,396][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][1] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/1], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/1]

[2016-06-29 20:03:26,396][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][1] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/1, indexUUID='n3aFwaqZTCuHhvUsOrjMxg', shard=[filebeat-2016.06.29][1]}]

[2016-06-29 20:03:26,396][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] creating shard_id [filebeat-2016.06.29][1]

[2016-06-29 20:03:26,397][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:26,394][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [1] active shards, each shard set to indexing=[101.5mb], translog=[64kb]

[2016-06-29 20:03:26,398][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,421][DEBUG][index.store ] [Jonathan Richards] [filebeat-2016.06.29][1] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:26,422][DEBUG][index.deletionpolicy ] [Jonathan Richards] [filebeat-2016.06.29][1] Using [keep_only_last] deletion policy

[2016-06-29 20:03:26,427][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:26,427][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] state: [CREATED]

[2016-06-29 20:03:26,427][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][1] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:26,428][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:26,428][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] starting recovery from shard_store ...

[2016-06-29 20:03:26,433][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29][4] creating shard

[2016-06-29 20:03:26,433][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][4] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/4], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/4]

[2016-06-29 20:03:26,433][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][4] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/4, indexUUID='n3aFwaqZTCuHhvUsOrjMxg', shard=[filebeat-2016.06.29][4]}]

[2016-06-29 20:03:26,433][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] creating shard_id [filebeat-2016.06.29][4]

[2016-06-29 20:03:26,434][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:26,451][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [2] active shards, each shard set to indexing=[50.7mb], translog=[64kb]

[2016-06-29 20:03:26,451][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,451][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,475][DEBUG][index.store ] [Jonathan Richards] [filebeat-2016.06.29][4] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:26,476][DEBUG][index.deletionpolicy ] [Jonathan Richards] [filebeat-2016.06.29][4] Using [keep_only_last] deletion policy

[2016-06-29 20:03:26,476][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:26,476][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] state: [CREATED]

[2016-06-29 20:03:26,477][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][4] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:26,477][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:26,487][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] starting recovery from shard_store ...

[2016-06-29 20:03:26,491][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29][3] creating shard

[2016-06-29 20:03:26,491][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][3] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/3], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/3]

[2016-06-29 20:03:26,491][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][3] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/3, indexUUID='n3aFwaqZTCuHhvUsOrjMxg', shard=[filebeat-2016.06.29][3]}]

[2016-06-29 20:03:26,491][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] creating shard_id [filebeat-2016.06.29][3]

[2016-06-29 20:03:26,492][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:26,496][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [3] active shards, each shard set to indexing=[33.8mb], translog=[64kb]

[2016-06-29 20:03:26,496][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,496][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,496][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,538][DEBUG][index.store ] [Jonathan Richards] [filebeat-2016.06.29][3] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:26,538][DEBUG][index.deletionpolicy ] [Jonathan Richards] [filebeat-2016.06.29][3] Using [keep_only_last] deletion policy

[2016-06-29 20:03:26,549][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:26,549][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] state: [CREATED]

[2016-06-29 20:03:26,550][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][3] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:26,550][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:26,557][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] starting recovery from shard_store ...

[2016-06-29 20:03:26,564][DEBUG][cluster.service ] [Jonathan Richards] processing [cluster_reroute(async_shard_fetch)]: took 189ms done applying updated cluster_state (version: 4, uuid: YXiT-K_EQTG_iBMblb9Bbw)

[2016-06-29 20:03:26,565][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [4] active shards, each shard set to indexing=[25.3mb], translog=[64kb]

[2016-06-29 20:03:26,565][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,565][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,565][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,565][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,574][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][4] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 19}

[2016-06-29 20:03:26,573][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][0] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 20}

[2016-06-29 20:03:26,579][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][1] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 19}

[2016-06-29 20:03:26,617][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][3] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 18}

[2016-06-29 20:03:26,762][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] scheduling refresher every 1s

[2016-06-29 20:03:26,767][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] scheduling refresher every 1s

[2016-06-29 20:03:26,764][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] scheduling refresher every 1s

[2016-06-29 20:03:26,766][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] scheduling refresher every 1s

[2016-06-29 20:03:26,775][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:26,775][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] recovery completed from [shard_store], took [224ms]

[2016-06-29 20:03:26,776][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] sending shard started for target shard [[filebeat-2016.06.29][3], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=hU2UcYmNRpOIBDGBIRVFaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,777][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][3], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=hU2UcYmNRpOIBDGBIRVFaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,777][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([filebeat-2016.06.29][3], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=hU2UcYmNRpOIBDGBIRVFaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store]]: execute

[2016-06-29 20:03:26,778][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][2] found 1 allocations of [filebeat-2016.06.29][2], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [24]

[2016-06-29 20:03:26,778][DEBUG][gateway ] [Jonathan Richards] [filebeat-2016.06.29][2]: allocating [[filebeat-2016.06.29][2], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:26,779][DEBUG][gateway ] [Jonathan Richards] [.kibana][0] found 1 allocations of [.kibana][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [40]

[2016-06-29 20:03:26,779][DEBUG][gateway ] [Jonathan Richards] [.kibana][0]: throttling allocation [[.kibana][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [[{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}]] on primary allocation

[2016-06-29 20:03:26,779][DEBUG][gateway ] [Jonathan Richards] [searchguard][0] found 1 allocations of [searchguard][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]], highest version: [44]

[2016-06-29 20:03:26,779][DEBUG][gateway ] [Jonathan Richards] [searchguard][0]: throttling allocation [[searchguard][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]] to [[{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}]] on primary allocation

[2016-06-29 20:03:26,787][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:26,787][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] recovery completed from [shard_store], took [309ms]

[2016-06-29 20:03:26,787][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:26,787][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] recovery completed from [shard_store], took [359ms]

[2016-06-29 20:03:26,787][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] sending shard started for target shard [[filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,788][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,788][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:26,788][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] recovery completed from [shard_store], took [420ms]

[2016-06-29 20:03:26,788][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] sending shard started for target shard [[filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,789][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,787][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] sending shard started for target shard [[filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,789][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,790][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [5], source [shard-started ([filebeat-2016.06.29][3], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=hU2UcYmNRpOIBDGBIRVFaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store]]

[2016-06-29 20:03:26,790][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [5]

[2016-06-29 20:03:26,790][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 5

[2016-06-29 20:03:26,790][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] sending shard started for target shard [[filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:26,791][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:26,791][DEBUG][indices.cluster ] [Jonathan Richards] [filebeat-2016.06.29][2] creating shard

[2016-06-29 20:03:26,795][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][2] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/2], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/2]

[2016-06-29 20:03:26,795][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] [filebeat-2016.06.29][2] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/filebeat-2016.06.29/2, indexUUID='n3aFwaqZTCuHhvUsOrjMxg', shard=[filebeat-2016.06.29][2]}]

[2016-06-29 20:03:26,795][DEBUG][index ] [Jonathan Richards] [filebeat-2016.06.29] creating shard_id [filebeat-2016.06.29][2]

[2016-06-29 20:03:26,796][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:26,822][DEBUG][index.store ] [Jonathan Richards] [filebeat-2016.06.29][2] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:26,822][DEBUG][index.deletionpolicy ] [Jonathan Richards] [filebeat-2016.06.29][2] Using [keep_only_last] deletion policy

[2016-06-29 20:03:26,827][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:26,827][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] state: [CREATED]

[2016-06-29 20:03:26,827][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][2] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:26,828][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:26,830][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] starting recovery from shard_store ...

[2016-06-29 20:03:26,833][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] sending shard started for target shard [[filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:26,833][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:26,840][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:26,841][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [5] active shards, each shard set to indexing=[20.3mb], translog=[64kb]

[2016-06-29 20:03:26,843][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updating index_buffer_size from [25.3mb] to [20.3mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:26,843][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] updating index_buffer_size from [25.3mb] to [20.3mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:26,843][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:26,844][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] updating index_buffer_size from [25.3mb] to [20.3mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:26,844][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] updating index_buffer_size from [25.3mb] to [20.3mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:26,866][DEBUG][index.translog ] [Jonathan Richards] [filebeat-2016.06.29][2] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 19}

[2016-06-29 20:03:26,881][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] sending shard started for target shard [[filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:26,881][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:26,886][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([filebeat-2016.06.29][3], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=hU2UcYmNRpOIBDGBIRVFaw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store]]: took 108ms done applying updated cluster_state (version: 5, uuid: gk4O7GBITpCDPiAw9BQdIw)

[2016-06-29 20:03:26,886][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]: execute

[2016-06-29 20:03:26,886][DEBUG][gateway ] [Jonathan Richards] [.kibana][0] found 1 allocations of [.kibana][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]], highest version: [40]

[2016-06-29 20:03:26,891][DEBUG][gateway ] [Jonathan Richards] [.kibana][0]: allocating [[.kibana][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:26,891][DEBUG][gateway ] [Jonathan Richards] [searchguard][0] found 1 allocations of [searchguard][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]], highest version: [44]

[2016-06-29 20:03:26,891][DEBUG][gateway ] [Jonathan Richards] [searchguard][0]: allocating [[searchguard][0], node[null], [P], v[0], s[UNASSIGNED], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]] to [{Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300}] on primary allocation

[2016-06-29 20:03:26,895][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [6], source [shard-started ([filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]

[2016-06-29 20:03:26,895][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [6]

[2016-06-29 20:03:26,895][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 6

[2016-06-29 20:03:26,896][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana] creating index

[2016-06-29 20:03:26,896][DEBUG][indices ] [Jonathan Richards] creating Index [.kibana], shards [1]/[1]

[2016-06-29 20:03:26,913][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] scheduling refresher every 1s

[2016-06-29 20:03:26,922][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:26,922][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] recovery completed from [shard_store], took [93ms]

[2016-06-29 20:03:26,922][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] sending shard started for target shard [[filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,922][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [after recovery from store]

[2016-06-29 20:03:26,934][DEBUG][index.store ] [Jonathan Richards] [.kibana] using index.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]

[2016-06-29 20:03:26,942][DEBUG][index.mapper ] [Jonathan Richards] [.kibana] using dynamic[true]

[2016-06-29 20:03:26,943][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard] creating index

[2016-06-29 20:03:26,943][DEBUG][indices ] [Jonathan Richards] creating Index [searchguard], shards [1]/[0]

[2016-06-29 20:03:26,975][DEBUG][index.store ] [Jonathan Richards] [searchguard] using index.store.throttle.type [none], with index.store.throttle.max_bytes_per_sec [0b]

[2016-06-29 20:03:26,981][DEBUG][index.mapper ] [Jonathan Richards] [searchguard] using dynamic[true]

[2016-06-29 20:03:26,982][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard] adding mapping [rolesmapping], source [{"rolesmapping":{"properties":{"sg_all_access":{"properties":{"users":{"type":"string"}}},"sg_kibana4":{"properties":{"users":{"type":"string"}}},"sg_kibana4_server":{"properties":{"users":{"type":"string"}}},"sg_logstash":{"properties":{"users":{"type":"string"}}},"sg_public":{"properties":{"users":{"type":"string"}}},"sg_readall":{"properties":{"users":{"type":"string"}}},"sg_readonly_dlsfls":{"properties":{"users":{"type":"string"}}},"sg_role_klingons1":{"properties":{"backendroles":{"type":"string"},"hosts":{"type":"string"},"users":{"type":"string"}}},"sg_role_starfleet":{"properties":{"backendroles":{"type":"string"},"hosts":{"type":"string"},"users":{"type":"string"}}},"sg_role_starfleet_captains":{"properties":{"backendroles":{"type":"string"}}}}}}]

[2016-06-29 20:03:26,987][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard] adding mapping [config], source [{"config":{"properties":{"searchguard":{"properties":{"dynamic":{"properties":{"authc":{"properties":{"basic_internal_auth_domain":{"properties":{"authentication_backend":{"properties":{"type":{"type":"string"}}},"enabled":{"type":"boolean"},"http_authenticator":{"properties":{"challenge":{"type":"boolean"},"type":{"type":"string"}}},"order":{"type":"long"}}},"clientcert_auth_domain":{"properties":{"authentication_backend":{"properties":{"type":{"type":"string"}}},"enabled":{"type":"boolean"},"http_authenticator":{"properties":{"challenge":{"type":"boolean"},"type":{"type":"string"}}},"order":{"type":"long"}}},"kerberos_auth_domain":{"properties":{"authentication_backend":{"properties":{"type":{"type":"string"}}},"enabled":{"type":"boolean"},"http_authenticator":{"properties":{"challenge":{"type":"boolean"},"config":{"properties":{"acceptor_principal":{"type":"string"},"krb_debug":{"type":"boolean"},"strip_realm_from_principal":{"type":"boolean"}}},"type":{"type":"string"}}},"order":{"type":"long"}}},"kibana_auth_domain":{"properties":{"authentication_backend":{"properties":{"type":{"type":"string"}}},"enabled":{"type":"boolean"},"http_authenticator":{"properties":{"challenge":{"type":"boolean"},"type":{"type":"string"}}},"order":{"type":"long"}}},"ldap":{"properties":{"authentication_backend":{"properties":{"config":{"properties":{"enable_ssl":{"type":"boolean"},"enable_ssl_client_auth":{"type":"boolean"},"enable_start_tls":{"type":"boolean"},"hosts":{"type":"string"},"userbase":{"type":"string"},"usersearch":{"type":"string"},"verify_hostnames":{"type":"boolean"}}},"type":{"type":"string"}}},"enabled":{"type":"boolean"},"http_authenticator":{"properties":{"challenge":{"type":"boolean"},"type":{"type":"string"}}},"order":{"type":"long"}}},"proxy_auth_domain":{"properties":{"authentication_backend":{"properties":{"type":{"type":"string"}}},"enabled":{"type":"boolean"},"http_authenticator":{"properties":{"challenge":{"type":"boolean"},"config":{"properties":{"roles_header":{"type":"string"},"user_header":{"type":"string"}}},"type":{"type":"string"}}},"order":{"type":"long"}}}}},"authz":{"properties":{"roles_from_another_ldap":{"properties":{"authorization_backend":{"properties":{"type":{"type":"string"}}},"enabled":{"type":"boolean"}}},"roles_from_myldap":{"properties":{"authorization_backend":{"properties":{"config":{"properties":{"enable_ssl":{"type":"boolean"},"enable_ssl_client_auth":{"type":"boolean"},"enable_start_tls":{"type":"boolean"},"hosts":{"type":"string"},"resolve_nested_roles":{"type":"boolean"},"rolebase":{"type":"string"},"rolename":{"type":"string"},"rolesearch":{"type":"string"},"userbase":{"type":"string"},"userrolename":{"type":"string"},"usersearch":{"type":"string"},"verify_hostnames":{"type":"boolean"}}},"type":{"type":"string"}}},"enabled":{"type":"boolean"}}}}},"http":{"properties":{"anonymous_auth_enabled":{"type":"boolean"},"xff":{"properties":{"enabled":{"type":"boolean"},"internalProxies":{"type":"string"},"proxiesHeader":{"type":"string"},"remoteIpHeader":{"type":"string"}}}}}}}}}}}}]

[2016-06-29 20:03:27,012][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard] adding mapping [actiongroups], source [{"actiongroups":{"properties":{"ALL":{"type":"string"},"CLUSTER_ALL":{"type":"string"},"CLUSTER_MONITOR":{"type":"string"},"CREATE_INDEX":{"type":"string"},"CRUD":{"type":"string"},"DATA_ACCESS":{"type":"string"},"DELETE":{"type":"string"},"GET":{"type":"string"},"INDEX":{"type":"string"},"MANAGE":{"type":"string"},"MANAGE_ALIASES":{"type":"string"},"MONITOR":{"type":"string"},"READ":{"type":"string"},"SEARCH":{"type":"string"},"SUGGEST":{"type":"string"},"WRITE":{"type":"string"}}}}]

[2016-06-29 20:03:27,026][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard] adding mapping [internalusers], source [{"internalusers":{"properties":{"admin":{"properties":{"hash":{"type":"string"}}},"dlsflsuser":{"properties":{"hash":{"type":"string"}}},"kibanaadmin":{"properties":{"hash":{"type":"string"}}},"kibanaro":{"properties":{"hash":{"type":"string"}}},"kibanaserver":{"properties":{"hash":{"type":"string"}}},"kirk":{"properties":{"hash":{"type":"string"},"roles":{"type":"string"}}},"logstash":{"properties":{"hash":{"type":"string"}}},"mister_picard":{"properties":{"hash":{"type":"string"},"username":{"type":"string"}}},"readall":{"properties":{"hash":{"type":"string"}}},"spock":{"properties":{"hash":{"type":"string"},"roles":{"type":"string"}}},"worf":{"properties":{"hash":{"type":"string"}}}}}}]

[2016-06-29 20:03:27,045][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard] adding mapping [roles], source [{"roles":{"properties":{"sg_all_access":{"properties":{"cluster":{"type":"string"},"indices":{"properties":{"*":{"properties":{"*":{"type":"string"}}}}}}},"sg_kibana4":{"properties":{"cluster":{"type":"string"},"indices":{"properties":{"*":{"properties":{"*":{"type":"string"}}},"?kibana":{"properties":{"*":{"type":"string"}}}}}}},"sg_kibana4_server":{"properties":{"cluster":{"type":"string"},"indices":{"properties":{"?kibana":{"properties":{"*":{"type":"string"}}}}}}},"sg_logstash":{"properties":{"cluster":{"type":"string"},"indices":{"properties":{"*beat*":{"properties":{"*":{"type":"string"}}},"logstash-*":{"properties":{"*":{"type":"string"}}}}}}},"sg_readall":{"properties":{"indices":{"properties":{"*":{"properties":{"*":{"type":"string"}}}}}}},"sg_readonly_and_monitor":{"properties":{"cluster":{"type":"string"},"indices":{"properties":{"*":{"properties":{"*":{"type":"string"}}}}}}},"sg_readonly_dlsfls":{"properties":{"indices":{"properties":{"/\S*/":{"properties":{"*":{"type":"string"},"dls":{"type":"string"},"fls":{"type":"string"}}}}}}},"sg_role_starfleet":{"properties":{"indices":{"properties":{"pub*":{"properties":{"*":{"type":"string"}}},"sf":{"properties":{"alumni":{"type":"string"},"public":{"type":"string"},"ships":{"type":"string"},"students":{"type":"string"}}}}}}},"sg_role_starfleet_captains":{"properties":{"cluster":{"type":"string"},"indices":{"properties":{"pub*":{"properties":{"*":{"type":"string"}}},"sf":{"properties":{"*":{"type":"string"}}}}}}},"sg_transport_client":{"properties":{"cluster":{"type":"string"}}}}}}]

[2016-06-29 20:03:27,059][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana] adding mapping [search], source [{"search":{"properties":{"columns":{"type":"string"},"description":{"type":"string"},"hits":{"type":"long"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"string"}}},"sort":{"type":"string"},"title":{"type":"string"},"version":{"type":"long"}}}}]

[2016-06-29 20:03:27,062][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana] adding mapping [config], source [{"config":{"properties":{"buildNum":{"type":"string","index":"not_analyzed"}}}}]

[2016-06-29 20:03:27,072][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana] adding mapping [visualization], source [{"visualization":{"properties":{"description":{"type":"string"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"string"}}},"savedSearchId":{"type":"string"},"title":{"type":"string"},"uiStateJSON":{"type":"string"},"version":{"type":"long"},"visState":{"type":"string"}}}}]

[2016-06-29 20:03:27,084][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana] adding mapping [dashboard], source [{"dashboard":{"properties":{"description":{"type":"string"},"hits":{"type":"long"},"kibanaSavedObjectMeta":{"properties":{"searchSourceJSON":{"type":"string"}}},"optionsJSON":{"type":"string"},"panelsJSON":{"type":"string"},"timeFrom":{"type":"string"},"timeRestore":{"type":"boolean"},"timeTo":{"type":"string"},"title":{"type":"string"},"uiStateJSON":{"type":"string"},"version":{"type":"long"}}}}]

[2016-06-29 20:03:27,092][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana] adding mapping [index-pattern], source [{"index-pattern":{"properties":{"fieldFormatMap":{"type":"string"},"fields":{"type":"string"},"intervalName":{"type":"string"},"notExpandable":{"type":"boolean"},"timeFieldName":{"type":"string"},"title":{"type":"string"}}}}]

[2016-06-29 20:03:27,102][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:27,108][DEBUG][cluster.action.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] sending shard started for target shard [[filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:27,108][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [n3aFwaqZTCuHhvUsOrjMxg], message [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]

[2016-06-29 20:03:27,109][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:27,114][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:27,119][DEBUG][indices.cluster ] [Jonathan Richards] [.kibana][0] creating shard

[2016-06-29 20:03:27,119][DEBUG][index ] [Jonathan Richards] [.kibana] [.kibana][0] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/.kibana/0], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/.kibana/0]

[2016-06-29 20:03:27,120][DEBUG][index ] [Jonathan Richards] [.kibana] [.kibana][0] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/.kibana/0, indexUUID='dJCgDwSIQ1u52wK9xGC4aQ', shard=[.kibana][0]}]

[2016-06-29 20:03:27,120][DEBUG][index ] [Jonathan Richards] [.kibana] creating shard_id [.kibana][0]

[2016-06-29 20:03:27,121][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:27,134][DEBUG][index.store ] [Jonathan Richards] [.kibana][0] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:27,134][DEBUG][index.deletionpolicy ] [Jonathan Richards] [.kibana][0] Using [keep_only_last] deletion policy

[2016-06-29 20:03:27,135][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:27,135][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] state: [CREATED]

[2016-06-29 20:03:27,136][DEBUG][index.translog ] [Jonathan Richards] [.kibana][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:27,136][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:27,136][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] starting recovery from shard_store ...

[2016-06-29 20:03:27,139][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [6] active shards, each shard set to indexing=[16.9mb], translog=[64kb]

[2016-06-29 20:03:27,139][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updating index_buffer_size from [20.3mb] to [16.9mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,139][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] updating index_buffer_size from [20.3mb] to [16.9mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,139][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] updating index_buffer_size from [20.3mb] to [16.9mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,139][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] updating index_buffer_size from [20.3mb] to [16.9mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,139][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] updating index_buffer_size from [20.3mb] to [16.9mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,139][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:27,143][DEBUG][indices.cluster ] [Jonathan Richards] [searchguard][0] creating shard

[2016-06-29 20:03:27,143][DEBUG][index ] [Jonathan Richards] [searchguard] [searchguard][0] loaded data path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/searchguard/0], state path [/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/searchguard/0]

[2016-06-29 20:03:27,143][DEBUG][index ] [Jonathan Richards] [searchguard] [searchguard][0] creating using an existing path [ShardPath{path=/var/lib/elasticsearch/thrillshare_cluster1/nodes/0/indices/searchguard/0, indexUUID='Oi0zDvG-TvuK9yi8FgQ_Mw', shard=[searchguard][0]}]

[2016-06-29 20:03:27,143][DEBUG][index ] [Jonathan Richards] [searchguard] creating shard_id [searchguard][0]

[2016-06-29 20:03:27,144][DEBUG][com.floragunn.searchguard.configuration.SearchGuardIndexSearcherWrapperModule] FLS/DLS not enabled

[2016-06-29 20:03:27,152][DEBUG][index.translog ] [Jonathan Richards] [.kibana][0] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 22}

[2016-06-29 20:03:27,157][DEBUG][index.store ] [Jonathan Richards] [searchguard][0] store stats are refreshed with refresh_interval [10s]

[2016-06-29 20:03:27,158][DEBUG][index.deletionpolicy ] [Jonathan Richards] [searchguard][0] Using [keep_only_last] deletion policy

[2016-06-29 20:03:27,159][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] using [tiered] merge mergePolicy with expunge_deletes_allowed[10.0], floor_segment[2mb], max_merge_at_once[10], max_merge_at_once_explicit[30], max_merged_segment[5gb], segments_per_tier[10.0], reclaim_deletes_weight[2.0]

[2016-06-29 20:03:27,159][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] state: [CREATED]

[2016-06-29 20:03:27,160][DEBUG][index.translog ] [Jonathan Richards] [searchguard][0] interval [5s], flush_threshold_ops [2147483647], flush_threshold_size [512mb], flush_threshold_period [30m]

[2016-06-29 20:03:27,161][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] state: [CREATED]->[RECOVERING], reason [from store]

[2016-06-29 20:03:27,161][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] starting recovery from shard_store ...

[2016-06-29 20:03:27,163][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [7] active shards, each shard set to indexing=[14.5mb], translog=[64kb]

[2016-06-29 20:03:27,164][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][0] updating index_buffer_size from [16.9mb] to [14.5mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,164][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][1] updating index_buffer_size from [16.9mb] to [14.5mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,164][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] updating index_buffer_size from [16.9mb] to [14.5mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,165][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][3] updating index_buffer_size from [16.9mb] to [14.5mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,165][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][4] updating index_buffer_size from [16.9mb] to [14.5mb]; IndexWriter now using [0] bytes

[2016-06-29 20:03:27,165][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:27,165][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] updateBufferSize: engine is closed; skipping

[2016-06-29 20:03:27,171][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][1], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=WQ8xiO9SSk-neFBYLp79Kg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([filebeat-2016.06.29][4], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=BFffHYpIQ4uG90VaHfJ1tg], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started],shard-started ([filebeat-2016.06.29][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=3SzZacohR7uqfwcnvpXq8w], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]: took 285ms done applying updated cluster_state (version: 6, uuid: XeK6W7wFQIe6TiNZ61SNMw)

[2016-06-29 20:03:27,171][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]: execute

[2016-06-29 20:03:27,177][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [7], source [shard-started ([filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]

[2016-06-29 20:03:27,178][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [7]

[2016-06-29 20:03:27,178][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 7

[2016-06-29 20:03:27,187][DEBUG][index.shard ] [Jonathan Richards] [filebeat-2016.06.29][2] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:27,193][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] scheduling refresher every 1s

[2016-06-29 20:03:27,197][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store],shard-started ([filebeat-2016.06.29][2], node[chHrisWBTUeQDsxygsdXeg], [P], v[25], s[INITIALIZING], a[id=VfqhufR5Sn2MIHV1XFtpSw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [master {Jonathan Richards}{chHrisWBTUeQDsxygsdXeg}{127.0.0.1}{127.0.0.1:9300} marked shard as initializing, but shard state is [POST_RECOVERY], mark shard as started]]: took 26ms done applying updated cluster_state (version: 7, uuid: icUInOGxRJi4D8qrFw-WzA)

[2016-06-29 20:03:27,199][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:27,199][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] recovery completed from [shard_store], took [62ms]

[2016-06-29 20:03:27,199][DEBUG][cluster.action.shard ] [Jonathan Richards] [.kibana][0] sending shard started for target shard [[.kibana][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[41], s[INITIALIZING], a[id=B858lRxVQG6gvFQ05fmKbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [dJCgDwSIQ1u52wK9xGC4aQ], message [after recovery from store]

[2016-06-29 20:03:27,199][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[.kibana][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[41], s[INITIALIZING], a[id=B858lRxVQG6gvFQ05fmKbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]], indexUUID [dJCgDwSIQ1u52wK9xGC4aQ], message [after recovery from store]

[2016-06-29 20:03:27,199][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([.kibana][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[41], s[INITIALIZING], a[id=B858lRxVQG6gvFQ05fmKbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store]]: execute

[2016-06-29 20:03:27,201][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [8], source [shard-started ([.kibana][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[41], s[INITIALIZING], a[id=B858lRxVQG6gvFQ05fmKbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store]]

[2016-06-29 20:03:27,201][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [8]

[2016-06-29 20:03:27,201][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 8

[2016-06-29 20:03:27,202][DEBUG][index.shard ] [Jonathan Richards] [.kibana][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:27,202][DEBUG][index.translog ] [Jonathan Richards] [searchguard][0] open uncommitted translog checkpoint Checkpoint{offset=43, numOps=0, translogFileGeneration= 26}

[2016-06-29 20:03:27,208][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([.kibana][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[41], s[INITIALIZING], a[id=B858lRxVQG6gvFQ05fmKbw], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.894Z]]), reason [after recovery from store]]: took 8ms done applying updated cluster_state (version: 8, uuid: k8VvYERSTLSeKmFDn8Nt8Q)

[2016-06-29 20:03:27,229][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] scheduling refresher every 1s

[2016-06-29 20:03:27,229][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] state: [RECOVERING]->[POST_RECOVERY], reason [post recovery from shard_store]

[2016-06-29 20:03:27,229][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] recovery completed from [shard_store], took [68ms]

[2016-06-29 20:03:27,229][DEBUG][cluster.action.shard ] [Jonathan Richards] [searchguard][0] sending shard started for target shard [[searchguard][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[45], s[INITIALIZING], a[id=shFt9zEjQfO5sJDIjMT2IQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]], indexUUID [Oi0zDvG-TvuK9yi8FgQ_Mw], message [after recovery from store]

[2016-06-29 20:03:27,230][DEBUG][cluster.action.shard ] [Jonathan Richards] received shard started for target shard [[searchguard][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[45], s[INITIALIZING], a[id=shFt9zEjQfO5sJDIjMT2IQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]], indexUUID [Oi0zDvG-TvuK9yi8FgQ_Mw], message [after recovery from store]

[2016-06-29 20:03:27,230][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([searchguard][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[45], s[INITIALIZING], a[id=shFt9zEjQfO5sJDIjMT2IQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]), reason [after recovery from store]]: execute

[2016-06-29 20:03:27,231][INFO ][cluster.routing.allocation] [Jonathan Richards] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[searchguard][0]] ...]).

[2016-06-29 20:03:27,232][DEBUG][cluster.service ] [Jonathan Richards] cluster state updated, version [9], source [shard-started ([searchguard][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[45], s[INITIALIZING], a[id=shFt9zEjQfO5sJDIjMT2IQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]), reason [after recovery from store]]

[2016-06-29 20:03:27,232][DEBUG][cluster.service ] [Jonathan Richards] publishing cluster state version [9]

[2016-06-29 20:03:27,232][DEBUG][cluster.service ] [Jonathan Richards] set local cluster state to version 9

[2016-06-29 20:03:27,232][DEBUG][index.shard ] [Jonathan Richards] [searchguard][0] state: [POST_RECOVERY]->[STARTED], reason [global state is [STARTED]]

[2016-06-29 20:03:27,233][DEBUG][com.floragunn.searchguard.configuration.ConfigurationService] [Jonathan Richards] Listener for primary shard [searchguard][0] added

[2016-06-29 20:03:27,240][DEBUG][cluster.service ] [Jonathan Richards] processing [shard-started ([searchguard][0], node[chHrisWBTUeQDsxygsdXeg], [P], v[45], s[INITIALIZING], a[id=shFt9zEjQfO5sJDIjMT2IQ], unassigned_info[[reason=CLUSTER_RECOVERED], at[2016-06-29T20:03:25.891Z]]), reason [after recovery from store]]: took 9ms done applying updated cluster_state (version: 9, uuid: irT7pAihRy22_awkyP3mZA)

[2016-06-29 20:03:27,266][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Updated rolesmapping for PrivilegesEvaluator

[2016-06-29 20:03:27,266][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Updated actiongroups for ActionGroupHolder

[2016-06-29 20:03:27,266][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Updated config for XFFResolver

[2016-06-29 20:03:27,272][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Updated config for BackendRegistry

[2016-06-29 20:03:27,272][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Updated internalusers for InternalAuthenticationBackend

[2016-06-29 20:03:27,272][DEBUG][com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction] [Jonathan Richards] Updated roles for PrivilegesEvaluator

[2016-06-29 20:03:44,537][WARN ][com.floragunn.searchguard.http.SearchGuardHttpServerTransport] [Jonathan Richards] Someone speaks plaintext instead of ssl, will close the channel

[2016-06-29 20:03:52,621][DEBUG][indices.memory ] [Jonathan Richards] recalculating shard indexing buffer, total is [101.5mb] with [7] active shards, each shard set to indexing=[14.5mb], translog=[64kb]

I also cannot curl into ES. I use curl with these flags:
curl -k -XGET 'https://127.0.0.1:9200'

It does not return anything, and nothing appears in the log at info level, but these lines show up:

[2016-06-29 20:08:09,849][DEBUG][netty.handler.ssl.SslHandler] [id: 0x7ebad988, /127.0.0.1:44267 => /127.0.0.1:9200] HANDSHAKEN: TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
[2016-06-29 20:08:09,850][DEBUG][com.floragunn.searchguard.http.XFFResolver] xff resolve /127.0.0.1:44267
[2016-06-29 20:08:09,850][DEBUG][com.floragunn.searchguard.http.XFFResolver] no xff done true,false,class org.elasticsearch.http.netty.NettyHttpRequest,{}
[2016-06-29 20:08:09,851][DEBUG][com.floragunn.searchguard.auth.BackendRegistry] Try to extract auth creds from http basic
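The last log line shows Search Guard trying to extract HTTP Basic credentials from the request, and the curl call above sends none. For reference, the same request with credentials included would look like this (admin/admin is taken from the Logstash config in the original post; substitute whatever user actually exists in your setup):

```shell
# Same request, but supplying HTTP Basic credentials via -u.
# "admin:admin" is the user/password pair from the Logstash config
# above -- replace it with a user defined in sg_internal_users.yml.
# -k keeps skipping certificate verification for this test.
curl -k -u admin:admin -XGET 'https://127.0.0.1:9200/'
```

If this returns the cluster-info JSON, the TLS layer is fine and the original failure was just missing credentials.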

Thank you for your help

Can you try:

output {
    elasticsearch {
       hosts => "https://127.0.0.1:9200"
       user => "logstash"
       password => "logstash"
       ssl => true
       ssl_certificate_verification => false
    }
}

(Note the quotes: bareword values like password => logstash are not valid Logstash config syntax.)

Please also post your sg_config.yml.
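Before troubleshooting the output plugin itself, it may also help to verify that the logstash user can authenticate at all, e.g. with curl (this assumes a logstash/logstash user is defined in sg_internal_users.yml, matching the config above):

```shell
# Sanity check: confirm the "logstash" user can authenticate over HTTPS.
# -k skips certificate verification, matching
# ssl_certificate_verification => false in the suggested config.
curl -k -u logstash:logstash 'https://127.0.0.1:9200/'
```

A JSON cluster-info response means authentication works and any remaining failure is on the Logstash side; a 401 points to a credentials or roles problem rather than TLS.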


On 29.06.2016 at 22:08, my@apptegy.com wrote:
