I have 3 master-only nodes and 8 data-only nodes running in AWS.
When the cluster is first initialised, I get the following behaviour.
On one master node I get:
{
  "message" : null,
  "mode" : "strict",
  "status" : "UP"
}
On the other two I get:
{
  "message" : "Not initialized",
  "mode" : "strict",
  "status" : "DOWN"
}
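For reference, the status blocks above are what the Search Guard health endpoint returns; I'm querying it on each master the same way as the cluster health call below:

# Read the current Search Guard status on the local node
# (no sgadmin involved, just the /_searchguard/health endpoint):
curl -k -E admin.pem --key admin.key https://localhost:9200/_searchguard/health?pretty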
However, when I run:
curl -k -E admin.pem --key admin.key https://localhost:9200/_cluster/health?pretty
I get the same valid response on each master node:
{
  "cluster_name" : "test",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 11,
  "number_of_data_nodes" : 8,
  "active_primary_shards" : 1,
  "active_shards" : 8,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
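To compare the two views side by side, I check the SG status across all three masters with a small loop like this (the hostnames are placeholders for my actual instances):

# Placeholder hostnames; cluster health is green everywhere,
# but SG health differs per master:
for host in master-1 master-2 master-3; do
  echo "== $host =="
  curl -sk -E admin.pem --key admin.key "https://$host:9200/_searchguard/health?pretty"
done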
I run this command when the servers start:
./sgadmin.sh -cd …/sg_config/ -nhnv -icl -cacert ./root-ca.pem -cert ./admin.pem -key ./admin.key
I can see from the logs on each server that the command has run on all of them.
If I reboot the servers or rerun the sgadmin command, they all show status: UP.
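As a possible workaround, I'm considering wrapping the sgadmin call in a retry loop at startup, along these lines (untested sketch; SG_CONFIG_DIR stands in for my real config path):

#!/bin/bash
# Untested sketch: rerun sgadmin until this node's SG health reports UP.
SG_CONFIG_DIR=/path/to/sg_config   # placeholder for my actual path
for attempt in 1 2 3 4 5; do
  ./sgadmin.sh -cd "$SG_CONFIG_DIR" -nhnv -icl \
    -cacert ./root-ca.pem -cert ./admin.pem -key ./admin.key
  if curl -sk -E admin.pem --key admin.key \
       "https://localhost:9200/_searchguard/health?pretty" | grep -q '"status" : "UP"'; then
    echo "Search Guard initialised on attempt $attempt"
    break
  fi
  sleep 10
done

But I'd rather understand why the first run doesn't take effect on all masters.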
Is there a race condition on first initialisation with multiple masters?
Is there some config I need to add to tell SG that it has multiple master nodes, some of which may not have connected yet?
Environment:
ES + SG 6.4.0
Ubuntu 16.04
openjdk version "1.8.0_181"