How can we automate the process of running sgadmin.sh?

When asking questions, please provide the following information:

  • Search Guard and Elasticsearch version - 6.3.2:23.2

  • Installed and used enterprise modules, if any - not yet

  • JVM version and operating system version - 1.8

Hi, I have set up ELK on a Kubernetes cluster.

Every time we make changes, we have to run sgadmin.sh to reflect those changes across the whole cluster (all the data nodes).

Is there any way we can automate running sgadmin.sh every time we make a change in the sgconfig folder?

Hi, I am trying this as well.
However, every time I delete an Elasticsearch pod, it says: Search Guard not initialized (SG11)

I had to run /usr/share/elasticsearch/plugins/search-guard-6/tools/install_demo_configuration.sh again.

And, most importantly: the old logs (indices) in Elasticsearch are deleted/missing!

Here is my Dockerfile:


FROM docker.elastic.co/elasticsearch/elasticsearch-oss:6.2.4

RUN /usr/share/elasticsearch/bin/elasticsearch-plugin install -b com.floragunn:search-guard-6:6.2.4-23.0

RUN cd /usr/share/elasticsearch/plugins/search-guard-6/tools && chmod 755 ./install_demo_configuration.sh && ./install_demo_configuration.sh -y

RUN cd /usr/share/elasticsearch/plugins/search-guard-6/tools && ls -al && chmod +x *.sh

RUN sleep 1 && cat /usr/share/elasticsearch/config/elasticsearch.yml

Here is my image:

johnzheng/elasticsearchsg:0.0.2

@Manjushree Gokhale, could you also share yours? Let us resolve this together. Thanks

No, that is not possible. The whole concept of Search Guard configuration revolves around not needing any config files on your nodes at all.

The SG configuration is stored in a Search Guard index on Elasticsearch directly, which allows for hot-reloading the configuration. Let’s assume you have a 100 node cluster where you add and remove nodes dynamically. And also assume your configuration files have different contents on the nodes. Which configuration should “win”? Or, let’s assume you make changes to the configuration files on one node, and then different changes on another node. Should the latter node overwrite the changes from the first node?

On a production system this is highly impractical, that’s why we do not rely on configuration files on the nodes. The shipped config in the sgconfig folder is really just for demo purposes.

You need to use sgadmin to upload any configuration changes. You can run sgadmin from any machine that has access to any (one) node in your cluster on the transport port (default 9300); it does not have to be an actual cluster node. Also, you only need to execute it once, against any node, not on all nodes. sgadmin reads the configuration files and updates the SG configuration index accordingly; the changes are then propagated through the cluster like any other index change. So monitoring configuration files for changes on the ES nodes does not really fit the SG config concept.
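For reference, a one-shot sgadmin invocation could look like the sketch below. The paths, the node hostname, and the certificate names are assumptions taken from the demo setup, so adjust them to your cluster:

```shell
#!/bin/sh
# Sketch: upload the sg_*.yml files to the Search Guard index via one node.
# The paths, hostname and certificate names are assumptions taken from the
# demo setup -- adjust them to your cluster.
SG_HOME=/usr/share/elasticsearch/plugins/search-guard-6

# -cd  : directory containing the sg_*.yml files to upload
# -h/-p: any single reachable node, on the transport port (9300), not HTTP (9200)
# -icl : ignore cluster name; -nhnv: skip hostname verification (demo certs only)
CMD="$SG_HOME/tools/sgadmin.sh -cd $SG_HOME/sgconfig \
  -h elasticsearch-logging-0 -p 9300 -icl -nhnv \
  -cacert root-ca.pem -cert kirk.pem -key kirk-key.pem"

# Printed here for review; run it directly in your environment.
echo "$CMD"
```

Because the update goes into the Search Guard index, running this once against any single node is enough; the cluster replicates the change everywhere.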

Let me know if this helps!

···

On Tuesday, November 27, 2018 at 7:49:26 AM UTC+1, Manjushree Gokhale wrote:


Here are my steps to reproduce the issue, FYI.

Step 1: There are index logstash-2018.11.27 and logstash-2018.11.28

curl http://localhost:9200/_cat/indices?v -k -u admin:admin

green  open   searchguard

yellow open   logstash-2018.11.27

yellow open   logstash-2018.11.28

Kibana / Elasticsearch works well.

Step 2: I delete all Elasticsearch pods in k8s, as below (note that this is a perfectly normal action in k8s):

kubectl delete po elasticsearch-logging-0 -n kube-system

Step 3: After several minutes, all the new Elasticsearch pods are created and running:


elasticsearch-logging-0                                1/1       Running   0          5m

elasticsearch-logging-1                                1/1       Running   0          1m

elasticsearch-logging-2                                1/1       Running   0          36s

Step 4: I noticed all the Elasticsearch pods are using the old disks (ES data storage), which means the data should not be missing.

test1d@ip-172-31-17-153:~$ k get pvc -n ks

NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE

es-persistent-storage-elasticsearch-logging-0 Bound pvc-135185fa-f249-11e8-8917-0eff959d76b8 200Gi RWO default 14h

es-persistent-storage-elasticsearch-logging-1 Bound pvc-378590bc-f249-11e8-8917-0eff959d76b8 200Gi RWO default 14h

es-persistent-storage-elasticsearch-logging-2 Bound pvc-fb2a8cbf-f249-11e8-8917-0eff959d76b8 200Gi RWO default 14h

Step 5: I try to query Elasticsearch with curl, and it says Search Guard not initialized:


curl http://localhost:9200/_cat/health -k -u admin:admin

Search Guard not initialized (SG11). See http://docs.search-guard.com/v6/sgadmin

Step 6: I run /bin/init_sg.sh (sgadmin.sh) on each Elasticsearch node:

Search Guard Admin v6

Will connect to localhost:9300 … done

Elasticsearch Version: 6.2.4

Search Guard Version: 6.2.4-23.0

Connected as CN=kirk,OU=client,O=client,L=Test,C=DE

Contacting elasticsearch cluster ‘elasticsearch’ and wait for YELLOW clusterstate …

Clustername: docker-cluster

Clusterstate: GREEN

Number of nodes: 1

Number of data nodes: 1

searchguard index does not exists, attempt to create it … done (0-all replicas)

Populate config from /usr/share/elasticsearch/config/sg

Will update ‘sg/config’ with config/sg/sg_config.yml

SUCC: Configuration for ‘config’ created or updated

Will update ‘sg/roles’ with config/sg/sg_roles.yml

SUCC: Configuration for ‘roles’ created or updated

Will update ‘sg/rolesmapping’ with config/sg/sg_roles_mapping.yml

SUCC: Configuration for ‘rolesmapping’ created or updated

Will update ‘sg/internalusers’ with config/sg/sg_internal_users.yml

SUCC: Configuration for ‘internalusers’ created or updated

Will update ‘sg/actiongroups’ with config/sg/sg_action_groups.yml

SUCC: Configuration for ‘actiongroups’ created or updated

Done with success

Step 7: Querying the indices again, I found the old logstash-2018.11.27 is missing. (I guess logstash-2018.11.28 is also missing and was re-created by fluentd.)

curl http://localhost:9200/_cat/indices?v -k -u admin:admin

health status index uuid pri rep docs.count docs.deleted store.size pri.store.size

green open searchguard T3RSzXscSDW-GDuyKQOJyA 1 0 0 5 30.7kb 30.7kb

yellow open logstash-2018.11.28 zNKJr8FMQyGrbi5lxRgmTQ 5 1 30 0 496.2kb 496.2kb

Step 8: Please see the attached log screenshots (before and after the Elasticsearch pod deletion):

logs_beforeDeleteElasticSearchPod.png

logs_afterDeleteElasticSearchPod.png

Also attached files about:

How I created the Elasticsearch image: elasticsearch_image.tar

How I created the Kibana image: Dockerfile_kibana_image.txt

How I deploy Elasticsearch: es-statefulset.yaml

Dockerfile_kibana_image.txt (235 Bytes)

elasticsearch_image.tar (40 KB)

es-statefulset.yaml (2.52 KB)

I have tried to run sgadmin.sh at runtime using an ENTRYPOINT or CMD instruction, and have also added a sleep before sgadmin.sh, but it is still not working.

Do you guys have any updates on this?
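A pattern that may work better than a fixed sleep in the entrypoint is to poll the transport port and only run sgadmin once the node answers. The following is a sketch under assumptions (demo paths, demo certificate names, transport port 9300), not a drop-in entrypoint:

```shell
#!/bin/sh
# Sketch: initialize Search Guard from the container entrypoint.
# Paths, port and certificate names are assumptions from the demo setup.

# Poll the transport port instead of relying on a fixed sleep.
wait_for_transport() {
  port=${1:-9300}
  tries=${2:-60}
  while [ "$tries" -gt 0 ]; do
    if nc -z localhost "$port" 2>/dev/null; then
      return 0
    fi
    tries=$((tries - 1))
    sleep 2
  done
  return 1
}

# Upload the sgconfig files once the node is reachable.
init_searchguard() {
  /usr/share/elasticsearch/plugins/search-guard-6/tools/sgadmin.sh \
    -cd /usr/share/elasticsearch/plugins/search-guard-6/sgconfig \
    -h localhost -p 9300 -icl -nhnv \
    -cacert root-ca.pem -cert kirk.pem -key kirk-key.pem
}

# In the entrypoint: start Elasticsearch in the background, then run:
#   wait_for_transport 9300 && init_searchguard
```

Starting Elasticsearch in the background first and then calling `wait_for_transport 9300 && init_searchguard` avoids the startup race that a bare sleep only papers over.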

Hello All,

I would like to share some thoughts I came across while working on this automation of the sgadmin.sh run.

My setup is an ELK cluster in Kubernetes, where I am using Kubernetes ConfigMaps to inject the sgconfig files (sg_roles.yml, sg_internal_users.yml, etc.).

The reason to run sgadmin.sh is to update the Search Guard index created in Elasticsearch with the new changes made in sgconfig, so that those changes can be referred to at runtime.

But since the sgconfig files are delivered through Kubernetes ConfigMaps, my deployment is automatically updated every time I change them.

This means I do not need to run sgadmin.sh, because my latest changes are already there.

Hence I do not think I need to automate the execution of sgadmin.sh in this scenario, as the latest changes from the ConfigMaps will always be reflected in my cluster at runtime.

If I can improve on anything, do let me know.

Thank you.

Please have a look here: https://github.com/floragunncom/search-guard-helm

We deploy a small container which has a preinstalled standalone version of sgadmin to update the cluster.

This of course requires shell access to the pod. If you mount the configs from a shared filesystem and enrich the pod to watch them as described in [1], automatically executing sgadmin on every change, maybe that would fix your issue?

[1] https://superuser.com/questions/181517/how-to-execute-a-command-whenever-a-file-changes
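To sketch what [1] describes in this context: an inotify watcher inside the sgadmin side-container could re-run sgadmin on every change. This assumes inotify-tools is installed in that container and reuses the demo paths and certificate names:

```shell
#!/bin/sh
# Sketch: re-run sgadmin whenever something in the sgconfig directory changes.
# Assumes inotify-tools is installed; paths and cert names are demo-setup
# assumptions.
SG_HOME=/usr/share/elasticsearch/plugins/search-guard-6

watch_sgconfig() {
  # Watch the directory recursively: Kubernetes updates ConfigMap mounts via
  # symlink swaps, so per-file watches would silently go stale.
  while inotifywait -r -e modify,create,move "$SG_HOME/sgconfig"; do
    "$SG_HOME/tools/sgadmin.sh" -cd "$SG_HOME/sgconfig" \
      -h localhost -p 9300 -icl -nhnv \
      -cacert root-ca.pem -cert kirk.pem -key kirk-key.pem
  done
}

# Run in the background inside the sgadmin container:
#   watch_sgconfig &
```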

···

On Monday, 3 December 2018 08:14:45 UTC+1, Manjushree Gokhale wrote:
