Error during config upload with sgadmin

Short description: intermittent error while uploading the Search Guard configuration with sgadmin:

FAIL: 1 nodes reported failures. First failure is FailedNodeException[Failed node [hZm8tA43RUa5TcjRtleM9g]]; nested: RemoteTransportException[[project-logging-elasticsearch-client-676f8dbd5c-bkmmx][100.96.1.28:9300][cluster:admin/searchguard/config/update[n]]]; nested: ElasticsearchException[java.lang.NullPointerException: Cannot read field "dynamic" because "this.config" is null]; nested: NullPointerException[Cannot read field "dynamic" because "this.config" is null];

Elasticsearch version:
7.10.2

Server OS version:
ubuntu-18.04

Kibana version (if relevant):
7.10.2

Describe the issue:

We use a bash script to initialize Search Guard. It uploads each configuration type with a function like this:

    function applyConfig(){
        # ... check if http://elk:9200/searchguard/_doc/${1} returns 404, then apply config ...

        while true; do
            /sgadmin/tools/sgadmin.sh -f /tmp/templates/${2} -t ${1} -icl -nhnv \
               -cacert /certificates/root-ca.pem \
               -cert /certificates/sgadmin.pem \
               -key /certificates/sgadmin.key \
               -h $ELASTICSEARCH_DISCOVERY_SERVICE && break
            echo "Retry sgadmin.sh in 15 seconds"
            sleep 15
        done
    }

We call it once per config type:

    applyConfig "config" "sg_config.yml"
    applyConfig "roles" "sg_roles.yml"
    applyConfig "rolesmapping" "sg_roles_mapping.yml"
    applyConfig "internalusers" "sg_internal_users.yml"
    applyConfig "actiongroups" "sg_action_groups.yml"
    applyConfig "tenants" "sg_tenants.yml"
    applyConfig "blocks" "sg_blocks.yml"

Sometimes (we hit it in two out of ten test clusters) this fails, and a second run of the init bash script gives us the following logs:

Elastic is ready
-------------
Apply config
-------------
Template config exists in searchguard index, skip upload
-------------
Apply roles
-------------
Template roles exists in searchguard index, skip upload
-------------
Apply rolesmapping
-------------
Apply initial template rolesmapping with sgadmin.sh
Search Guard Admin v7
Will connect to project-logging-elasticsearch-discovery.project:9300 ... done
Connected as CN=sgadmin,O=project
Elasticsearch Version: 7.10.2
Search Guard Version: 7.10.2-52.3.0
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: elasticsearch
Clusterstate: YELLOW
Number of nodes: 3
Number of data nodes: 1
searchguard index already exists, so we do not need to create one.
Populate config from /tmp
Force type: rolesmapping
Will update '_doc/rolesmapping' with /tmp/templates/sg_roles_mapping.yml 
   SUCC: Configuration for 'rolesmapping' created or updated   
RemoteTransportException[[project-logging-elasticsearch-client-676f8dbd5c-bkmmx][100.96.1.28:9300][cluster:admin/searchguard/config/update[n]]]; nested: ElasticsearchException[java.lang.NullPointerException: Cannot read field "dynamic" because "this.config" is null]; nested: NullPointerException[Cannot read field "dynamic" because "this.config" is null];
FAIL: Expected 3 nodes to return response, but got 2
Done with failures
Retry sgadmin.sh in 15 seconds.......


The last error then repeats until the job times out.

Provide configuration:
elasticsearch/config/elasticsearch.yml


network.host: 0.0.0.0
cluster.name: elasticsearch
cluster.initial_master_nodes:
- elasticsearch-master-0
discovery.seed_hosts:
- elasticsearch-discovery.project
node.name: ${HOSTNAME}
node.processors: 2
searchguard.enterprise_modules_enabled: false
searchguard.ssl.http.clientauth_mode: OPTIONAL
searchguard.ssl.transport.pemcert_filepath: certificates/node.pem
searchguard.ssl.transport.pemkey_filepath: certificates/node.key
searchguard.ssl.transport.pemtrustedcas_filepath: certificates/root-ca.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.resolve_hostname: false
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: certificates/node_http.pem
searchguard.ssl.http.pemkey_filepath: certificates/node_http.key
searchguard.ssl.http.pemtrustedcas_filepath: certificates/root-ca.pem
searchguard.nodes_dn:
- CN=node,O=project
searchguard.authcz.admin_dn:
- CN=sgadmin,O=project
action.auto_create_index: true
xpack.ml.enabled: ${XPACK_ML_ENABLED:false}
xpack.monitoring.enabled: ${XPACK_MONITORING_ENABLED:true}
xpack.security.enabled: ${XPACK_SECURITY_ENABLED:false}
xpack.watcher.enabled: ${XPACK_WATCHER_ENABLED:false}

elasticsearch/plugins/search-guard-7/sgconfig/sg_config.yml


_sg_meta:
  type: "config"
  config_version: 2
sg_config:
  dynamic:
    filtered_alias_mode: "warn"
    disable_rest_auth: false
    disable_intertransport_auth: false
    respect_request_indices_options: false
    license: null
    auth_failure_listeners: {}
    do_not_fail_on_forbidden: true
    multi_rolespan_enabled: false
    hosts_resolver_mode: "ip-only"
    transport_userrname_attribute: null
    do_not_fail_on_forbidden_empty: true
    field_anonymization_salt2: null
    kibana:
      multitenancy_enabled: false
      server_username: "system.kibanaserver"
      index: ".kibana"
      rbac_enabled: false
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: true
        internalProxies: ".*"
        remoteIpHeader: "x-forwarded-for"
    authc:
      proxy_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 1
        http_authenticator:
          challenge: false
          type: "proxy"
          config:
            user_header: "x-proxy-user"
            roles_header: "x-proxy-roles"
        authentication_backend:
          type: "noop"
          config: {}
        skip_users: []
      clientcert_auth_domain:
        http_enabled: true
        transport_enabled: false
        order: 0
        http_authenticator:
          challenge: false
          type: "clientcert"
          config:
            username_attribute: "cn"
        authentication_backend:
          type: "noop"
          config: {}
        skip_users: []
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 2
        http_authenticator:
          challenge: true
          type: "basic"
          config: {}
        authentication_backend:
          type: "intern"
          config: {}
        skip_users: []
    authz:
      roles_from_another_ldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config: {}
        skipped_users: []
      roles_from_myldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config:
            enable_ssl: false
            enable_start_tls: false
            enable_ssl_client_auth: false
            verify_hostnames: true
            hosts:
            - "localhost:8389"
            bind_dn: null
            password: null
            rolebase: "ou=groups,dc=example,dc=com"
            rolesearch: "(member={0})"
            userroleattribute: null
            userrolename: "disabled"
            rolename: "cn"
            resolve_nested_roles: true
            userbase: "ou=people,dc=example,dc=com"
            usersearch: "(uid={0})"
        skipped_users: []

kibana/config/kibana.yml (if relevant)

Provide logs:
Elasticsearch
This case is hard to reproduce; we don't have Elasticsearch logs at the moment, only the sgadmin logs.

@trautw According to the provided output, your cluster is in a yellow state.
Are all of your nodes up?

Could you share the output of _cat/nodes and _cluster/health APIs?

Hello.
I am sharing some output from another cluster, because we can't reproduce the issue on the original one.
Note that our clusters always run in a yellow state.

GET _cat/nodes

100.96.2.33 13 80 7 0.73 0.70 0.72 i - project-logging-elasticsearch-client-d74c658bc-7kcnp
100.96.1.17 59 86 4 0.29 0.33 0.49 d - project-logging-elasticsearch-data-0
100.96.1.16 72 79 4 0.29 0.33 0.49 m * project-logging-elasticsearch-master-0

GET _cluster/health?level=indices

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 12,
  "active_shards" : 12,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 70.58823529411765,
  "indices" : {
    "searchguard" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    ".signals_settings" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".signals_watches" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".apm-custom-link" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    ".kibana_task_manager_1" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    "ilm-history-3-000001" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    ".signals_accounts" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".apm-agent-configuration" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    ".signals_watches_state" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".signals_watches_trigger_state" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".kibana-event-log-7.10.2-000001" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    ".kibana_1" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    }
  }
}

GET _cat/indices

green open searchguard                     uaWFTGKpTzCIpDqxHzpxLg 1 0  7   4  54.8kb  54.8kb
green open .apm-custom-link                jABpltzoRc6OuL_00I2Ncw 1 0  0   0    208b    208b
green open .kibana_task_manager_1          LRP5hOlDQS6IwlaYGvUjMw 1 0  5 579 129.1kb 129.1kb
green open .apm-agent-configuration        q-wu693KRNK8xPaw38VD6g 1 0  0   0    208b    208b
green open .kibana_1                       fuAAG6SaS9y_9tLxzthh_g 1 0 54  11   2.1mb   2.1mb
green open .kibana-event-log-7.10.2-000001 tZ5T2qMYRC-umdtCfan-dQ 1 0  1   0   5.6kb   5.6kb

GET /_cluster/allocation/explain

{
  "index" : ".signals_watches_trigger_state",
  "shard" : 0,
  "primary" : false,
  "current_state" : "unassigned",
  "unassigned_info" : {
    "reason" : "INDEX_CREATED",
    "at" : "2022-02-07T10:03:15.540Z",
    "last_allocation_status" : "no_attempt"
  },
  "can_allocate" : "no",
  "allocate_explanation" : "cannot allocate because allocation is not permitted to any of the nodes",
  "node_allocation_decisions" : [
    {
      "node_id" : "WIqbyyvpRCSzg79vDVaF9w",
      "node_name" : "project-logging-elasticsearch-data-0",
      "transport_address" : "100.96.1.17:9300",
      "node_attributes" : {
        "xpack.installed" : "true",
        "transform.node" : "false"
      },
      "node_decision" : "no",
      "weight_ranking" : 1,
      "deciders" : [
        {
          "decider" : "same_shard",
          "decision" : "NO",
          "explanation" : "a copy of this shard is already allocated to this node [[.signals_watches_trigger_state][0], node[WIqbyyvpRCSzg79vDVaF9w], [P], s[STARTED], a[id=UE_k4PM8S8GC1QrIRvLJ2A]]"
        }
      ]
    }
  ]
}
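
As a side note on the yellow state itself: each .signals_* index is created with one replica, but this cluster has a single data node, so those replicas can never be assigned (exactly what the allocation explain output above says). If we wanted the cluster green, dropping the replica count should be enough; a sketch, assuming the .signals_* pattern covers all affected indices:

    # Hypothetical workaround for the yellow state only (likely unrelated
    # to the sgadmin failure): remove replicas that a single-data-node
    # cluster can never allocate.
    curl -k --cacert /certificates/root-ca.pem \
         --key /certificates/sgadmin.key --cert /certificates/sgadmin.pem \
         -X PUT "https://${ELASTICSEARCH_DISCOVERY_SERVICE}:9200/.signals_*/_settings" \
         -H 'Content-Type: application/json' \
         -d '{"index": {"number_of_replicas": 0}}'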

We suppose that because we run the config upload commands one by one, the nodes sometimes end up out of sync with each other. But we are asking here because you know your code best.

Hello.
We have now reproduced the case on a production cluster.
The following configs were uploaded on a previous iteration of the init job and were therefore skipped this time: config, roles, rolesmapping, internalusers.

Next we upload actiongroups with this command:

            /sgadmin/tools/sgadmin.sh -f /tmp/templates/sg_action_groups.yml -t actiongroups -icl -nhnv \
               -cacert /certificates/root-ca.pem \
               -cert /certificates/sgadmin.pem \
               -key /certificates/sgadmin.key \
               -h $ELASTICSEARCH_DISCOVERY_SERVICE && break

And got the following response on every retry:

2022-02-22T12:07:22.892421837Z Search Guard Admin v7
2022-02-22T12:07:22.903168568Z Will connect to project-logging-elasticsearch-discovery.project:9300 ... done
2022-02-22T12:07:25.617178737Z Connected as CN=sgadmin,O=project
2022-02-22T12:07:25.736317399Z Elasticsearch Version: 7.10.2
2022-02-22T12:07:25.736662006Z Search Guard Version: 7.10.2-52.3.0
2022-02-22T12:07:25.736670406Z Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
2022-02-22T12:07:25.759131689Z Clustername: elasticsearch
2022-02-22T12:07:25.759162390Z Clusterstate: YELLOW
2022-02-22T12:07:25.759178790Z Number of nodes: 7
2022-02-22T12:07:25.759182191Z Number of data nodes: 1
2022-02-22T12:07:26.027577963Z searchguard index already exists, so we do not need to create one.
2022-02-22T12:07:26.035201527Z Populate config from /tmp
2022-02-22T12:07:26.035234827Z Force type: actiongroups
2022-02-22T12:07:26.037073867Z Will update '_doc/actiongroups' with /tmp/templates/sg_action_groups.yml 
2022-02-22T12:07:26.403932357Z    SUCC: Configuration for 'actiongroups' created or updated
2022-02-22T12:07:26.665741087Z FAIL: 7 nodes reported failures. First failure is FailedNodeException[Failed node [6egMmJVMSaSFTjBeyjoFLw]]; nested: RemoteTransportException[[project-logging-elasticsearch-master-2][100.96.2.48:9300][cluster:admin/searchguard/config/update[n]]]; nested: ElasticsearchException[java.lang.NullPointerException: Cannot read field "dynamic" because "this.config" is null]; nested: NullPointerException[Cannot read field "dynamic" because "this.config" is null];
2022-02-22T12:07:26.665774388Z FAIL: Expected 7 nodes to return response, but got 0
2022-02-22T12:07:26.665778688Z Done with failures

Please find the collected information below.


#_cluster/health
curl -k --cacert /certificates/root-ca.pem --key /certificates/sgadmin.key --cert /certificates/sgadmin.pem  "https://${ELASTICSEARCH_DISCOVERY_SERVICE}:9200/_cluster/health?local=true&pretty"
{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 6,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 54.54545454545454
}

#_cluster/health
curl -k --cacert /certificates/root-ca.pem --key /certificates/sgadmin.key --cert /certificates/sgadmin.pem  "https://${ELASTICSEARCH_DISCOVERY_SERVICE}:9200/_cluster/health?local=true&pretty&level=indices"

{
  "cluster_name" : "elasticsearch",
  "status" : "yellow",
  "timed_out" : false,
  "number_of_nodes" : 7,
  "number_of_data_nodes" : 1,
  "active_primary_shards" : 6,
  "active_shards" : 6,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 5,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 54.54545454545454,
  "indices" : {
    "searchguard" : {
      "status" : "green",
      "number_of_shards" : 1,
      "number_of_replicas" : 0,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 0
    },
    ".signals_settings" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".signals_watches" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".signals_accounts" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".signals_watches_state" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    },
    ".signals_watches_trigger_state" : {
      "status" : "yellow",
      "number_of_shards" : 1,
      "number_of_replicas" : 1,
      "active_primary_shards" : 1,
      "active_shards" : 1,
      "relocating_shards" : 0,
      "initializing_shards" : 0,
      "unassigned_shards" : 1
    }
  }
}

#_cat/nodes
curl -k --cacert /certificates/root-ca.pem --key /certificates/sgadmin.key --cert /certificates/sgadmin.pem  "https://${ELASTICSEARCH_DISCOVERY_SERVICE}:9200/_cat/nodes?local=true&pretty"
100.96.2.48 52 75  5 0.06 0.37 0.59 m  - project-logging-elasticsearch-master-2
100.96.2.46  7 67  6 0.06 0.37 0.59 hs - project-logging-elasticsearch-data-hot-0
100.96.5.56 26 76 15 2.52 1.61 1.09 i  - project-logging-elasticsearch-client-744b96464c-tbkzq
100.96.5.61 26 75 15 2.52 1.61 1.09 m  - project-logging-elasticsearch-master-1
100.96.2.47 12 76  7 0.06 0.37 0.59 i  - project-logging-elasticsearch-client-744b96464c-m4xmd
100.96.0.52 42 77 12 0.38 0.54 0.75 i  - project-logging-elasticsearch-client-744b96464c-q6zbb
100.96.0.53 57 76 12 0.38 0.54 0.75 m  * project-logging-elasticsearch-master-0

#_cat/indices
curl -k --cacert /certificates/root-ca.pem --key /certificates/sgadmin.key --cert /certificates/sgadmin.pem  "https://${ELASTICSEARCH_DISCOVERY_SERVICE}:9200/_cat/indices?pretty"
green open searchguard DYu7FUVfRz2CyaT1ENTZwQ 1 0 5 56 50.4kb 50.4kb

We deleted all Elasticsearch PVCs and started again from a clean slate.

Connected as CN=sgadmin,O=kublr
Elasticsearch Version: 7.10.2
Search Guard Version: 7.10.2-52.3.0
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: elasticsearch
Clusterstate: YELLOW
Number of nodes: 7
Number of data nodes: 1
searchguard index already exists, so we do not need to create one.
Populate config from /tmp
Force type: roles
Will update '_doc/roles' with /tmp/templates/sg_roles.yml
SUCC: Configuration for 'roles' created or updated
FAIL: 2 nodes reported failures. First failure is FailedNodeException[Failed node [HzDlmnDTRbuJsRs_maunbw]]; nested: RemoteTransportException[[kublr-logging-elasticsearch-master-1][100.96.2.58:9300][cluster:admin/searchguard/config/update[n]]]; nested: ElasticsearchException[java.lang.NullPointerException: Cannot read field "dynamic" because "this.config" is null]; nested: NullPointerException[Cannot read field "dynamic" because "this.config" is null];
FAIL: Expected 7 nodes to return response, but got 5
Done with failures

Logs from elasticsearch:


stacktrace": ["java.lang.NullPointerException: Cannot read field \"dynamic\" because \"this.config\" is null
at com.floragunn.searchguard.sgconf.DynamicConfigModelV6.buildAAA(DynamicConfigModelV6.java:202) ~[search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.sgconf.DynamicConfigModelV6.<init>(DynamicConfigModelV6.java:64) ~[search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.sgconf.DynamicConfigFactory.onChange(DynamicConfigFactory.java:133) ~[search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.configuration.ConfigurationRepository.notifyAboutChanges(ConfigurationRepository.java:362) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.configuration.ConfigurationRepository.reloadConfiguration0(ConfigurationRepository.java:316) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.configuration.ConfigurationRepository.reloadConfiguration(ConfigurationRepository.java:299) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction.nodeOperation(TransportConfigUpdateAction.java:104) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.action.configupdate.TransportConfigUpdateAction.nodeOperation(TransportConfigUpdateAction.java:47) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:158) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:271) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:267) [elasticsearch-7.10.2.jar:7.10.2]
at com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceivedDecorate(SearchGuardSSLRequestHandler.java:164) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.transport.SearchGuardRequestHandler.messageReceivedDecorate(SearchGuardRequestHandler.java:270) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.ssl.transport.SearchGuardSSLRequestHandler.messageReceived(SearchGuardSSLRequestHandler.java:136) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at com.floragunn.searchguard.SearchGuardPlugin$7.lambda$interceptHandler$0(SearchGuardPlugin.java:690) [search-guard-suite-security-7.10.2-52.3.0.jar:7.10.2-52.3.0]
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:72) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.transport.InboundHandler$RequestHandler.doRun(InboundHandler.java:305) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:743) [elasticsearch-7.10.2.jar:7.10.2]
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) [elasticsearch-7.10.2.jar:7.10.2]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) [?:?]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) [?:?]
at java.lang.Thread.run(Thread.java:832) [?:?]

"type": "server", "timestamp": "2022-02-22T13:30:56,251Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:30:56,252Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:30:56,252Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:30:59,256Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:30:59,257Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:30:59,259Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for rolesmapping while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:30:59,259Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for tenants while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:31:02,263Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for internalusers while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:31:02,264Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-logging-elasticsearch-master-2", "message": "No data for actiongroups while retrieving configuration for [INTERNALUSERS, ACTIONGROUPS, CONFIG, ROLES, ROLESMAPPING, TENANTS, BLOCKS] (index=searchguard and type=_doc)", "cluster.uuid": "zub0C6JCSUqQm5NfAmPljg", "node.id": "SNwkAoveTtCct7Poojk5ug" }
{"type": "server", "timestamp": "2022-02-22T13:31:02,265Z", "level": "ERROR", "component": "c.f.s.c.ConfigurationLoaderSG7", "cluster.name": "elasticsearch", "node.name": "kublr-

@trautw Could you share that sg_action_groups.yml file?

Here is sg_action_groups.yml:

    _sg_meta:
      type: "actiongroups"
      config_version: 2

Please note that the first time we reported this problem, the failing type was rolesmapping. Also, the upload usually works fine; it looks like an error in the synchronization mechanism.

Thanks
