Not able to see monitoring in Kibana after installing 7.7.0/SG41

Elasticsearch version: 7.8.0

Server OS version: Windows Server 2016

Kibana version (if relevant): 7.7-7.8

Browser version (if relevant):

Browser OS version (if relevant): Win/Mac

Describe the issue: After upgrading to ELK 7.7.0 last month, I am no longer able to see monitoring data for my local cluster. I have opened an issue on the Kibana GitHub ( https://github.com/elastic/kibana/issues/67688 ), but I don't have a way to rule out the possibility that changes to Search Guard could be the cause.

The oddity is that after I upgraded to 7.7.0, I was still able to view the monitoring data from my 7.6.0 indices, but nothing from the 7.7.0 ones.

I have audit logging turned on, but that hasn't surfaced anything obvious, so I'm reaching out here to see if anyone has ideas of where to check.

Do you see any error in the elasticsearch log?

I need the following files to reproduce your issue:

  • kibana.yml
  • elasticsearch.yml
  • sg_config.yml

No errors related to searching. I was getting timeouts while gathering statistics, but the hardware was under heavy load and I'm migrating to new hardware. I also attempted writing the monitoring data to a host outside of the cluster, which appeared to accept the writes, but Kibana wouldn't show that cluster, only the separate cluster created by adding this extra node.
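One quick check (assuming you have Dev Tools access and a user allowed to see system indices) is whether the 7.7 cluster is actually writing its own monitoring indices. In 7.x the local exporter writes to `.monitoring-es-7-*`; if those indices exist and are growing but Kibana shows nothing, the problem is more likely read permissions for the monitoring user or a cluster-UUID mismatch than collection itself:

```
GET _cat/indices/.monitoring-es-7-*?v&s=index
```

If nothing comes back, collection or the exporter is the place to dig; if indices are listed, check what the `monitoring` user is permitted to read.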

PS: sneaky side request, it'd be really nice to be able to use the built-in SIEM with Search Guard; I think there's a dependency on X-Pack security that throws it for a loop.

PPS: I'm not able to see the Signals app in Kibana either; not sure why.

Kibana.yml:

logging.dest: C:\ELK\Kibana\Logs\Kibana.log
logging.timezone: "America/Chicago"
logging.rotate.enabled: true
server.host: "0.0.0.0"
server.name: "ES-WEB-01.my.domain"
server.maxPayloadBytes: 304857600
kibana.autocompleteTerminateAfter: 50000
###
# Disable unused XPack features
###
xpack.security.enabled: false
monitoring.enabled: true
monitoring.ui.elasticsearch.hosts:
  - https://es-web-01.my.domain:9200
monitoring.elasticsearch.ssl.certificateAuthorities: "C:\\ELK\\Certificates\\es-index-chain.pem"
monitoring.ui.elasticsearch.ssl.verificationMode: none
monitoring.ui.elasticsearch.username: monitoring
monitoring.ui.elasticsearch.password: 'removed'
# 30 minute reporting timeout (in ms)
xpack.reporting.queue.timeout: 1800000
# ~290MB
xpack.reporting.csv.maxSizeBytes: 304857600
xpack.encryptedSavedObjects.encryptionKey: 'removed'

elasticsearch.hosts: "https://es-web-01.my.domain:9200"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "removed"
elasticsearch.ssl.verificationMode: none
elasticsearch.ssl.certificateAuthorities: "C:\\ELK\\Kibana\\Config\\CA.pem"
elasticsearch.requestTimeout: 150000
elasticsearch.shardTimeout: 150000

# Default to session cookie
searchguard.cookie.ttl: 0
# Increase session TTL to 8 hours (in ms)
searchguard.session.ttl: 28800000
# choose a non-default encryption password for cookies
searchguard.cookie.password: 'removed'

Elasticsearch.yml:

bootstrap.memory_lock: true
cluster.initial_master_nodes:
  - ES-DATA-01
  - ES-DATA-02
  - ES-DATA-03
  - ES-DATA-04
  - ES-DATA-05
  - ES-DATA-06
  - ES-WEB-01
cluster.name: CLUSTER
discovery.seed_hosts:
  - ES-DATA-01
  - ES-DATA-02
  - ES-DATA-03
  - ES-DATA-04
  - ES-DATA-05
  - ES-DATA-06
  - ES-WEB-01
http.port: 9200
network.host: ES-WEB-01.my.domain
node.data: false
node.ingest: false
node.master: true
node.max_local_storage_nodes: 1
node.name: ES-WEB-01
node.voting_only: true
path.data: C:\ELK\Elasticsearch\Data
path.logs: C:\ELK\Elasticsearch\Logs
transport.tcp.port: 9300
xpack.license.self_generated.type: basic
xpack.security.enabled: false
gateway.recover_after_master_nodes: 2
indices.fielddata.cache.size: 1%
indices.memory.index_buffer_size: 5%
indices.queries.cache.size: 1%
indices.query.bool.max_clause_count: 8192
indices.recovery.max_bytes_per_sec: 500mb
indices.requests.cache.size: 1%
indices.breaker.total.limit: 95%
indices.breaker.fielddata.limit: 20%
indices.breaker.request.limit: 65%
network.breaker.inflight_requests.limit: 75%
indices.breaker.accounting.limit: 75%
node.ml: false
search.max_buckets: 100000
thread_pool.write.queue_size: 1000
transport.port: 9300
searchguard.audit.type: internal_elasticsearch
searchguard.audit.config.http_endpoints: 
  - es-index-01.my.domain:9200
  - es-index-02.my.domain:9200
searchguard.audit.ignore_requests:
  - "indices:data/read/*"
  - "indices:data/read/scroll/*"
  - "indices:data/write/*"
  - "SearchRequest"
  - "*_bulk"
  - "indices:admin/template/get"
  - "cluster:admin/ingest/pipeline/get"
  - "cluster:admin/xpack/monitoring/bulk"
  - "indices:admin/mapping/put"
searchguard.audit.config.enable_ssl: true
searchguard.audit.config.index: "'sg6-auditlog-'YYYY.MM.dd"
#searchguard.audit.config.disabled_rest_categories: NONE
searchguard.audit.config.disabled_transport_categories: NONE
searchguard.audit.config.pemtrustedcas_filepath: C:\ELK\Elasticsearch\Config\es-web-chain.pem
searchguard.audit.config.username: sg_audit
searchguard.audit.config.password: REMOVED
searchguard.authcz.admin_dn:
  - CN=SearchGuardAdmin.my.domain,REMOVED
searchguard.authcz.impersonation_dn:
  "CN=searchguardadmin.my.domain,REMOVED":
    - admin
searchguard.nodes_dn:
  - CN=es-web-*,REMOVED
  - CN=es-data-*,REMOVED
  - CN=es-index-*,REMOVED
  - CN=es-master-*,REMOVED
searchguard.restapi.roles_enabled:
  - sg_all_access
  - MY_rest_access
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: C:\ELK\Elasticsearch\Config\es-web-cert.crt
searchguard.ssl.http.pemkey_filepath: C:\ELK\Elasticsearch\Config\es-web-private.pk8
searchguard.ssl.http.pemtrustedcas_filepath: C:\ELK\Elasticsearch\Config\es-web-chain.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.pemcert_filepath: C:\ELK\Elasticsearch\Config\es-web-cert.crt
searchguard.ssl.transport.pemkey_filepath: C:\ELK\Elasticsearch\Config\es-web-private.pk8
searchguard.ssl.transport.pemtrustedcas_filepath: C:\ELK\Elasticsearch\Config\es-web-chain.pem
xpack.graph.enabled: false
xpack.logstash.enabled: false
xpack.ml.enabled: false
#xpack.monitoring.collection.enabled: true
#xpack.monitoring.enabled: true
#xpack.monitoring.exporters.cluster.auth.password: REMOVED
#xpack.monitoring.exporters.cluster.auth.username: monitoring
#xpack.monitoring.exporters.cluster.cluster_alerts.management.enabled: false
#xpack.monitoring.exporters.cluster.host:
#  - https://es-index-01.my.domain:9200
#  - https://es-index-02.my.domain:9200
#xpack.monitoring.exporters.cluster.ssl.certificate_authorities: C:\ELK\Elasticsearch\Config\InCommon_ECC_Server_CA.pem
#xpack.monitoring.exporters.cluster.type: http
xpack.monitoring.exporters.cluster-w1.type: local
xpack.watcher.enabled: false

sg_config.yml:

---
_sg_meta:
  type: "config"
  config_version: 2
sg_config:
  dynamic:
    filtered_alias_mode: "warn"
    disable_rest_auth: false
    disable_intertransport_auth: false
    respect_request_indices_options: false
    license: "REMOVED"
    kibana:
      multitenancy_enabled: true
      server_username: "kibanaserver"
      index: ".kibana"
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: "192\\.168\\.0\\.10|192\\.168\\.0\\.11"
        remoteIpHeader: "x-forwarded-for"
    authc:
      jwt_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 0
        http_authenticator:
          challenge: false
          type: "jwt"
          config:
            jwt_header: "Authorization"
            roles_key: null
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            subject_key: null
            jwt_url_parameter: null
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
      ldap:
        http_enabled: true
        transport_enabled: false
        order: 5
        http_authenticator:
          challenge: true
          type: "basic"
          config: {}
        authentication_backend:
          type: "ldap"
          config:
            bind_dn: "cd=AD ACCOUNT,REMOVED"
            verify_hostnames: "true"
            password: "REMOVED
            usersearch: "(userPrincipalName={0})"
            enable_ssl_client_auth: "false"
            hosts:
            - "ldap.my.domain:389"
            username_attribute: "userPrincipalName"
            userbase: "ou=users,REMOVED"
            enable_start_tls: "false"
            enable_ssl: "false"
        description: "Migrated from v6"
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 4
        http_authenticator:
          challenge: true
          type: "basic"
          config: {}
        authentication_backend:
          type: "intern"
          config: {}
        description: "Migrated from v6"
      proxy_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          challenge: false
          type: "proxy"
          config:
            roles_header: "x-proxy-roles"
            user_header: "x-proxy-user"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
      clientcert_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          challenge: false
          type: "clientcert"
          config:
            username_attribute: "cn"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          challenge: true
          type: "kerberos"
          config:
            strip_realm_from_principal: "true"
            krb_debug: "false"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
    authz:
      roles_from_another_ldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config: {}
        description: "Migrated from v6"
      roles_from_myldap:
        http_enabled: true
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config:
            verify_hostnames: "true"
            hosts:
            - "ldap.my.domain:389"
            bind_dn: "cn=AD Account, REMOVED"
            password: "REMOVED"
            userbase: "ou=users,REMOVED"
            usersearch: "(sAMAccountName={0})"
            username_attribute: "DistinguishedName"
            rolebase: "ou=groups,REMOVED"
            rolesearch: "(member={0})"
            rolename: "cn"
            enable_start_tls: "false"
            enable_ssl: "false"
            enable_ssl_client_auth: "false"
            resolve_nested_roles: "false"
            skip_users:
            - "kibanaserver"
            - "beats"
            - "elastalert"
            - "es-curator"
            - "logstash"
            - "monitoring"
            - "admin"
        description: "Migrated from v6"
    auth_failure_listeners: {}
    do_not_fail_on_forbidden: true
    multi_rolespan_enabled: false
    hosts_resolver_mode: "ip-only"
    transport_userrname_attribute: null
    do_not_fail_on_forbidden_empty: false

Hi @novaksam
I see you found the solution. Nice!

I had to increase the cluster_stats collection timeout:

PUT _cluster/settings
{
  "persistent": {
    "xpack.monitoring.collection.cluster.stats.timeout": "30s"
  }
}

Now I can see my nodes again.
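For anyone hitting the same symptom: since this is a dynamic persistent setting, you can confirm the override actually took effect by reading the cluster settings back, e.g.:

```
GET _cluster/settings?filter_path=persistent.xpack.monitoring
```

If the response contains the increased `timeout` value under `persistent`, the setting is live; an empty response means the PUT never applied (or was made transient instead of persistent).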