Elasticsearch version: 7.8.0
Server OS version: Windows Server 2016
Kibana version (if relevant): 7.7-7.8
Browser version (if relevant):
Browser OS version (if relevant): Win/Mac
Describe the issue: After upgrading to ELK 7.7.0 last month I am no longer able to see monitoring data for my local cluster. I have opened an issue on the Kibana GitHub (Kibana 7.7.0 not displaying monitoring data · Issue #67688 · elastic/kibana), but I don't have a way to rule out the possibility that changes to Searchguard could be the issue.
The oddity is that when I upgraded to 7.7.0 I was still able to view the monitoring data from my 7.6.0 indexes, but nothing from the 7.7.0 ones.
I have audit logging turned on but that hasn’t provided anything obvious, so I’m reaching out here to see if anyone has ideas of where to check.
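One thing I can already check from Dev Tools is whether the local exporter is still creating new monitoring indices at all (a quick sanity check, assuming the default .monitoring-* index naming):
GET _cat/indices/.monitoring-*?v&s=index
If the 7.7.0-era indices show up here but not in the UI, the problem is presumably on the Kibana side rather than on the collection side.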
Do you see any errors in the Elasticsearch log?
I need the following files to reproduce your issue:
kibana.yml
elasticsearch.yml
sg_config.yml
No errors related to searching; I was getting timeouts on gathering statistics, but the hardware was under heavy load and I'm migrating to new hardware. I attempted writing the monitoring data to a host outside of the cluster, which appears to have accepted the writes, but Kibana wouldn't see that cluster; it only saw the separate cluster created by having this extra node.
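For reference, the external exporter I tried was along these lines (a sketch matching the commented-out block in my elasticsearch.yml below; the exporter name "cluster" and the es-index hosts are my own):
# Sketch: HTTP exporter shipping monitoring data to nodes outside the local cluster
xpack.monitoring.collection.enabled: true
xpack.monitoring.exporters.cluster.type: http
xpack.monitoring.exporters.cluster.host:
- https://es-index-01.my.domain:9200
- https://es-index-02.my.domain:9200
xpack.monitoring.exporters.cluster.auth.username: monitoring
xpack.monitoring.exporters.cluster.auth.password: REMOVED
xpack.monitoring.exporters.cluster.ssl.certificate_authorities: C:\ELK\Elasticsearch\Config\InCommon_ECC_Server_CA.pem
# Don't let the remote side manage cluster alerts
xpack.monitoring.exporters.cluster.cluster_alerts.management.enabled: false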
PS: sneaky side request: it'd be really nice to be able to use the built-in SIEM app with Searchguard; I think there's a dependency on X-Pack security that throws it for a loop.
PPS: I'm not able to see the Signals app in Kibana either; not sure why.
Kibana.yml:
logging.dest: C:\ELK\Kibana\Logs\Kibana.log
logging.timezone: "America/Chicago"
logging.rotate.enabled: true
server.host: "0.0.0.0"
server.name: "ES-WEB-01.my.domain"
server.maxPayloadBytes: 304857600
kibana.autocompleteTerminateAfter: 50000
###
# Disable unused XPack features
###
xpack.security.enabled: false
monitoring.enabled: true
monitoring.ui.elasticsearch.hosts:
- https://es-web-01.my.domain:9200
monitoring.elasticsearch.ssl.certificateAuthorities: "C:\\ELK\\Certificates\\es-index-chain.pem"
monitoring.ui.elasticsearch.ssl.verificationMode: none
monitoring.ui.elasticsearch.username: monitoring
monitoring.ui.elasticsearch.password: 'removed'
# 30 minute reporting timeout (in ms)
xpack.reporting.queue.timeout: 1800000
# ~290MB
xpack.reporting.csv.maxSizeBytes: 304857600
xpack.encryptedSavedObjects.encryptionKey: 'removed'
elasticsearch.hosts: "https://es-web-01.my.domain:9200"
elasticsearch.username: "kibanaserver"
elasticsearch.password: "removed"
elasticsearch.ssl.verificationMode: none
elasticsearch.ssl.certificateAuthorities: "C:\\ELK\\Kibana\\Config\\CA.pem"
elasticsearch.requestTimeout: 150000
elasticsearch.shardTimeout: 150000
# Default to session cookie
searchguard.cookie.ttl: 0
# Increase session TTL to 8 hours (in ms)
searchguard.session.ttl: 28800000
# choose a non-default encryption password for cookies
searchguard.cookie.password: 'removed'
Elasticsearch.yml:
bootstrap.memory_lock: true
cluster.initial_master_nodes:
- ES-DATA-01
- ES-DATA-02
- ES-DATA-03
- ES-DATA-04
- ES-DATA-05
- ES-DATA-06
- ES-WEB-01
cluster.name: CLUSTER
discovery.seed_hosts:
- ES-DATA-01
- ES-DATA-02
- ES-DATA-03
- ES-DATA-04
- ES-DATA-05
- ES-DATA-06
- ES-WEB-01
http.port: 9200
network.host: ES-WEB-01.my.domain
node.data: false
node.ingest: false
node.master: true
node.max_local_storage_nodes: 1
node.name: ES-WEB-01
node.voting_only: true
path.data: C:\ELK\Elasticsearch\Data
path.logs: C:\ELK\Elasticsearch\Logs
transport.tcp.port: 9300
xpack.license.self_generated.type: basic
xpack.security.enabled: false
gateway.recover_after_master_nodes: 2
indices.fielddata.cache.size: 1%
indices.memory.index_buffer_size: 5%
indices.queries.cache.size: 1%
indices.query.bool.max_clause_count: 8192
indices.recovery.max_bytes_per_sec: 500mb
indices.requests.cache.size: 1%
indices.breaker.total.limit: 95%
indices.breaker.fielddata.limit: 20%
indices.breaker.request.limit: 65%
network.breaker.inflight_requests.limit: 75%
indices.breaker.accounting.limit: 75%
node.ml: false
search.max_buckets: 100000
thread_pool.write.queue_size: 1000
transport.port: 9300
searchguard.audit.type: internal_elasticsearch
searchguard.audit.config.http_endpoints:
- es-index-01.my.domain:9200
- es-index-02.my.domain:9200
searchguard.audit.ignore_requests:
- "indices:data/read/*"
- "indices:data/read/scroll/*"
- "indices:data/write/*"
- "SearchRequest"
- "*_bulk"
- "indices:admin/template/get"
- "cluster:admin/ingest/pipeline/get"
- "cluster:admin/xpack/monitoring/bulk"
- "indices:admin/mapping/put"
searchguard.audit.config.enable_ssl: true
searchguard.audit.config.index: "'sg6-auditlog-'YYYY.MM.dd"
#searchguard.audit.config.disabled_rest_categories: NONE
searchguard.audit.config.disabled_transport_categories: NONE
searchguard.audit.config.pemtrustedcas_filepath: C:\ELK\Elasticsearch\Config\es-web-chain.pem
searchguard.audit.config.username: sg_audit
searchguard.audit.config.password: REMOVED
searchguard.authcz.admin_dn:
- CN=SearchGuardAdmin.my.domain,REMOVED
searchguard.authcz.impersonation_dn:
  "CN=searchguardadmin.my.domain,REMOVED":
    - admin
searchguard.nodes_dn:
- CN=es-web-*,REMOVED
- CN=es-data-*,REMOVED
- CN=es-index-*,REMOVED
- CN=es-master-*,REMOVED
searchguard.restapi.roles_enabled:
- sg_all_access
- MY_rest_access
searchguard.ssl.http.enabled: true
searchguard.ssl.http.pemcert_filepath: C:\ELK\Elasticsearch\Config\es-web-cert.crt
searchguard.ssl.http.pemkey_filepath: C:\ELK\Elasticsearch\Config\es-web-private.pk8
searchguard.ssl.http.pemtrustedcas_filepath: C:\ELK\Elasticsearch\Config\es-web-chain.pem
searchguard.ssl.transport.enforce_hostname_verification: false
searchguard.ssl.transport.pemcert_filepath: C:\ELK\Elasticsearch\Config\es-web-cert.crt
searchguard.ssl.transport.pemkey_filepath: C:\ELK\Elasticsearch\Config\es-web-private.pk8
searchguard.ssl.transport.pemtrustedcas_filepath: C:\ELK\Elasticsearch\Config\es-web-chain.pem
xpack.graph.enabled: false
xpack.logstash.enabled: false
xpack.ml.enabled: false
#xpack.monitoring.collection.enabled: true
#xpack.monitoring.enabled: true
#xpack.monitoring.exporters.cluster.auth.password: REMOVED
#xpack.monitoring.exporters.cluster.auth.username: monitoring
#xpack.monitoring.exporters.cluster.cluster_alerts.management.enabled: false
#xpack.monitoring.exporters.cluster.host:
# - https://es-index-01.my.domain:9200
# - https://es-index-02.my.domain:9200
#xpack.monitoring.exporters.cluster.ssl.certificate_authorities: C:\ELK\Elasticsearch\Config\InCommon_ECC_Server_CA.pem
#xpack.monitoring.exporters.cluster.type: http
xpack.monitoring.exporters.cluster-w1.type: local
xpack.watcher.enabled: false
sg_config.yml:
---
_sg_meta:
  type: "config"
  config_version: 2
sg_config:
  dynamic:
    filtered_alias_mode: "warn"
    disable_rest_auth: false
    disable_intertransport_auth: false
    respect_request_indices_options: false
    license: "REMOVED"
    kibana:
      multitenancy_enabled: true
      server_username: "kibanaserver"
      index: ".kibana"
    http:
      anonymous_auth_enabled: false
      xff:
        enabled: false
        internalProxies: "192\\.168\\.0\\.10|192\\.168\\.0\\.11"
        remoteIpHeader: "x-forwarded-for"
    authc:
      jwt_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 0
        http_authenticator:
          challenge: false
          type: "jwt"
          config:
            jwt_header: "Authorization"
            roles_key: null
            signing_key: "base64 encoded HMAC key or public RSA/ECDSA pem key"
            subject_key: null
            jwt_url_parameter: null
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
      ldap:
        http_enabled: true
        transport_enabled: false
        order: 5
        http_authenticator:
          challenge: true
          type: "basic"
          config: {}
        authentication_backend:
          type: "ldap"
          config:
            bind_dn: "cn=AD ACCOUNT,REMOVED"
            verify_hostnames: "true"
            password: "REMOVED"
            usersearch: "(userPrincipalName={0})"
            enable_ssl_client_auth: "false"
            hosts:
              - "ldap.my.domain:389"
            username_attribute: "userPrincipalName"
            userbase: "ou=users,REMOVED"
            enable_start_tls: "false"
            enable_ssl: "false"
        description: "Migrated from v6"
      basic_internal_auth_domain:
        http_enabled: true
        transport_enabled: true
        order: 4
        http_authenticator:
          challenge: true
          type: "basic"
          config: {}
        authentication_backend:
          type: "intern"
          config: {}
        description: "Migrated from v6"
      proxy_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 3
        http_authenticator:
          challenge: false
          type: "proxy"
          config:
            roles_header: "x-proxy-roles"
            user_header: "x-proxy-user"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
      clientcert_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 2
        http_authenticator:
          challenge: false
          type: "clientcert"
          config:
            username_attribute: "cn"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
      kerberos_auth_domain:
        http_enabled: false
        transport_enabled: false
        order: 6
        http_authenticator:
          challenge: true
          type: "kerberos"
          config:
            strip_realm_from_principal: "true"
            krb_debug: "false"
        authentication_backend:
          type: "noop"
          config: {}
        description: "Migrated from v6"
    authz:
      roles_from_another_ldap:
        http_enabled: false
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config: {}
        description: "Migrated from v6"
      roles_from_myldap:
        http_enabled: true
        transport_enabled: false
        authorization_backend:
          type: "ldap"
          config:
            verify_hostnames: "true"
            hosts:
              - "ldap.my.domain:389"
            bind_dn: "cn=AD Account,REMOVED"
            password: "REMOVED"
            userbase: "ou=users,REMOVED"
            usersearch: "(sAMAccountName={0})"
            username_attribute: "DistinguishedName"
            rolebase: "ou=groups,REMOVED"
            rolesearch: "(member={0})"
            rolename: "cn"
            enable_start_tls: "false"
            enable_ssl: "false"
            enable_ssl_client_auth: "false"
            resolve_nested_roles: "false"
            skip_users:
              - "kibanaserver"
              - "beats"
              - "elastalert"
              - "es-curator"
              - "logstash"
              - "monitoring"
              - "admin"
        description: "Migrated from v6"
    auth_failure_listeners: {}
    do_not_fail_on_forbidden: true
    multi_rolespan_enabled: false
    hosts_resolver_mode: "ip-only"
    transport_userrname_attribute: null
    do_not_fail_on_forbidden_empty: false
srgbnd (July 20, 2020, 8:19am):
Hi @novaksam
I see you found the solution. Nice!
I had to increase the cluster_stats collection timeout:
PUT _cluster/settings
{
  "persistent": {
    "xpack": {
      "monitoring": {
        "collection": {
          "cluster": {
            "stats": {
              "timeout": "30s"
            }
          }
        }
      }
    }
  }
}
So I can see my nodes again.
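If anyone wants to double-check that the setting stuck, something like this should show the persisted value (a sketch; filter_path just trims the response down to the monitoring settings):
GET _cluster/settings?filter_path=persistent.xpack.monitoring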
(Quoted from the GitHub issue; opened 07:54PM 28 May 20 UTC, closed 05:38PM 08 Jul 20 UTC; labels: triage_needed, Team:Monitoring, Feature:Stack Monitoring)
**Kibana version:** 7.7.0
**Elasticsearch version:** 7.7.0 /w Searchguard
**Server OS version:** Windows Server 2016
**Browser version:** Chrome 81.0.4044.138
**Browser OS version:** macOS 10.15.5
**Original install method (e.g. download page, yum, from source, etc.):** Downloaded zip, run as service using NSSM
**Describe the bug:** I upgraded from 7.6.0 to 7.7.0 yesterday, and I am unable to view monitoring data from indexes created today when the time range only includes today; if I switch the time range to the last 24 hours, I can see both yesterday and today.
Looking at the requests, the calls to /api/monitoring/v1/clusters for today are returning:
```[]```
but for the last 24 hours:
```
[{"isSupported":true,"cluster_uuid":"lsypISfQSMeiHafYUr7g7w","cluster_name":"UWSP-ELK","version":"7.6.0","license":{"status":"active","type":"basic"},"elasticsearch":{"cluster_stats":{"indices":{"count":2493,"docs":{"count":25023665294,"deleted":2161693462},"shards":{"total":4926,"primaries":4794,"replication":0.02753441802252816,"index":{"shards":{"min":1,"max":10,"avg":1.9759326113116726},"primaries":{"min":1,"max":10,"avg":1.9229843561973525},"replication":{"min":0,"max":1,"avg":0.041716807059767345}}},"store":{"size_in_bytes":14117561236908}},"nodes":{"fs":{"total_in_bytes":18826693398528,"free_in_bytes":4553604153344,"available_in_bytes":4553604153344},"count":{"total":11},"jvm":{"max_uptime_in_millis":36032536,"mem":{"heap_used_in_bytes":67447545160,"heap_max_in_bytes":120259084288}}},"status":"green"}},"kibana":{},"isPrimary":false,"status":"green","isCcrEnabled":false}]
```
Checking to see if I had any .monitoring* indexes with a different cluster UUID didn't return any:
```
POST .monitoring-*-7-2020.05.28/_search
{
  "query": {
    "bool": {
      "must_not": [
        {
          "term": {
            "cluster_uuid": {
              "value": "lsypISfQSMeiHafYUr7g7w"
            }
          }
        }
      ]
    }
  }
}
```
Produced:
```
{
  "took" : 4481,
  "timed_out" : false,
  "_shards" : {
    "total" : 4,
    "successful" : 4,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  }
}
```
In the monitoring app, I get the following dialog:
We checked the cluster defaults settings for xpack.monitoring.exporters, and found the reason: Remote exporters indicate a possible misconfiguration: cluster.
I'm using an http endpoint, and "cluster" is the name of my exporter (which is actually my local cluster). The settings I was using in 7.6 do appear to still be valid for 7.7, so I'm not sure why Kibana isn't able to identify the cluster.
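For what it's worth, the exporter settings the UI complains about can be inspected roughly like this (a sketch; `include_defaults` pulls in anything not set explicitly):
```
GET _cluster/settings?include_defaults=true&filter_path=*.xpack.monitoring.exporters
```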
**Steps to reproduce:**
1. Log in to Kibana
2. Go to monitoring app
3. No monitoring data found
**Expected behavior:** Displaying monitoring data
**Screenshots (if relevant):** N/A
**Errors in browser console (if relevant):** N/A
**Provide logs and/or server output (if relevant):** N/A
**Any additional context:**