Cannot backup SG7

Hi, I’m trying to back up my SG7 config on Elasticsearch 7.10.1, but I get the output below. Any ideas would be appreciated.

Also: this used to work! :slight_smile:

Search Guard Admin v7
Will connect to erfelastbd01.erfk.kozpont.otp:9300 ... done
Connected as CN=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Elasticsearch Version: 7.10.1
Search Guard Version: 7.10.1-48.0.0
Contacting elasticsearch cluster 'elasticsearch' and wait for YELLOW clusterstate ...
Clustername: dev
Clusterstate: YELLOW
Number of nodes: 11
Number of data nodes: 6
searchguard index already exists, so we do not need to create one.
Will retrieve '_doc/config' into /root/searchguard_backup_2021-01-14T23:09Z/sg_config.yml
FAIL: Get configuration for 'config' because it does not exist
Will retrieve '_doc/roles' into /root/searchguard_backup_2021-01-14T23:09Z/sg_roles.yml
FAIL: Get configuration for 'roles' because it does not exist
Will retrieve '_doc/rolesmapping' into /root/searchguard_backup_2021-01-14T23:09Z/sg_roles_mapping.yml
FAIL: Get configuration for 'rolesmapping' because it does not exist
Will retrieve '_doc/internalusers' into /root/searchguard_backup_2021-01-14T23:09Z/sg_internal_users.yml
FAIL: Get configuration for 'internalusers' because it does not exist
Will retrieve '_doc/actiongroups' into /root/searchguard_backup_2021-01-14T23:09Z/sg_action_groups.yml
FAIL: Get configuration for 'actiongroups' because it does not exist
Will retrieve '_doc/tenants' into /root/searchguard_backup_2021-01-14T23:09Z/sg_tenants.yml
FAIL: Get configuration for 'tenants' because it does not exist
Will retrieve '_doc/blocks' into /root/searchguard_backup_2021-01-14T23:09Z/sg_blocks.yml
FAIL: Get configuration for 'blocks' because it does not exist

Hi. Please share the full command you executed to obtain this result. Also, share the Elasticsearch log.

Here’s the command I issued:
. /usr/share/elasticsearch/plugins/search-guard-7/tools/sgadmin.sh -backup /root/searchguard_backup_2021-01-14T23:09Z -cert test.pem -key test.key -cacert ca.pem -nhnv -icl -nrhn -h master01.example.com

And the debug log (anonymized):

[root@master01 ~]# cat dev.log 
[2021-01-21T08:21:45,486][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Check authdomain for rest internal/0 or 2 in total
[2021-01-21T08:21:45,486][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Rest user 'User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE]' is authenticated
[2021-01-21T08:21:45,486][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] sgtenant 'null'
[2021-01-21T08:21:45,486][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] ### evaluate permissions for User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE] on master01.example.com
[2021-01-21T08:21:45,487][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] action: indices:data/write/bulk (BulkRequest)
[2021-01-21T08:21:45,487][DEBUG][c.f.s.r.IndexResolverReplacer] [master01.example.com] Resolve aliases, indices and types from BulkRequest
[2021-01-21T08:21:45,487][DEBUG][c.f.s.r.IndexResolverReplacer] [master01.example.com] Resolved pattern [.monitoring-beats-7-2021.01.21] to [.monitoring-beats-7-2021.01.21]
[2021-01-21T08:21:45,487][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] requestedResolved : Resolved [aliases=[], indices=[.monitoring-beats-7-2021.01.21], allIndices=[.monitoring-beats-7-2021.01.21], types=[*], originalRequested=[.monitoring-beats-7-2021.01.21], remoteIndices=[]]
[2021-01-21T08:21:45,487][DEBUG][c.f.s.s.C.SgRoles        ] [master01.example.com] No masked fields found for User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE] in 2 sg roles
[2021-01-21T08:21:45,487][DEBUG][c.f.s.s.C.SgRoles        ] [master01.example.com] No fls or dls found for User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE] in 2 sg roles
[2021-01-21T08:21:45,487][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [master01.example.com] raw requestedTenant: 'null'
[2021-01-21T08:21:45,487][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] Result from privileges interceptor for cluster perm: null
[2021-01-21T08:21:45,487][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] Allowed because we have cluster permissions for indices:data/write/bulk
[2021-01-21T08:21:45,487][DEBUG][c.f.s.f.SearchGuardFilter] [master01.example.com] PrivEvalResponse [allowed=true, missingPrivileges=[], allowedFlsFields=null, maskedFields=null, queries=null]
[2021-01-21T08:21:45,487][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] ### evaluate permissions for User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE] on master01.example.com
[2021-01-21T08:21:45,487][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] action: indices:data/write/bulk[s] (BulkShardRequest)
[2021-01-21T08:21:45,487][DEBUG][c.f.s.r.IndexResolverReplacer] [master01.example.com] Resolve aliases, indices and types from BulkShardRequest
[2021-01-21T08:21:45,488][DEBUG][c.f.s.r.IndexResolverReplacer] [master01.example.com] Resolved pattern [.monitoring-beats-7-2021.01.21] to [.monitoring-beats-7-2021.01.21]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] requestedResolved : Resolved [aliases=[], indices=[.monitoring-beats-7-2021.01.21], allIndices=[.monitoring-beats-7-2021.01.21], types=[*], originalRequested=[.monitoring-beats-7-2021.01.21], remoteIndices=[]]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.s.C.SgRoles        ] [master01.example.com] No masked fields found for User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE] in 2 sg roles
[2021-01-21T08:21:45,488][DEBUG][c.f.s.s.C.SgRoles        ] [master01.example.com] No fls or dls found for User [name=someuser, backend_roles=[LIST_OF_ROLES_HERE] in 2 sg roles
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] Additional permissions required: [indices:data/write/index, indices:data/write/bulk[s]]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] requested [indices:data/write/index, indices:data/write/bulk[s]] from 192.168.111.119:35986
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] requested resolved indextypes: Resolved [aliases=[], indices=[.monitoring-beats-7-2021.01.21], allIndices=[.monitoring-beats-7-2021.01.21], types=[*], originalRequested=[.monitoring-beats-7-2021.01.21], remoteIndices=[]]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] sgr: [sg_all_access, sg_own_index]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.c.PrivilegesInterceptorImpl] [master01.example.com] raw requestedTenant: 'null'
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] Result from privileges interceptor: null
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] sgr2: [sg_all_access, sg_own_index]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.p.PrivilegesEvaluator] [master01.example.com] Allowed because we have all indices permissions for indices:data/write/bulk[s]
[2021-01-21T08:21:45,488][DEBUG][c.f.s.f.SearchGuardFilter] [master01.example.com] PrivEvalResponse [allowed=true, missingPrivileges=[indices:data/write/index, indices:data/write/bulk[s]], allowedFlsFields=null, maskedFields=null, queries=null]
[2021-01-21T08:21:45,900][DEBUG][c.f.s.j.c.IndexJobStateStore] [master01.example.com] acquireNextTriggers(noLaterThan = Thu Jan 21 08:22:15 CET 2021, maxCount = 1, timeWindow =0) for IndexJobStateStore [schedulerName=signals/ba, statusIndexName=.signals_watches_trigger_state, jobConfigSource=IndexJobConfigSource [indexName=.signals_watches, jobFactory=com.floragunn.signals.watch.Watch$JobConfigFactory@4a4d8b67, jobDistributor=JobDistributor signals/ba], jobFactory=com.floragunn.signals.watch.Watch$JobConfigFactory@4a4d8b67]
[2021-01-21T08:21:45,900][DEBUG][c.f.s.j.c.IndexJobStateStore] [master01.example.com] Number of active triggers: 0
[2021-01-21T08:21:45,900][DEBUG][o.q.c.QuartzSchedulerThread] [master01.example.com] batch acquisition of 0 triggers
[2021-01-21T08:21:47,059][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x8338225a, L:/192.168.111.111:9300 - R:/192.168.111.111:54406] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,075][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54406
[2021-01-21T08:21:47,081][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x8338225a, L:/192.168.111.111:9300 - R:/192.168.111.111:54406] Swallowing a harmless 'connection reset by peer / broken pipe' error that occurred while writing close_notify in response to the peer's close_notify
java.net.SocketException: Connection reset
	at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:367) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:398) ~[?:?]
	at org.elasticsearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:141) ~[transport-netty4-client-7.10.1.jar:7.10.1]
	at org.elasticsearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:126) ~[transport-netty4-client-7.10.1.jar:7.10.1]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:832) [?:?]
[2021-01-21T08:21:47,138][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0xd381041d, L:/192.168.111.111:9300 - R:/192.168.111.111:54412] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,139][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0xd4414c3c, L:/192.168.111.111:9300 - R:/192.168.111.111:54416] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,139][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x15f8fbc0, L:/192.168.111.111:9300 - R:/192.168.111.111:54422] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,139][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x0c6b20c7, L:/192.168.111.111:9300 - R:/192.168.111.111:54424] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,139][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0xeef59560, L:/192.168.111.111:9300 - R:/192.168.111.111:54418] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,140][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x329940e3, L:/192.168.111.111:9300 - R:/192.168.111.111:54426] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,141][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x345c1c1a, L:/192.168.111.111:9300 - R:/192.168.111.111:54410] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,142][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x5f65bb23, L:/192.168.111.111:9300 - R:/192.168.111.111:54428] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,145][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x27b0dd1b, L:/192.168.111.111:9300 - R:/192.168.111.111:54430] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,145][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0xc6a3cbf4, L:/192.168.111.111:9300 - R:/192.168.111.111:54420] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,147][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54426
[2021-01-21T08:21:47,147][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0xbcdf819f, L:/192.168.111.111:9300 - R:/192.168.111.111:54414] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,151][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54428
[2021-01-21T08:21:47,154][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54430
[2021-01-21T08:21:47,157][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0xf9bcb6b1, L:/192.168.111.111:9300 - R:/192.168.111.111:54434] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,157][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x0a7f6eb6, L:/192.168.111.111:9300 - R:/192.168.111.111:54432] HANDSHAKEN: protocol:TLSv1.2 cipher suite:TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
[2021-01-21T08:21:47,207][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54432
[2021-01-21T08:21:47,252][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54434
[2021-01-21T08:21:47,257][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54424
[2021-01-21T08:21:47,284][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54426
[2021-01-21T08:21:47,293][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54428
[2021-01-21T08:21:47,314][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54430
[2021-01-21T08:21:47,317][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54432
[2021-01-21T08:21:47,319][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54434
[2021-01-21T08:21:47,322][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54424
[2021-01-21T08:21:47,324][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54426
[2021-01-21T08:21:47,326][DEBUG][c.f.s.a.BackendRegistry  ] [master01.example.com] Transport authentication request from 192.168.111.111:54428
[2021-01-21T08:21:47,329][DEBUG][i.n.h.s.SslHandler       ] [master01.example.com] [id: 0x329940e3, L:/192.168.111.111:9300 - R:/192.168.111.111:54426] Swallowing a harmless 'connection reset by peer / broken pipe' error that occurred while writing close_notify in response to the peer's close_notify
java.net.SocketException: Connection reset
	at sun.nio.ch.SocketChannelImpl.throwConnectionReset(SocketChannelImpl.java:367) ~[?:?]
	at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:398) ~[?:?]
	at org.elasticsearch.transport.CopyBytesSocketChannel.readFromSocketChannel(CopyBytesSocketChannel.java:141) ~[transport-netty4-client-7.10.1.jar:7.10.1]
	at org.elasticsearch.transport.CopyBytesSocketChannel.doReadBytes(CopyBytesSocketChannel.java:126) ~[transport-netty4-client-7.10.1.jar:7.10.1]
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:148) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:714) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:615) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:578) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:493) [netty-transport-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) [netty-common-4.1.49.Final.jar:4.1.49.Final]
	at java.lang.Thread.run(Thread.java:832) [?:?]
[root@master01 ~]#

Can you access Elasticsearch from the host where you run the backup command? Try the following commands.

Check if the ports are open.

nc -zv master01.example.com 9300
nc -zv master01.example.com 9200
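
sgadmin talks to the transport port (9300) rather than the REST port, so it can also help to check the TLS handshake there directly. A minimal sketch with openssl, reusing the host name and certificate files from your sgadmin command (adjust paths if yours differ):

# Handshake against the transport port with the admin certificate (same files as in the sgadmin command)
openssl s_client -connect master01.example.com:9300 \
   -cert test.pem -key test.key -CAfile ca.pem </dev/null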

Check if the certificates are valid.

curl -XGET "https://master01.example.com:9200/_searchguard/authinfo?pretty" \
   --key "test.key" \
   --cert "test.pem" \
   --cacert "ca.pem"
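
If that call fails, it is also worth double-checking the admin certificate itself. A quick sketch, assuming test.pem is the admin certificate used above:

# Print the certificate subject and validity dates
openssl x509 -in test.pem -noout -subject -dates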

Check the cluster health.

curl -k -u admin:admin -X GET https://localhost:9200/_cluster/health?pretty
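
Because sgadmin reports that the configuration documents do not exist, it may also help to list what the searchguard index actually contains. A hedged sketch using the same admin certificate as above (the index name searchguard is taken from your sgadmin output; reading it normally requires the admin certificate):

# List the documents stored in the searchguard config index (IDs only, no source)
curl -XGET "https://master01.example.com:9200/searchguard/_search?pretty&size=20&_source=false" \
   --key "test.key" \
   --cert "test.pem" \
   --cacert "ca.pem"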

Also, please send the following configuration files:

  • elasticsearch.yml
  • sg_config.yml
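
For elasticsearch.yml, the Search Guard related settings are usually the relevant part; a simple way to extract them (the path assumes the default package install location):

grep -i searchguard /etc/elasticsearch/elasticsearch.yml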

This topic was automatically closed 21 days after the last reply. New replies are no longer allowed.