Hello there, I am new here. Can anyone explain the discussion?
It's getting hard to understand; a brief summary would help.
Maybe I can make some contribution.
Thanks in advance.
@Raj.Jikadra The version 2.0.0-rc is no longer available; please use 2.0.0 as per the SG documentation.
The latest Helm charts support version 2.0.
@peter82 Tested with SG 2.0.0 and ES 8.12.2, and signals are still being skipped, with the same setup as before on K8s with Helm.
@Raj.Jikadra How exactly did you verify the skip? Are you checking the entries in the execution history and the notifications in the output client (Email, Slack, Webhook, etc.), or just the execution history?
Try enabling TRACE logging for the Signals watches to verify whether the observed watch is actually skipped.
PUT /_cluster/settings
{
  "persistent": {
    "logger.org.elasticsearch.discovery": "TRACE"
  }
}
@pablo I verified it with the execution history in .signals_log*, and also with the output client (Elasticsearch).
I also tried TRACE logging, but there were no logs for the watches that are being skipped.
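For reference, checking the execution history can be done with a query along these lines (the watch_id field and the <my_watch_id> placeholder are illustrative; the actual field names in .signals_log* may differ):
GET .signals_log*/_search
{
  "size": 20,
  "query": {
    "term": {
      "watch_id": "<my_watch_id>"
    }
  }
}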
@Raj.Jikadra Could you try the following API for Signals TRACE logging?
PUT /_cluster/settings
{
  "transient": {
    "logger.com.floragunn.searchsupport.jobs": "trace"
  }
}
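Once you are done debugging, the transient setting can be reset by setting the logger back to null:
PUT /_cluster/settings
{
  "transient": {
    "logger.com.floragunn.searchsupport.jobs": null
  }
}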
Also, have you noticed any errors when writing the Signals watch executions to the .signals_log index?
@pablo Apologies for the delayed response.
I initially attempted to use TRACE logging for Signals, but unfortunately I couldn't find any logs related to signals being skipped. However, while reviewing the release notes of Search Guard, I came across a parameter: signals.worker_threads.pool.max_size (referenced in the Search Guard FLX 3.0.0 Changelog).
By setting this parameter to a value higher than my signal count, I was able to resolve the issue in my current setup.
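For anyone hitting the same issue, a rough sketch of how this could look, assuming the parameter is a node-level setting configured in elasticsearch.yml (the value 50 is just a placeholder; pick something above your actual watch count). With the Helm chart, the same line would typically go into the elasticsearch.yml section of the chart values, and if it is a static setting the nodes need to be restarted for it to take effect:
# elasticsearch.yml (illustrative; value is a placeholder)
signals.worker_threads.pool.max_size: 50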
Thank you to everyone who contributed and participated in this discussion.