Check pipeline shutdown status while waiting for valid connection or while issuing a bulk to ES #1119
Conversation
I tested a few scenarios:
A) Writing to ES when the API key is revoked and the pipeline is updated with a new API key: the pipeline restarts correctly ✅
B) Writing to ES when ES goes down, binds to a different port, and the pipeline is updated with the new port: the pipeline never terminates 🛑
B) seems to fail because the resurrectionist thread never terminates, likely due to https://github.com/logstash-plugins/logstash-output-elasticsearch/blob/main/lib/logstash/outputs/elasticsearch/http_client/pool.rb#L129C11-L146
From a local test, when Elasticsearch is not reachable I can see these repeated log lines:
Update: It's my fault, I've used …
The check at logstash-output-elasticsearch/lib/logstash/outputs/elasticsearch/http_client/pool.rb line 91 (at a0b4832) is invoked after the plugin has terminated its #multi_process, during the shutdown_workers execution of the LS pipeline.
A solution to exit the resurrectionist loop would be to propagate the …
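Presumably this means propagating the pipeline's shutdown state into the connection pool. A minimal sketch of the idea, not the actual pool.rb code (the @stopping flag, the healthcheck! helper and the injected @shutdown_requested callable are assumptions here):

    # Sketch: a resurrectionist loop that also exits when the owning
    # pipeline requests a shutdown, instead of only when the pool closes.
    def start_resurrectionist
      @resurrectionist = Thread.new do
        # @stopping is assumed to be set by Pool#close; @shutdown_requested
        # is a hypothetical callable propagated from the plugin/pipeline.
        until @stopping || @shutdown_requested&.call
          healthcheck! # probe dead URLs and resurrect the ones that answer
          sleep @resurrect_delay
        end
      end
    end

With such a check the thread exits on its own during shutdown instead of having to be killed.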
Commit 5e57c3a introduced aborting the batch when no connections are available in the pool and the pipeline is shutting down.
LGTM. I manually tested a few scenarios and ran the unit tests without the necessary patches; everything is working as expected.
Release notes
Introduce the ability to negatively acknowledge the batch under processing when the plugin is blocked in a retry-error loop and a shutdown is requested.
What does this PR do?
Updates the wait_for_successful_connection method and safe_bulk to react to shutdown requests.
When the plugin is stuck in a retry loop and a pipeline shutdown is requested, it now terminates and signals that the batch under processing shouldn't be acknowledged by raising an AbortedBatchException (see the sketch below).
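A minimal sketch of the behaviour, assuming the names visible in this PR (execution_context&.pipeline&.shutdown_requested? and the AbortedBatchException introduced by elastic/logstash#14940); the retry helper itself is hypothetical, not the plugin's literal code:

    # Hypothetical retry helper illustrating the abort-on-shutdown idea.
    def submit_with_retries(actions)
      loop do
        begin
          return safe_bulk(actions)
        rescue ::LogStash::Outputs::ElasticSearch::HttpClient::Pool::HostUnreachableError
          # If the pipeline is shutting down, stop retrying and negatively
          # ACK the in-flight batch so it is re-processed after the restart.
          if execution_context&.pipeline&.shutdown_requested?
            raise org.logstash.execution.AbortedBatchException.new
          end
          sleep 2 # back off before the next retry
        end
      end
    end

The batch is not lost: aborting without ACKing means the restarted pipeline picks it up and processes it again.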
Why is it important/What is the impact to the user?
Lets the user update the configuration of the plugin, when managed by CPM, recovering from a retry loop caused by bad configuration values (or expired credentials).
Checklist
[ ] I have made corresponding changes to the documentation
[ ] I have made corresponding changes to the default configuration files (and/or docker env variables)
Author's Checklist
How to test this PR locally
This PR has to be tested in lockstep with the changes to Logstash core applied in elastic/logstash#14940.
Check out a branch which contains that change, or a release if one has already been published.
The test plan requires an Elasticsearch instance in Elastic Cloud; record the following data:
cloud_id
cloud_auth
api_key
Test plan
After creating the Elastic deployment, execute the following steps:
1. Update the config/logstash.yml file to connect to central pipeline management (CPM) in Elastic Cloud (see the sketch after these steps).
2. Start Logstash (bin/logstash).
3. Corrupt the api_key, for example by removing the last char of the key (remember the correct key somewhere).
4. Restore the api_key. Without this fix the previous error lines continue and no shutdown and restart of the pipeline happens. With this change you should find a line like … and the pipeline effectively reloads and is back to normal functioning.
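For reference, a sketch of the CPM settings in config/logstash.yml; the values are placeholders and the setting names should be double-checked against your Logstash version:

    # config/logstash.yml -- placeholder values, adjust to your deployment
    xpack.management.enabled: true
    xpack.management.pipeline.id: ["my-managed-pipeline"]
    xpack.management.elasticsearch.cloud_id: "<cloud_id>"
    xpack.management.elasticsearch.cloud_auth: "<cloud_auth>"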
Related issues
Use cases
As a user of central pipeline management, I want the pipeline running the plugin to be effectively restarted with the updated values once a credential is updated, without manual intervention on the Logstash instance.
Screenshots
Logs