Enrich store should only update the policies via an update task. #41944
Conversation
Pinging @elastic/es-core-features
Would it make more sense to put the function last in the `updateClusterState` method, so that you don't have dangling args after it (`}, clusterService, handler);`)? But LGTM for sure. Great catch!
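The reviewer's suggestion is the common Java convention of placing a functional parameter last so the lambda closes the call site. A minimal sketch of the two orderings (the method bodies and names here are hypothetical, just to show both shapes compile to the same behavior):

```java
import java.util.function.UnaryOperator;

public class UpdateOrderSketch {
    // Function first: the lambda is followed by "dangling" trailing args.
    static String updateClusterState(UnaryOperator<String> update, String service, String handler) {
        return update.apply(service) + ":" + handler;
    }

    // Function last: the lambda's closing brace ends the whole call.
    static String updateClusterStateReordered(String service, String handler, UnaryOperator<String> update) {
        return update.apply(service) + ":" + handler;
    }

    public static void main(String[] args) {
        String a = updateClusterState(s -> s.toUpperCase(), "state", "handler");
        String b = updateClusterStateReordered("state", "handler", s -> s.toUpperCase());
        System.out.println(a.equals(b));
    }
}
```

With the function last, a multi-line lambda at the call site ends in `});` rather than `}, clusterService, handler);`, which is what the reviewer is pointing at.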
Sorry had one more Q :P
```java
throw new ResourceNotFoundException("policy [{}] not found", name);
}
updateClusterState(clusterService, handler, current -> {
    final Map<String, EnrichPolicy> policies = getPolicies(current);
```
Should the retrieval and throw happen outside of the task? The throw will now surface as an exception delivered to the handler passed in to EnrichStore, whereas before it was thrown synchronously to the caller.
We have to check whether the policy exists inside the update task; only there can we be certain the check is accurate. We could also check it outside the update task, before we update the cluster state, but then we would check it twice.
The error will be caught by ClusterService and passed down to the handler (via ClusterStateUpdateTask's onFailure() method), so I think we're good here?
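The error-routing the author describes can be sketched with a simplified stand-in for the cluster-state update machinery (the `updateClusterState` helper, the map-based state, and the handler shape below are all hypothetical simplifications, not the real ClusterService API): an exception thrown inside the task does not propagate to the caller but is caught and delivered to the handler, mirroring `ClusterStateUpdateTask#onFailure`.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.BiConsumer;
import java.util.function.UnaryOperator;

public class TaskFailureSketch {
    // Simplified stand-in for submitting a cluster-state update task:
    // run the update function, and route any thrown exception to the
    // caller's handler instead of letting it bubble up synchronously.
    static void updateClusterState(Map<String, String> state,
                                   UnaryOperator<Map<String, String>> update,
                                   BiConsumer<Boolean, Exception> handler) {
        try {
            Map<String, String> newState = update.apply(state);
            state.clear();
            state.putAll(newState);
            handler.accept(true, null);   // success path
        } catch (Exception e) {
            handler.accept(false, e);     // "onFailure" path: the handler sees the error
        }
    }

    public static void main(String[] args) {
        Map<String, String> policies = new HashMap<>();
        // Delete a policy that does not exist: the existence check runs
        // inside the task, and the not-found error reaches the handler.
        updateClusterState(policies, current -> {
            if (!current.containsKey("my-policy")) {
                throw new IllegalStateException("policy [my-policy] not found");
            }
            Map<String, String> copy = new HashMap<>(current);
            copy.remove("my-policy");
            return copy;
        }, (ok, e) -> System.out.println(ok ? "updated" : "failed: " + e.getMessage()));
    }
}
```

The caller still learns about the failure, just asynchronously through the handler rather than as a thrown exception.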
@elasticmachine run elasticsearch-ci/2
@elasticmachine run elasticsearch-ci/packaging-sample
If delete or put policy requests happen concurrently, updates may be missed without this change.
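The lost-update hazard motivating the PR can be illustrated with a simplified model (the single-threaded executor below stands in for the master's serialized update-task queue; all names are hypothetical): because every read-modify-write runs as a task on one thread, each task sees the result of the previous one, so concurrent put requests cannot overwrite each other's changes.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.function.UnaryOperator;

public class SerializedUpdatesSketch {
    // "Cluster state": read and replaced only on the single update thread.
    private static Map<String, String> state = new HashMap<>();
    // Single-threaded executor standing in for the serialized update-task
    // queue: tasks run one at a time, in order.
    private static final ExecutorService updateThread = Executors.newSingleThreadExecutor();

    static void submitUpdateTask(UnaryOperator<Map<String, String>> update) {
        updateThread.submit(() -> state = update.apply(state));
    }

    public static void main(String[] args) throws Exception {
        // Many client threads each add one policy. If each thread instead
        // copied the shared map itself and wrote it back, two threads could
        // copy the same old map and one put would be lost.
        Thread[] clients = new Thread[100];
        for (int i = 0; i < clients.length; i++) {
            final int n = i;
            clients[i] = new Thread(() -> submitUpdateTask(current -> {
                Map<String, String> copy = new HashMap<>(current);
                copy.put("policy-" + n, "config");
                return copy;
            }));
            clients[i].start();
        }
        for (Thread t : clients) {
            t.join();
        }
        updateThread.shutdown();
        updateThread.awaitTermination(5, TimeUnit.SECONDS);
        System.out.println(state.size()); // every update was applied
    }
}
```

Without the serialization, the count would intermittently come up short, which is exactly the missed-update scenario the change guards against.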