Ensure no Watches are running after Watcher is stopped.
Watcher keeps track of which watches are currently running, keyed by watch name/id.
If a watch is currently running, Watcher will not run the same watch again; the attempt
instead results in the message "Watch is already queued in thread pool" and the state
"not_executed_already_queued".
When Watcher is stopped, it rejects any new watches but allows the currently running
watches to run to completion. Waiting for the currently running watches to complete is
done asynchronously to the stopping of Watcher, meaning that Watcher will report as fully
stopped while a background thread is still waiting for all of the watches to finish
before it removes them from its list of currently running watches.
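A hedged sketch of this pre-fix async behavior, reusing the hypothetical `RunningWatches` registry above; note that `stop()` returns, and Watcher reports "stopped", before the drain completes:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Hypothetical sketch of the pre-fix behavior: the drain happens on a generic-pool
// thread, so "stopped" is observable before the last in-flight watch finishes.
final class AsyncStop {
    private final RunningWatches running = new RunningWatches(); // sketch from above
    private final ExecutorService genericPool = Executors.newCachedThreadPool();
    private volatile boolean stopped;

    void stop() {
        stopped = true; // Watcher now reports "stopped" and rejects new watches ...
        genericPool.submit(() -> {
            try {
                running.awaitEmpty(); // ... but waiting for in-flight watches is async
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        // stop() returns here, possibly while watches are still running
    }
}
```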
The integration tests start and stop Watcher between each test, with the goal of ensuring
a clean state between tests. However, since Watcher can report "yes, I am stopped" while
there are still running watches, the tests may bleed into each other, especially on slow
machines. This can result in errors related to "Watch is already queued in thread pool"
and the state "not_executed_already_queued", and is VERY difficult to reproduce.
This commit changes the waiting for watches on stop/pause from an async wait back to a
sync wait, as it worked prior to elastic#30118. This helps make the stop much more
predictable for testing scenarios, such that once fully stopped, no watches are running.
It should have little if any impact on production code, since Watcher is not
stopped/paused often, and when it is, the behavior is the same; the wait simply runs on
the calling thread instead of a generic thread.
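A sketch of the post-fix sync behavior under the same assumptions: `stop()` blocks the calling thread until every in-flight watch has finished, so "stopped" implies no watches are still running:

```java
// Hypothetical sketch of the post-fix behavior: the calling thread itself waits
// for the drain, so by the time stop() returns the registry is empty.
final class SyncStop {
    private final RunningWatches running = new RunningWatches(); // sketch from above
    private volatile boolean stopped;

    void stop() throws InterruptedException {
        stopped = true;       // reject new watches
        running.awaitEmpty(); // block the calling thread until fully drained
        // only now does stop() return: no watches can still be running
    }
}
```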