[CI] FullClusterRestartIT.testRecovery fails #51640
Labels
:Distributed Indexing/Recovery
Anything around constructing a new shard, either from a local or a remote source.
>test-failure
Triaged test failures from CI
Comments
Pinging @elastic/es-distributed (:Distributed/Recovery)
dnhatn added a commit that referenced this issue on Jan 30, 2020:
testRecovery relies on the fact that shards are not flushed on inactive. Our CI has recently been too slow: it took more than 20 minutes to complete the full cluster restart suite. This slowness caused some shards of testRecovery to be flushed on inactive. This commit increases the inactive time to 1h to reduce this noise. Closes #51640
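A minimal sketch of what such a change could look like, assuming the relevant knob is the node-level Elasticsearch setting indices.memory.shard_inactive_time (default 5m); the class and method names below are illustrative only, not the actual change made in the referenced commits:

// Hedged sketch: keep shards "active" for 1h so flush-on-inactive cannot
// kick in while the full cluster restart suite is still running on a slow CI worker.
import org.elasticsearch.common.settings.Settings;

public class RecoveryTestSettingsSketch {
    static Settings nodeSettings() {
        return Settings.builder()
                // indices.memory.shard_inactive_time is a node-level setting (default 5m)
                .put("indices.memory.shard_inactive_time", "1h")
                .build();
    }
}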
Four further commits with the same message referenced this issue on Jan 30 and Jan 31, 2020 (dnhatn).
Original report: There are several old issues like #46712 that look a bit like this, but all are closed, so I'm opening this one for the team to check whether this is of interest and/or related.
Could not reproduce locally with
Failure