Index with no changes gets new sync_id when becoming inactive. #27838
Labels
:Distributed Indexing/Engine
Anything around managing Lucene and the Translog in an open shard.
Comments
dnhatn added a commit to dnhatn/elasticsearch that referenced this issue on Mar 16, 2018
Today the synced-flush always issues a new sync-id even when no shards have changed since the last seal. This causes active shards to have a different sync-id from offline shards even though all were sealed and there have been no writes since then. This commit adjusts the synced-flush not to renew the sync-id if all active shards are already sealed with the same sync-id. Closes elastic#27838
I opened #29103 to propose an enhancement.
dnhatn added further commits that referenced this issue on Mar 16, Mar 17, and Mar 20, 2018, each carrying the same commit message as above and closing #27838.
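For context, the change described in the commit message above boils down to one extra check in the synced-flush path: if every active copy of a shard is already sealed with the same sync-id and nothing has been written since, that sync-id is reused instead of a new one being generated. A minimal sketch of that decision, with data shapes that are assumptions for illustration rather than the actual Elasticsearch internals:

```python
import uuid

def choose_sync_id(active_copies):
    """Pick the sync-id to seal shard copies with during a synced flush.

    active_copies: one dict per active shard copy, e.g.
    {"existing_sync_id": "abc123", "pending_ops": 0}.
    These shapes are illustrative, not the real Elasticsearch classes.
    """
    existing_ids = {copy.get("existing_sync_id") for copy in active_copies}
    nothing_written = all(copy["pending_ops"] == 0 for copy in active_copies)

    if nothing_written and len(existing_ids) == 1 and None not in existing_ids:
        # Every active copy already carries the same sync-id and there have
        # been no writes since the last seal: reuse it, so offline copies that
        # hold the same id still match on restart and can recover locally.
        return existing_ids.pop()

    # Otherwise behave as before and issue a fresh sync-id.
    return str(uuid.uuid4())
```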
Elasticsearch version (bin/elasticsearch --version): Version: 5.5.2, Build: b2f0c09/2017-08-14T12:33:14.154Z, JVM: 1.8.0_144
Plugins installed: []
JVM version (java -version): java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)
OS version (uname -a if on a Unix-like system): Linux xxx 3.13.0-33-generic #58-Ubuntu SMP Tue Jul 29 16:45:05 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
The problem is that restarting a node takes very long. It takes several hours for the cluster to become green after restarting just a single node, even when there is no indexing and a synced flush was executed successfully. The restarted node should be able to recover from local files, since they are up to date, but it doesn't because the sync_id doesn't match.
Expected behavior: after stopping indexing and performing a successful synced flush, the sync_ids stay the same.
Actual behavior: after stopping indexing and performing a successful synced flush, the sync_id changes when the index becomes inactive.
Steps to reproduce:
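Based on the description above, the behavior can be observed with a sequence along these lines. This is a sketch rather than the reporter's original steps: the index name, document, and helper function are illustrative, while the REST endpoints (_flush/synced, _stats?level=shards) are the standard 5.x API.

```python
import time
import requests

ES = "http://localhost:9200"   # assumes a local 5.5.x test cluster
INDEX = "sync-id-test"         # illustrative index name

# 1. Index a document, then stop all indexing.
requests.put(f"{ES}/{INDEX}/doc/1", json={"field": "value"})

def sync_ids():
    # Collect the sync_id of every shard copy from the commit user data.
    stats = requests.get(f"{ES}/{INDEX}/_stats?level=shards").json()
    shards = stats["indices"][INDEX]["shards"]
    return {
        f"{shard}[{i}]": copy["commit"]["user_data"].get("sync_id")
        for shard, copies in shards.items()
        for i, copy in enumerate(copies)
    }

# 2. Issue a synced flush and record the sync_ids it assigned.
requests.post(f"{ES}/{INDEX}/_flush/synced")
before = sync_ids()

# 3. Wait for the index to become inactive (the inactive interval defaults to
#    a few minutes), then read the sync_ids again. On 5.5.2 they have changed
#    even though nothing was written in between.
time.sleep(6 * 60)
after = sync_ids()
print(before == after)  # expected: True; observed on affected versions: False
```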
Provide logs (if relevant):
With debug logging enabled, something like this is printed: