Make index creation more user-friendly #9126
Comments
+1 This will especially make integration testing less trappy! |
+1 this would help us out a lot |
+1 |
Reject sealing an index if no quorum, too? |
I don't think that's a good idea. The worst thing that happens if you do a synced_flush when one of the shards isn't around is that it doesn't affect that shard, meaning that copy can't be restored quickly. There isn't really anything that can be done to get that shard to restore quickly anyway, because it's already offline. I wouldn't want to prevent speeding up recovery on the other copies of the shard. And I think it's safe because if, by some nasty turn of events, one of the down shards ends up coming back and becoming the primary, then the synced flush won't have any effect, because it won't be on the primary. |
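For illustration, a rough sketch of what the synced flush ("seal") discussed above looks like from a client, using the REST API that existed at the time; the host and index name are assumptions, not from the issue:

```python
# Hedged sketch of issuing a synced flush and inspecting the per-index result.
# Assumes a local node at localhost:9200 and an index named "my-index".
import requests

ES = "http://localhost:9200"

resp = requests.post(f"{ES}/my-index/_flush/synced").json()
summary = resp["my-index"]

# Copies that are offline are simply skipped: they get no sync id and will
# recover the slow way, but the flush still speeds up the copies it reached.
print(f"total={summary['total']} successful={summary['successful']} failed={summary['failed']}")
```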
@kimchy promised to work on this today |
This wait-for-quorum should be extended to the open-index API as well, e.g. see #12987 |
@clintongormley and I discussed this again and came up with a plan. There are two issues still left with index creation: the first is that index creation momentarily moves the cluster health status to RED. |
Now that we're testing snippets in the documentation, this user-unfriendliness is leaking into the docs, which makes the issue pretty obvious. So I'd be pretty excited to have this fixed, or to make time to do it myself. |
Previously, index creation would momentarily cause the cluster health to go RED, because the primaries were still being assigned and activated. This commit ensures that when an index is created or an index is being recovered during cluster recovery and it does not have any active allocation ids, then the cluster health status will not go RED, but instead be YELLOW. Relates elastic#9126
If the allocation decision for a primary shard was NO, this should cause the cluster health for the shard to go RED, even if the shard belongs to a newly created index or is part of cluster recovery. Relates elastic#9126
Before returning, index creation now waits for the configured number of shard copies to be started. In the past, a client would create an index and then potentially have to check the cluster health to wait to execute write operations. With the cluster health semantics changing so that index creation does not cause the cluster health to go RED, this change enables waiting for the desired number of active shards to be active before returning from index creation. Relates elastic#9126
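As a minimal sketch of what this change means for a client, the request below creates an index and only returns once the requested number of shard copies is active; the host, index name, and shard/replica counts are illustrative assumptions:

```python
# Sketch of creating an index with wait_for_active_shards, so the call only
# returns once enough shard copies are started (or the timeout expires).
# Host, index name, and counts are assumptions for illustration.
import requests

ES = "http://localhost:9200"

resp = requests.put(
    f"{ES}/my-index",
    params={"wait_for_active_shards": "2", "timeout": "30s"},
    json={"settings": {"number_of_shards": 1, "number_of_replicas": 1}},
)
body = resp.json()

# shards_acknowledged is false if the wait timed out before enough copies
# started; the index still exists in that case, it just isn't ready yet.
print(body.get("acknowledged"), body.get("shards_acknowledged"))
```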
Closed by #19450 |
Just to check, this was released with 5.0.0, right? |
yes, the version label is on the linked PR #19450 that was used to close this issue. |
Today when we create an index we return immediately after executing sanity checks and adding the metadata to the cluster state, but we don't wait for any kind of allocation. As a result an index can be created with more replicas than there are nodes in the cluster, and once it's closed it can't be reopened, since reopening an index requires a quorum of the replicas for each shard. If such an index is reopened, the shards for which no quorum / not enough replicas are found in the cluster will simply not be allocated at all.
Unfortunately not even waiting for yellow will help here, since that means waiting only for the primary to be allocated, which might not be enough in the case of #replicas > 1. There are a couple of things we can do here to improve the situation; one is to add a wait option to the cluster health API to wait for quorum (a rough client-side sketch follows below).
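As a hedged sketch of the client-side workaround the issue describes (and of why waiting for yellow is not enough with more than one replica), a client can block on cluster health for a concrete number of active shard copies; host, index name, and counts are assumptions:

```python
# Sketch of the pre-change workaround: create the index, then poll cluster
# health until enough shard copies are active. Waiting only for yellow would
# just guarantee the primary. Names and counts are illustrative assumptions.
import requests

ES = "http://localhost:9200"

requests.put(
    f"{ES}/my-index",
    json={"settings": {"number_of_shards": 1, "number_of_replicas": 2}},
)

health = requests.get(
    f"{ES}/_cluster/health/my-index",
    params={"wait_for_active_shards": "2", "timeout": "30s"},
).json()

# timed_out reports whether the requested number of copies came up in time.
print(health["status"], health["timed_out"])
```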