Limit index creation rate #20760
Have we run into any situations where someone has actually hit this issue? I don't recall seeing any GitHub issues about it before.
I've seen it come through Elastic's support organization a few times. I expect this hasn't come up on GitHub because Elasticsearch isn't the root cause of the issue.
I think I'd rather limit the total number of indices/shards in a cluster than the creation rate.
Discussed in FixItFriday. There are two issues here: creating too many indices, and creating indices faster than the master can cope with. We suggest adding two safeguards:

- `max_shards_per_node`: This setting would be checked on user actions like create index, restore snapshot, and open index. If the total number of shards in the cluster would be greater than `max_shards_per_node` multiplied by the number of nodes, the action would be rejected. We would default to a high number during 5.x (e.g. 1000), giving sysadmins the ability to set it to whatever makes sense for their cluster, and we can look at lowering this value for 6.0.
- `max_concurrent_index_creations`: This would be a simple counter which counts the number of in-flight index creation requests. New requests which would cause the max to be exceeded would be rejected. The aim of this setting is to avoid queueing up potentially thousands of index creations, which could be caused by erroneously trying to create an index per document. Default e.g. 30.
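As an illustration only (this is not Elasticsearch's implementation, and every name here beyond the two proposed settings is hypothetical), the two safeguards could be sketched as an admission check run by the master before it acts on a request:

```python
import threading

class IndexCreationGuard:
    """Hypothetical sketch of the two proposed safeguards."""

    def __init__(self, max_shards_per_node=1000, max_concurrent_index_creations=30):
        self.max_shards_per_node = max_shards_per_node
        self.max_concurrent = max_concurrent_index_creations
        self._in_flight = 0
        self._lock = threading.Lock()

    def check_shard_limit(self, current_shards, new_shards, node_count):
        # Checked on user actions: create index, restore snapshot, open index.
        # Reject if the cluster total would exceed max_shards_per_node * nodes.
        limit = self.max_shards_per_node * node_count
        if current_shards + new_shards > limit:
            raise RuntimeError(
                f"adding [{new_shards}] shards would exceed the cluster limit "
                f"of [{limit}] ([{self.max_shards_per_node}] per node)")

    def begin_create(self):
        # Reject, rather than queue, requests beyond the in-flight cap, so an
        # index-per-document mistake fails fast instead of piling up work.
        with self._lock:
            if self._in_flight >= self.max_concurrent:
                raise RuntimeError("too many concurrent index creations")
            self._in_flight += 1

    def end_create(self):
        with self._lock:
            self._in_flight -= 1
```

The key design point in the proposal is that both checks reject rather than queue, which pushes back pressure onto the misbehaving client.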
We now have a limit on the number of shards per node in a cluster, thanks to #34892. In short, I think we can close this.
We discussed this today and agreed to close this for the reasons I described above. |
5.0 introduces a per-node limit on the rate of inline script compilations that should help catch the anti-pattern of embedding script parameters in the scripts themselves. I wonder if it is worth adding a master-only limit on the rate of indexes created to catch situations where people accidentally misconfigure an input system and it ends up creating thousands of indexes in quick succession. Such a rate limit would cause indexing to fail with a useful error message, causing back pressure in any queueing system. I think this'd be better than just creating thousands of indexes as fast as we can.
Is this a good idea or a horrible idea?
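For illustration, the rate limit described above could work like a token bucket checked on the master before each index creation. This is a hypothetical sketch, not Elasticsearch's implementation; all names and parameters here are invented:

```python
import time

class IndexCreationRateLimiter:
    """Hypothetical token bucket limiting index creations per minute."""

    def __init__(self, max_per_minute=60, clock=time.monotonic):
        self.capacity = float(max_per_minute)
        self.refill_per_sec = max_per_minute / 60.0
        self.tokens = float(max_per_minute)
        self.clock = clock
        self.last = clock()

    def try_acquire(self):
        # Refill tokens based on elapsed time, capped at the bucket capacity.
        now = self.clock()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens < 1.0:
            # Caller would fail the create-index request with a clear error,
            # producing back pressure in any upstream queueing system.
            return False
        self.tokens -= 1.0
        return True
```

Bursts up to the bucket capacity succeed immediately; a runaway input system creating thousands of indexes in quick succession would start seeing rejections once the bucket drains, instead of the master creating them as fast as it can.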