[ML] make `xpack.ml.max_ml_node_size` and `xpack.ml.use_auto_machine_memory_percent` dynamically settable #66132
Conversation
Pinging @elastic/ml-core (:ml)

We have developed autoscaling on the assumption that it will only be used in clusters where the ML nodes are dedicated ML nodes. Thus every ML node in the cluster should just have the […]

But I can also see that the old code was problematic, as it assumed that the value of this setting on the current master node applied to the ML nodes in the cluster, when in fact the appropriate value may not have been known at the time when the current master node was started. Another way around that would be to make the setting a dynamic cluster-wide setting. The cluster operator would then set it appropriately, given their knowledge of the infrastructure, via a call to the cluster settings endpoint.
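The dynamic cluster-wide approach suggested above would be exercised through the cluster update settings endpoint. A sketch of what that call might look like (the setting name is real; the choice of a persistent update and the value shown are illustrative, not defaults):

```console
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.use_auto_machine_memory_percent": true
  }
}
```

Because the setting would live in the cluster state rather than in each node's `elasticsearch.yml`, every node (including a master elected later) observes the same value.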
LGTM
…memory_percent` dynamically settable (elastic#66132)

With this commit the following settings are all dynamic:
- `xpack.ml.max_ml_node_size`
- `xpack.ml.use_auto_machine_memory_percent`
- `xpack.ml.max_lazy_ml_nodes`

Since these settings can easily be interrelated, the ability to update a cluster with a single settings call is useful. Additionally, setting some of these values at the node level (on a new node in a mixed cluster) could cause issues with the master attempting to read the newer setting value.

Backport: (#66270)
With this commit the following settings are all dynamic:
- `xpack.ml.max_ml_node_size`
- `xpack.ml.use_auto_machine_memory_percent`
- `xpack.ml.max_lazy_ml_nodes`

Since these settings can easily be interrelated, the ability to update a cluster with a single settings call is useful. Additionally, setting some of these values at the node level (on a new node in a mixed cluster) could cause issues with the master attempting to read the newer setting value.
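With all three settings dynamic, the interrelated values can be updated in one cluster settings call, as the description suggests. A sketch of such a call (the setting names are real; the values are illustrative examples, not recommendations):

```console
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.max_ml_node_size": "64gb",
    "xpack.ml.use_auto_machine_memory_percent": true,
    "xpack.ml.max_lazy_ml_nodes": 3
  }
}
```

Updating them atomically in one request avoids the window where, say, auto memory percent is enabled before the master knows the maximum ML node size.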