
Commit 80278bb

[DOCS] Add operator privileges to ML settings (#69766) (#69898)
1 parent 1e33867 commit 80278bb

1 file changed: +26 −10 lines changed

docs/reference/settings/ml-settings.asciidoc
@@ -81,7 +81,8 @@ memory that {ml} may use for running analytics processes. (These processes are
 separate to the {es} JVM.) Defaults to `30` percent. The limit is based on the
 total memory of the machine, not current free memory. Jobs are not allocated to
 a node if doing so would cause the estimated memory use of {ml} jobs to exceed
-the limit.
+the limit. When the {operator-feature} is enabled, this setting can be updated
+only by operator users.
 
 `xpack.ml.max_model_memory_limit`::
 (<<cluster-update-settings,Dynamic>>) The maximum `model_memory_limit` property
@@ -107,16 +108,18 @@ higher. The maximum permitted value is `512`.
 (<<cluster-update-settings,Dynamic>>) The rate at which the nightly maintenance
 task deletes expired model snapshots and results. The setting is a proxy to the
 <<docs-delete-by-query-throttle,requests_per_second>> parameter used in the
-delete by query requests and controls throttling. Valid values must be greater
-than `0.0` or equal to `-1.0` where `-1.0` means a default value is used.
-Defaults to `-1.0`
+delete by query requests and controls throttling. When the {operator-feature} is
+enabled, this setting can be updated only by operator users. Valid values must
+be greater than `0.0` or equal to `-1.0` where `-1.0` means a default value is
+used. Defaults to `-1.0`
 
 `xpack.ml.node_concurrent_job_allocations`::
 (<<cluster-update-settings,Dynamic>>) The maximum number of jobs that can
 concurrently be in the `opening` state on each node. Typically, jobs spend a
 small amount of time in this state before they move to `open` state. Jobs that
 must restore large models when they are opening spend more time in the `opening`
-state. Defaults to `2`.
+state. When the {operator-feature} is enabled, this setting can be updated only
+by operator users. Defaults to `2`.
 
 [discrete]
 [[advanced-ml-settings]]
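(The hunks in this commit all annotate dynamic cluster settings. As an illustrative sketch, not part of the commit itself: a dynamic {ml} setting such as the one above is changed through the cluster update settings API, and when the {operator-feature} is enabled this request succeeds only for an operator user:

```console
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.nightly_maintenance_requests_per_second": -1.0
  }
}
```

Non-operator users receive an authorization error for operator-only settings once the feature is enabled.)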
@@ -126,7 +129,8 @@ These settings are for advanced use cases; the default values are generally
 sufficient:
 
 `xpack.ml.enable_config_migration`::
-(<<cluster-update-settings,Dynamic>>) Reserved.
+(<<cluster-update-settings,Dynamic>>) Reserved. When the {operator-feature} is
+enabled, this setting can be updated only by operator users.
 
 `xpack.ml.max_anomaly_records`::
 (<<cluster-update-settings,Dynamic>>) The maximum number of records that are
@@ -141,7 +145,8 @@ assumed that there are no more lazy nodes available as the desired number
 of nodes have already been provisioned. If a job is opened and this setting has
 a value greater than zero and there are no nodes that can accept the job, the
 job stays in the `OPENING` state until a new {ml} node is added to the cluster
-and the job is assigned to run on that node.
+and the job is assigned to run on that node. When the {operator-feature} is
+enabled, this setting can be updated only by operator users.
 +
 IMPORTANT: This setting assumes some external process is capable of adding {ml}
 nodes to the cluster. This setting is only useful when used in conjunction with
@@ -153,16 +158,25 @@ The maximum node size for {ml} nodes in a deployment that supports automatic
 cluster scaling. Defaults to `0b`, which means this value is ignored. If you set
 it to the maximum possible size of future {ml} nodes, when a {ml} job is
 assigned to a lazy node it can check (and fail quickly) when scaling cannot
-support the size of the job.
+support the size of the job. When the {operator-feature} is enabled, this
+setting can be updated only by operator users.
+
+`xpack.ml.persist_results_max_retries`::
+(<<cluster-update-settings,Dynamic>>) The maximum number of times to retry bulk
+indexing requests that fail while processing {ml} results. If the limit is
+reached, the {ml} job stops processing data and its status is `failed`. When the
+{operator-feature} is enabled, this setting can be updated only by operator
+users. Defaults to `20`. The maximum value for this setting is `50`.
 
 `xpack.ml.process_connect_timeout`::
 (<<cluster-update-settings,Dynamic>>) The connection timeout for {ml} processes
 that run separately from the {es} JVM. Defaults to `10s`. Some {ml} processing
 is done by processes that run separately to the {es} JVM. When such processes
 are started they must connect to the {es} JVM. If such a process does not
 connect within the time period specified by this setting then the process is
-assumed to have failed. Defaults to `10s`. The minimum value for this setting is
-`5s`.
+assumed to have failed. When the {operator-feature} is enabled, this setting can
+be updated only by operator users. Defaults to `10s`. The minimum value for this
+setting is `5s`.
 
 xpack.ml.use_auto_machine_memory_percent::
 (<<cluster-update-settings,Dynamic>>) If this setting is `true`, the
@@ -171,6 +185,8 @@ percentage of the machine's memory that can be used for running {ml} analytics
 processes is calculated automatically and takes into account the total node size
 and the size of the JVM on the node. The default value is `false`. If this
 setting differs between nodes, the value on the current master node is heeded.
+When the {operator-feature} is enabled, this setting can be updated only by
+operator users.
 +
 TIP: If you do not have dedicated {ml} nodes (that is to say, the node has
 multiple roles), do not enable this setting. Its calculations assume that {ml}
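(A usage sketch, not part of this commit: the current and default values of the settings documented above can be inspected at runtime with the cluster get settings API; `include_defaults` and `filter_path` are standard request parameters, and the `*.xpack.ml*` filter pattern here is an illustrative assumption:

```console
GET _cluster/settings?include_defaults=true&filter_path=*.xpack.ml*
```

This is a convenient way to confirm an operator-only update took effect.)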
