@@ -81,7 +81,8 @@ memory that {ml} may use for running analytics processes. (These processes are
separate to the {es} JVM.) Defaults to `30` percent. The limit is based on the
total memory of the machine, not current free memory. Jobs are not allocated to
a node if doing so would cause the estimated memory use of {ml} jobs to exceed
- the limit.
+ the limit. When the {operator-feature} is enabled, this setting can be updated
+ only by operator users.
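
For reference, settings on this page that are marked <<cluster-update-settings,Dynamic>>, such as `xpack.ml.max_machine_memory_percent`, are changed through the cluster update settings API; the `40` below is purely an illustrative value, not a recommendation:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.max_machine_memory_percent": 40
  }
}
----

Values set under `persistent` survive a full cluster restart, while values set under `transient` do not.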

`xpack.ml.max_model_memory_limit`::
(<<cluster-update-settings,Dynamic>>) The maximum `model_memory_limit` property
@@ -107,16 +108,18 @@ higher. The maximum permitted value is `512`.
(<<cluster-update-settings,Dynamic>>) The rate at which the nightly maintenance
task deletes expired model snapshots and results. The setting is a proxy to the
<<docs-delete-by-query-throttle,requests_per_second>> parameter used in the
- delete by query requests and controls throttling. Valid values must be greater
- than `0.0` or equal to `-1.0` where `-1.0` means a default value is used.
- Defaults to `-1.0`
+ delete by query requests and controls throttling. When the {operator-feature} is
+ enabled, this setting can be updated only by operator users. Valid values must
+ be greater than `0.0` or equal to `-1.0` where `-1.0` means a default value is
+ used. Defaults to `-1.0`.
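
For instance, to fall back to the built-in throttling behaviour you can explicitly set the documented default value:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.nightly_maintenance_requests_per_second": -1.0
  }
}
----

Alternatively, sending `null` as the value removes the override entirely, which has the same effect of restoring the default.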

`xpack.ml.node_concurrent_job_allocations`::
(<<cluster-update-settings,Dynamic>>) The maximum number of jobs that can
concurrently be in the `opening` state on each node. Typically, jobs spend a
small amount of time in this state before they move to `open` state. Jobs that
must restore large models when they are opening spend more time in the `opening`
- state. Defaults to `2`.
+ state. When the {operator-feature} is enabled, this setting can be updated only
+ by operator users. Defaults to `2`.
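
To review what these settings currently resolve to, including defaults that have never been overridden, the same API can be queried; `flat_settings` keeps the keys in the dotted form used on this page:

[source,console]
----
GET _cluster/settings?include_defaults=true&flat_settings=true
----

Settings that have been overridden appear under `persistent` or `transient` in the response; everything else is reported under `defaults`.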

[discrete]
[[advanced-ml-settings]]
@@ -126,7 +129,8 @@ These settings are for advanced use cases; the default values are generally
sufficient:

`xpack.ml.enable_config_migration`::
- (<<cluster-update-settings,Dynamic>>) Reserved.
+ (<<cluster-update-settings,Dynamic>>) Reserved. When the {operator-feature} is
+ enabled, this setting can be updated only by operator users.
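
The {operator-feature} referred to throughout this page is enabled per node in `elasticsearch.yml`. A minimal sketch is shown below; the username is a placeholder, and the exact fields accepted in `operator_users.yml` are described in the operator privileges documentation:

[source,yaml]
----
# elasticsearch.yml on every node: turn on the operator privileges feature
xpack.security.operator_privileges.enabled: true

# config/operator_users.yml: users allowed to change operator-only settings
operator:
  - usernames: ["ops_admin"]
----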

`xpack.ml.max_anomaly_records`::
(<<cluster-update-settings,Dynamic>>) The maximum number of records that are
@@ -141,7 +145,8 @@ assumed that there are no more lazy nodes available as the desired number
of nodes have already been provisioned. If a job is opened and this setting has
a value greater than zero and there are no nodes that can accept the job, the
job stays in the `OPENING` state until a new {ml} node is added to the cluster
- and the job is assigned to run on that node.
+ and the job is assigned to run on that node. When the {operator-feature} is
+ enabled, this setting can be updated only by operator users.
+
IMPORTANT: This setting assumes some external process is capable of adding {ml}
nodes to the cluster. This setting is only useful when used in conjunction with
@@ -153,16 +158,25 @@ The maximum node size for {ml} nodes in a deployment that supports automatic
cluster scaling. Defaults to `0b`, which means this value is ignored. If you set
it to the maximum possible size of future {ml} nodes, when a {ml} job is
assigned to a lazy node it can check (and fail quickly) when scaling cannot
- support the size of the job.
+ support the size of the job. When the {operator-feature} is enabled, this
+ setting can be updated only by operator users.
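
`xpack.ml.max_lazy_ml_nodes` and `xpack.ml.max_ml_node_size` are typically configured together when relying on automatic scaling. A sketch with placeholder values, which must reflect what the deployment's scaling can actually provide:

[source,console]
----
PUT _cluster/settings
{
  "persistent": {
    "xpack.ml.max_lazy_ml_nodes": 1,
    "xpack.ml.max_ml_node_size": "16gb"
  }
}
----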
+
+ `xpack.ml.persist_results_max_retries`::
+ (<<cluster-update-settings,Dynamic>>) The maximum number of times to retry bulk
+ indexing requests that fail while processing {ml} results. If the limit is
+ reached, the {ml} job stops processing data and its status is `failed`. When the
+ {operator-feature} is enabled, this setting can be updated only by operator
+ users. Defaults to `20`. The maximum value for this setting is `50`.

`xpack.ml.process_connect_timeout`::
(<<cluster-update-settings,Dynamic>>) The connection timeout for {ml} processes
that run separately from the {es} JVM. Defaults to `10s`. Some {ml} processing
is done by processes that run separately to the {es} JVM. When such processes
are started they must connect to the {es} JVM. If such a process does not
connect within the time period specified by this setting then the process is
- assumed to have failed. Defaults to `10s`. The minimum value for this setting is
- `5s`.
+ assumed to have failed. When the {operator-feature} is enabled, this setting can
+ be updated only by operator users. Defaults to `10s`. The minimum value for this
+ setting is `5s`.

xpack.ml.use_auto_machine_memory_percent::
(<<cluster-update-settings,Dynamic>>) If this setting is `true`, the
@@ -171,6 +185,8 @@ percentage of the machine's memory that can be used for running {ml} analytics
processes is calculated automatically and takes into account the total node size
and the size of the JVM on the node. The default value is `false`. If this
setting differs between nodes, the value on the current master node is heeded.
+ When the {operator-feature} is enabled, this setting can be updated only by
+ operator users.
+
TIP: If you do not have dedicated {ml} nodes (that is to say, the node has
multiple roles), do not enable this setting. Its calculations assume that {ml}