[[disk-allocator]]
=== Disk-based Shard Allocation
- Elasticsearch factors in the available disk space on a node before deciding
- whether to allocate new shards to that node or to actively relocate shards
- away from that node.
+ Elasticsearch considers the available disk space on a node before deciding
+ whether to allocate new shards to that node or to actively relocate shards away
+ from that node.
Below are the settings that can be configured in the `elasticsearch.yml` config
file or updated dynamically on a live cluster with the
@@ -15,29 +15,33 @@ file or updated dynamically on a live cluster with the
`cluster.routing.allocation.disk.watermark.low`::
- Controls the low watermark for disk usage. It defaults to 85%, meaning ES will
- not allocate new shards to nodes once they have more than 85% disk used. It
- can also be set to an absolute byte value (like 500mb) to prevent ES from
- allocating shards if less than the configured amount of space is available.
+ Controls the low watermark for disk usage. It defaults to `85%`, meaning
+ that Elasticsearch will not allocate shards to nodes that have more than
+ 85% disk used. It can also be set to an absolute byte value (like `500mb`)
+ to prevent Elasticsearch from allocating shards if less than the specified
+ amount of space is available. This setting has no effect on the primary
+ shards of newly-created indices or, specifically, any shards that have
+ never previously been allocated.
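
As an illustrative sketch of the dynamic form, the low watermark could be
changed on a live cluster with a transient cluster-settings update; the `80%`
value below is an arbitrary example, not a recommendation:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "80%"
  }
}
--------------------------------------------------
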
`cluster.routing.allocation.disk.watermark.high`::
- Controls the high watermark. It defaults to 90%, meaning ES will attempt to
- relocate shards to another node if the node disk usage rises above 90%. It can
- also be set to an absolute byte value (similar to the low watermark) to
- relocate shards once less than the configured amount of space is available on
- the node.
+ Controls the high watermark. It defaults to `90%`, meaning that
+ Elasticsearch will attempt to relocate shards away from a node whose disk
+ usage is above 90%. It can also be set to an absolute byte value (similarly
+ to the low watermark) to relocate shards away from a node if it has less
+ than the specified amount of free space. This setting affects the
+ allocation of all shards, whether previously allocated or not.
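
Because percentage values and byte values cannot be mixed across these
settings (see the note below), an absolute-value configuration would normally
set all of the watermarks together, including the flood stage watermark
described next. A sketch with arbitrary example thresholds:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.watermark.low": "100gb",
    "cluster.routing.allocation.disk.watermark.high": "50gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "10gb"
  }
}
--------------------------------------------------
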
`cluster.routing.allocation.disk.watermark.flood_stage`::
+
--
- Controls the flood stage watermark. It defaults to 95%, meaning ES enforces
- a read-only index block (`index.blocks.read_only_allow_delete`) on every
- index that has one or more shards allocated on the node that has at least
- one disk exceeding the flood stage. This is a last resort to prevent nodes
- from running out of disk space. The index block must be released manually
- once there is enough disk space available to allow indexing operations to
- continue.
+ Controls the flood stage watermark. It defaults to 95%, meaning that
+ Elasticsearch enforces a read-only index block
+ (`index.blocks.read_only_allow_delete`) on every index that has one or more
+ shards allocated on the node that has at least one disk exceeding the flood
+ stage. This is a last resort to prevent nodes from running out of disk space.
+ The index block must be released manually once there is enough disk space
+ available to allow indexing operations to continue.
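
For instance, once enough disk space has been freed, the block could be
cleared from an affected index with a request along the following lines; the
`twitter` index name is only a placeholder:

[source,js]
--------------------------------------------------
PUT /twitter/_settings
{
  "index.blocks.read_only_allow_delete": null
}
--------------------------------------------------
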
NOTE: You can not mix the usage of percentage values and byte values within
these settings. Either all are set to percentage values, or all are set to byte
@@ -67,12 +71,12 @@ PUT /twitter/_settings
`cluster.routing.allocation.disk.include_relocations`::
Defaults to +true+, which means that Elasticsearch will take into account
- shards that are currently being relocated to the target node when computing a
- node's disk usage. Taking relocating shards' sizes into account may, however ,
- mean that the disk usage for a node is incorrectly estimated on the high side,
- since the relocation could be 90% complete and a recently retrieved disk usage
- would include the total size of the relocating shard as well as the space
- already used by the running relocation.
+ shards that are currently being relocated to the target node when computing
+ a node's disk usage. Taking relocating shards' sizes into account may,
+ however, mean that the disk usage for a node is incorrectly estimated on
+ the high side, since the relocation could be 90% complete and a recently
+ retrieved disk usage would include the total size of the relocating shard
+ as well as the space already used by the running relocation.
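
If that overestimate is a problem for a particular cluster, the relocation
accounting could be switched off dynamically; a sketch:

[source,js]
--------------------------------------------------
PUT _cluster/settings
{
  "transient": {
    "cluster.routing.allocation.disk.include_relocations": false
  }
}
--------------------------------------------------
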
NOTE: Percentage values refer to used disk space, while byte values refer to