Commit e466ed0

Add headings
1 parent 14194c6 commit e466ed0

1 file changed: +28 -24 lines changed

docs/reference/modules/discovery/adding-removing-nodes.asciidoc

Lines changed: 28 additions & 24 deletions
@@ -3,18 +3,23 @@
 
 As nodes are added or removed Elasticsearch maintains an optimal level of fault
 tolerance by automatically updating the cluster's _voting configuration_, which
-is the set of <<master-node,master-eligible nodes>> whose responses are counted when making
-decisions such as electing a new master or committing a new cluster state.
+is the set of <<master-node,master-eligible nodes>> whose responses are counted
+when making decisions such as electing a new master or committing a new cluster
+state.
 
 It is recommended to have a small and fixed number of master-eligible nodes in a
 cluster, and to scale the cluster up and down by adding and removing
 master-ineligible nodes only. However there are situations in which it may be
 desirable to add or remove some master-eligible nodes to or from a cluster.
 
+==== Adding master-eligible nodes
+
 If you wish to add some master-eligible nodes to your cluster, simply configure
 the new nodes to find the existing cluster and start them up. Elasticsearch will
 add the new nodes to the voting configuration if it is appropriate to do so.
 
+==== Removing master-eligible nodes
+
 When removing master-eligible nodes, it is important not to remove too many all
 at the same time. For instance, if there are currently seven master-eligible
 nodes and you wish to reduce this to three, it is not possible simply to stop
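
To check that newly started master-eligible nodes really have been taken into the voting configuration, you can inspect the cluster state. This is an illustrative sketch rather than part of the commit above; it assumes the committed voting configuration is exposed under `metadata.cluster_coordination` in the cluster state, which may differ between versions:

[source,js]
--------------------------------------------------
GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
--------------------------------------------------

The response should list the node IDs whose votes are currently counted.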
@@ -24,23 +29,22 @@ cannot take any further actions.
 
 As long as there are at least three master-eligible nodes in the cluster, as a
 general rule it is best to remove nodes one-at-a-time, allowing enough time for
-the cluster to <<modules-discovery-quorums,auto-adjust>> the voting
+the cluster to <<modules-discovery-quorums,automatically adjust>> the voting
 configuration and adapt the fault tolerance level to the new set of nodes.
 
 If there are only two master-eligible nodes remaining then neither node can be
-safely removed since both are required to reliably make progress. You must
-first inform Elasticsearch that one of the nodes should not be part of the
-voting configuration, and that the voting power should instead be given to
-other nodes. You can then take the excluded node offline without preventing
-the other node from making progress. A node which is added to a voting
-configuration exclusion list still works normally, but Elasticsearch
-tries to remove it from the voting configuration so its vote is no longer required.
-Importantly, Elasticsearch will never automatically move a node on the voting
-exclusions list back into the voting configuration. Once an excluded node has
-been successfully auto-reconfigured out of the voting configuration, it is safe
-to shut it down without affecting the cluster's master-level availability. A
-node can be added to the voting configuration exclusion list using the
-following API:
+safely removed since both are required to reliably make progress. You must first
+inform Elasticsearch that one of the nodes should not be part of the voting
+configuration, and that the voting power should instead be given to other nodes.
+You can then take the excluded node offline without preventing the other node
+from making progress. A node which is added to a voting configuration exclusion
+list still works normally, but Elasticsearch tries to remove it from the voting
+configuration so its vote is no longer required. Importantly, Elasticsearch
+will never automatically move a node on the voting exclusions list back into the
+voting configuration. Once an excluded node has been successfully
+auto-reconfigured out of the voting configuration, it is safe to shut it down
+without affecting the cluster's master-level availability. A node can be added
+to the voting configuration exclusion list using the following API:
 
 [source,js]
 --------------------------------------------------
@@ -58,14 +62,14 @@ POST /_cluster/voting_config_exclusions/node_name?timeout=1m
 
 The node that should be added to the exclusions list is specified using
 <<cluster-nodes,node filters>> in place of `node_name` here. If a call to the
-voting configuration exclusions API fails, you can safely retry it.
-Only a successful response guarantees that the node has actually been removed
-from the voting configuration and will not be reinstated.
+voting configuration exclusions API fails, you can safely retry it. Only a
+successful response guarantees that the node has actually been removed from the
+voting configuration and will not be reinstated.
 
 Although the voting configuration exclusions API is most useful for down-scaling
 a two-node to a one-node cluster, it is also possible to use it to remove
-multiple master-eligible nodes all at the same time. Adding multiple nodes
-to the exclusions list has the system try to auto-reconfigure all of these nodes
+multiple master-eligible nodes all at the same time. Adding multiple nodes to
+the exclusions list has the system try to auto-reconfigure all of these nodes
 out of the voting configuration, allowing them to be safely shut down while
 keeping the cluster available. In the example described above, shrinking a
 seven-master-node cluster down to only have three master nodes, you could add
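
As a sketch of the multi-node case mentioned above (not part of the commit, with hypothetical node names), several nodes can be placed on the exclusions list in a single call by passing a comma-separated node filter in place of the single `node_name`:

[source,js]
--------------------------------------------------
POST /_cluster/voting_config_exclusions/master_node_4,master_node_5,master_node_6,master_node_7?timeout=1m
--------------------------------------------------

Once these nodes have been auto-reconfigured out of the voting configuration, they can be shut down together while the cluster stays available.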
@@ -103,9 +107,9 @@ maintenance is complete. Clusters should have no voting configuration exclusions
 in normal operation.
 
 If a node is excluded from the voting configuration because it is to be shut
-down permanently, its exclusion can be removed after it is shut down and
-removed from the cluster. Exclusions can also be cleared if they were
-created in error or were only required temporarily:
+down permanently, its exclusion can be removed after it is shut down and removed
+from the cluster. Exclusions can also be cleared if they were created in error
+or were only required temporarily:
 
 [source,js]
 --------------------------------------------------
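
The body of this final snippet falls outside the hunk shown. For reference, a sketch of the call that clears the exclusions list, assuming the `DELETE` form of the same endpoint (not part of this commit):

[source,js]
--------------------------------------------------
DELETE /_cluster/voting_config_exclusions
--------------------------------------------------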
