@@ -103,93 +103,3 @@ and then started again then it will automatically recover, such as during a
<<restart-upgrade,full cluster restart>>. There is no need to take any further
action with the APIs described here in these cases, because the set of master
nodes is not changing permanently.
-
- [float]
- ==== Automatic changes to the voting configuration
-
- Nodes may join or leave the cluster, and Elasticsearch reacts by automatically
- making corresponding changes to the voting configuration in order to ensure that
- the cluster is as resilient as possible.
-
- The default auto-reconfiguration behaviour is expected to give the best results
- in most situations. The current voting configuration is stored in the cluster
- state so you can inspect its contents as follows:
-
- [source,js]
- --------------------------------------------------
- GET /_cluster/state?filter_path=metadata.cluster_coordination.last_committed_config
- --------------------------------------------------
- // CONSOLE
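-
- The response contains only the committed voting configuration. As an
- illustration, on a three-node cluster the filtered response might look
- something like the following, where the node IDs shown are placeholders:
-
- [source,js]
- --------------------------------------------------
- {
-   "metadata": {
-     "cluster_coordination": {
-       "last_committed_config": [
-         "node-id-1",
-         "node-id-2",
-         "node-id-3"
-       ]
-     }
-   }
- }
- --------------------------------------------------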
-
- NOTE: The current voting configuration is not necessarily the same as the set of
- all available master-eligible nodes in the cluster. Altering the voting
- configuration involves taking a vote, so it takes some time to adjust the
- configuration as nodes join or leave the cluster. Also, there are situations
- where the most resilient configuration includes unavailable nodes, or does not
- include some available nodes, and in these situations the voting configuration
- differs from the set of available master-eligible nodes in the cluster.
-
- Larger voting configurations are usually more resilient, so Elasticsearch
- normally prefers to add master-eligible nodes to the voting configuration after
- they join the cluster. Similarly, if a node in the voting configuration
- leaves the cluster and there is another master-eligible node in the cluster that
- is not in the voting configuration then it is preferable to swap these two nodes
- over. The size of the voting configuration is thus unchanged but its
- resilience increases.
-
- It is not so straightforward to automatically remove nodes from the voting
- configuration after they have left the cluster. Different strategies have
- different benefits and drawbacks, so the right choice depends on how the cluster
- will be used. You can control whether the voting configuration automatically shrinks by using the following setting:
-
-
- `cluster.auto_shrink_voting_configuration`::
-
-     Defaults to `true`, meaning that the voting configuration will automatically
-     shrink, shedding departed nodes, as long as it still contains at least 3
-     nodes. If set to `false`, the voting configuration never automatically
-     shrinks; departed nodes must be removed manually using the
-     <<modules-discovery-adding-removing-nodes,voting configuration exclusions API>>.
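-
- This setting can be updated dynamically. As a minimal sketch, the following
- request disables automatic shrinking via the cluster settings API:
-
- [source,js]
- --------------------------------------------------
- PUT /_cluster/settings
- {
-   "persistent": {
-     "cluster.auto_shrink_voting_configuration": false
-   }
- }
- --------------------------------------------------
- // CONSOLE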
-
- NOTE: If `cluster.auto_shrink_voting_configuration` is set to `true`, the
- recommended and default setting, and there are at least three master-eligible
- nodes in the cluster, then Elasticsearch remains capable of processing
- cluster-state updates as long as all but one of its master-eligible nodes are
- healthy.
-
- There are situations in which Elasticsearch might tolerate the loss of multiple
- nodes, but this is not guaranteed under all sequences of failures. If this
- setting is set to `false` then departed nodes must be removed from the voting
- configuration manually, using the
- <<modules-discovery-adding-removing-nodes,voting configuration exclusions API>>,
- to achieve the desired level of resilience.
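-
- For example, here is a minimal sketch of removing a departed node by hand. It
- assumes a node named `node_name` and the form of the exclusions API that takes
- the node name as a path parameter; check the reference linked above for the
- exact form in your version:
-
- [source,js]
- --------------------------------------------------
- # Exclude node_name from the voting configuration, then clear the
- # exclusions list once the node has been permanently removed.
- POST /_cluster/voting_config_exclusions/node_name
- DELETE /_cluster/voting_config_exclusions
- --------------------------------------------------
- // CONSOLE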
-
- No matter how it is configured, Elasticsearch will not suffer from a
- "split-brain" inconsistency. The `cluster.auto_shrink_voting_configuration`
- setting affects only the cluster's availability in the event of the failure of
- some of its nodes, and the administrative tasks that must be performed as nodes
- join and leave the cluster.
-
- [float]
- ==== Even numbers of master-eligible nodes
-
- There should normally be an odd number of master-eligible nodes in a cluster.
- If there is an even number, Elasticsearch leaves one of them out of the voting
- configuration to ensure that it has an odd size. This omission does not
- decrease the failure-tolerance of the cluster; in fact, it improves it
- slightly: if the cluster suffers from a network partition that divides it into
- two equally-sized halves then one of the halves will contain a majority of the
- voting configuration and will be able to keep operating. If all of the
- master-eligible nodes' votes were counted, neither side would contain a strict
- majority of the nodes and so the cluster would not be able to make any
- progress.
-
- For instance, if there are four master-eligible nodes in the cluster and the
- voting configuration contains all of them, any quorum-based decision requires
- votes from at least three of them, which means that the cluster can tolerate
- the loss of only a single master-eligible node. If this cluster were split
- into two equal halves, neither half would contain three master-eligible nodes
- and the cluster would not be able to make any progress. If the voting
- configuration contains only three of the four master-eligible nodes, however,
- the cluster is still only fully tolerant of the loss of one node, but
- quorum-based decisions require votes from two of the three voting nodes. In
- the event of an even split, one half will contain two of the three voting
- nodes so that half will remain available.