From d54819e2800a3fef20934ab15ff9527254db3c7f Mon Sep 17 00:00:00 2001 From: David Turner Date: Fri, 16 Mar 2018 16:50:27 +0000 Subject: [PATCH 1/2] Update allocation awareness docs Today, the docs imply that if multiple attributes are specified the whole combination of values is considered as a single entity when performing allocation. In fact, each attribute is considered separately. This change fixes this discrepancy. It also replaces the use of the term "awareness zone" with "zone or domain", and reformats some paragraphs to the right width. Fixes #29105 --- .../cluster/allocation_awareness.asciidoc | 50 +++++++++---------- 1 file changed, 25 insertions(+), 25 deletions(-) diff --git a/docs/reference/modules/cluster/allocation_awareness.asciidoc b/docs/reference/modules/cluster/allocation_awareness.asciidoc index 8ffa4e6b06d2a..e996fcfa670d3 100644 --- a/docs/reference/modules/cluster/allocation_awareness.asciidoc +++ b/docs/reference/modules/cluster/allocation_awareness.asciidoc @@ -2,8 +2,8 @@ === Shard Allocation Awareness When running nodes on multiple VMs on the same physical server, on multiple -racks, or across multiple awareness zones, it is more likely that two nodes on -the same physical server, in the same rack, or in the same awareness zone will +racks, or across multiple zones or domains, it is more likely that two nodes on +the same physical server, in the same rack, or in the same zone or domain will crash at the same time, rather than two unrelated nodes crashing simultaneously. @@ -25,7 +25,7 @@ attribute called `rack_id` -- we could use any attribute name. For example: ---------------------- <1> This setting could also be specified in the `elasticsearch.yml` config file. -Now, we need to setup _shard allocation awareness_ by telling Elasticsearch +Now, we need to set up _shard allocation awareness_ by telling Elasticsearch which attributes to use.
This can be configured in the `elasticsearch.yml` file on *all* master-eligible nodes, or it can be set (and changed) with the <> API. @@ -37,16 +37,16 @@ For our example, we'll set the value in the config file: cluster.routing.allocation.awareness.attributes: rack_id -------------------------------------------------------- -With this config in place, let's say we start two nodes with `node.attr.rack_id` -set to `rack_one`, and we create an index with 5 primary shards and 1 replica -of each primary. All primaries and replicas are allocated across the two -nodes. +With this config in place, let's say we start two nodes with +`node.attr.rack_id` set to `rack_one`, and we create an index with 5 primary +shards and 1 replica of each primary. All primaries and replicas are +allocated across the two nodes. Now, if we start two more nodes with `node.attr.rack_id` set to `rack_two`, Elasticsearch will move shards across to the new nodes, ensuring (if possible) -that no two copies of the same shard will be in the same rack. However if `rack_two` -were to fail, taking down both of its nodes, Elasticsearch will still allocate the lost -shard copies to nodes in `rack_one`. +that no two copies of the same shard will be in the same rack. However, if +`rack_two` were to fail, taking down both of its nodes, Elasticsearch will +still allocate the lost shard copies to nodes in `rack_one`. .Prefer local shards ********************************************* @@ -58,21 +58,21 @@ awareness zones. ********************************************* -Multiple awareness attributes can be specified, in which case the combination -of values from each attribute is considered to be a separate value. +Multiple awareness attributes can be specified, in which case each attribute +is considered separately when deciding where to allocate the shards.
[source,yaml] ------------------------------------------------------------- cluster.routing.allocation.awareness.attributes: rack_id,zone ------------------------------------------------------------- -NOTE: When using awareness attributes, shards will not be allocated to -nodes that don't have values set for those attributes. +NOTE: When using awareness attributes, shards will not be allocated to nodes +that don't have values set for those attributes. -NOTE: Number of primary/replica of a shard allocated on a specific group -of nodes with the same awareness attribute value is determined by the number -of attribute values. When the number of nodes in groups is unbalanced and -there are many replicas, replica shards may be left unassigned. +NOTE: The number of copies of a shard allocated to a specific group of +nodes with the same awareness attribute value is determined by the number of +attribute values. When the number of nodes in groups is unbalanced and there +are many replicas, replica shards may be left unassigned. [float] [[forced-awareness]] === Forced Awareness @@ -91,9 +91,9 @@ remaining zone to be overloaded. Forced awareness solves this problem by *NEVER* allowing copies of the same shard to be allocated to the same zone. -For example, lets say we have an awareness attribute called `zone`, and -we know we are going to have two zones, `zone1` and `zone2`. Here is how -we can force awareness on a node: +For example, let's say we have an awareness attribute called `zone`, and we +know we are going to have two zones, `zone1` and `zone2`. Here is how we can +force awareness on a node: [source,yaml] ------------------------------------------------------------------- @@ -102,10 +102,10 @@ cluster.routing.allocation.awareness.attributes: zone ------------------------------------------------------------------- <1> We must list all possible values that the `zone` attribute can have.
-Now, if we start 2 nodes with `node.attr.zone` set to `zone1` and create an index -with 5 shards and 1 replica. The index will be created, but only the 5 primary -shards will be allocated (with no replicas). Only when we start more nodes -with `node.attr.zone` set to `zone2` will the replicas be allocated. +Now, say we start 2 nodes with `node.attr.zone` set to `zone1` and create an +index with 5 shards and 1 replica. The index will be created, but only the 5 +primary shards will be allocated (with no replicas). Only when we start more +nodes with `node.attr.zone` set to `zone2` will the replicas be allocated. The `cluster.routing.allocation.awareness.*` settings can all be updated dynamically on a live cluster with the From b2742ef733fbbeb7c6ef9f62bc907f87f17a7aed Mon Sep 17 00:00:00 2001 From: David Turner Date: Sat, 17 Mar 2018 09:00:24 +0000 Subject: [PATCH 2/2] Missed a couple of refs to 'awareness zone' --- .../modules/cluster/allocation_awareness.asciidoc | 12 ++++++------ 1 file changed, 6 insertions(+), 6 deletions(-) diff --git a/docs/reference/modules/cluster/allocation_awareness.asciidoc b/docs/reference/modules/cluster/allocation_awareness.asciidoc index e996fcfa670d3..9eb47e0730c93 100644 --- a/docs/reference/modules/cluster/allocation_awareness.asciidoc +++ b/docs/reference/modules/cluster/allocation_awareness.asciidoc @@ -53,8 +53,8 @@ still allocate the lost shard copies to nodes in `rack_one`. When executing search or GET requests, with shard awareness enabled, Elasticsearch will prefer using local shards -- shards in the same awareness -group -- to execute the request. This is usually faster than crossing racks or -awareness zones. +group -- to execute the request. This is usually faster than crossing rack or +zone boundaries. ********************************************* @@ -78,10 +78,10 @@ are many replicas, replica shards may be left unassigned.
[[forced-awareness]] === Forced Awareness -Imagine that you have two awareness zones and enough hardware across the two -zones to host all of your primary and replica shards. But perhaps the -hardware in a single zone, while sufficient to host half the shards, would be -unable to host *ALL* the shards. +Imagine that you have two zones and enough hardware across the two zones to +host all of your primary and replica shards. But perhaps the hardware in a +single zone, while sufficient to host half the shards, would be unable to host +*ALL* the shards. With ordinary awareness, if one zone lost contact with the other zone, Elasticsearch would assign all of the missing replica shards to a single zone.
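
The first patch notes that the `cluster.routing.allocation.awareness.*` settings can all be updated dynamically on a live cluster. As a sketch only (assuming the standard cluster update settings endpoint, i.e. a `PUT` to `_cluster/settings`; the attribute names and zone values simply mirror the YAML examples in the patched page), the request body for enabling awareness and forced awareness dynamically might look like:

```json
{
  "persistent": {
    "cluster.routing.allocation.awareness.attributes": "rack_id,zone",
    "cluster.routing.allocation.awareness.force.zone.values": "zone1,zone2"
  }
}
```

Using `persistent` rather than `transient` keeps the awareness configuration across full cluster restarts; either section accepts these keys.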