@@ -117,76 +117,75 @@ copying data from the primary.
 To search a snapshot, you must first mount it locally as an index. Usually
 {ilm-init} will do this automatically, but you can also call the
 <<searchable-snapshots-api-mount-snapshot,mount snapshot>> API yourself. There
-are two options for mounting a snapshot, each with different performance
-characteristics and local storage footprints:
+are two options for mounting an index from a snapshot, each with different
+performance characteristics and local storage footprints:
 
-[[full-copy]]
-Full copy::
+[[fully-mounted]]
+Fully mounted index::
 Loads a full copy of the snapshotted index's shards onto node-local storage
-within the cluster. This is the default mount option. {ilm-init} uses this
-option by default in the `hot` and `cold` phases.
+within the cluster. {ilm-init} uses this option in the `hot` and `cold` phases.
 +
-Search performance for a full-copy searchable snapshot index is normally
+Search performance for a fully mounted index is normally
 comparable to a regular index, since there is minimal need to access the
 snapshot repository. While recovery is ongoing, search performance may be
 slower than with a regular index because a search may need some data that has
 not yet been retrieved into the local copy. If that happens, {es} will eagerly
 retrieve the data needed to complete the search in parallel with the ongoing
 recovery.
 
-[[shared-cache]]
-Shared cache::
+[[partially-mounted]]
+Partially mounted index::
 Uses a local cache containing only recently searched parts of the snapshotted
-index's data. {ilm-init} uses this option by default in the `frozen` phase and
-corresponding frozen tier.
+index's data. This cache has a fixed size and is shared across nodes in the
+frozen tier. {ilm-init} uses this option in the `frozen` phase.
 +
 If a search requires data that is not in the cache, {es} fetches the missing
 data from the snapshot repository. Searches that require these fetches are
 slower, but the fetched data is stored in the cache so that similar searches
 can be served more quickly in future. {es} will evict infrequently used data
 from the cache to free up space.
 +
-Although slower than a full local copy or a regular index, a shared-cache
-searchable snapshot index still returns search results quickly, even for large
-data sets, because the layout of data in the repository is heavily optimized
-for search. Many searches will need to retrieve only a small subset of the
-total shard data before returning results.
-
-To mount a searchable snapshot index with the shared cache mount option, you
-must have one or more nodes with a shared cache available. By default,
-dedicated frozen data tier nodes (nodes with the `data_frozen` role and no other
-data roles) have a shared cache configured using the greater of 90% of total
-disk space and total disk space subtracted a headroom of 100GB.
+Although slower than a fully mounted index or a regular index, a
+partially mounted index still returns search results quickly, even for
+large data sets, because the layout of data in the repository is heavily
+optimized for search. Many searches will need to retrieve only a small subset of
+the total shard data before returning results.
++
+To partially mount an index, you must have one or more nodes with a shared cache
+available. By default, dedicated frozen data tier nodes (nodes with the
+`data_frozen` role and no other data roles) have a shared cache configured using
+the greater of 90% of total disk space and total disk space minus a
+headroom of 100GB.
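+
+For example, the following request is a minimal sketch that uses the
+<<searchable-snapshots-api-mount-snapshot,mount snapshot>> API to partially
+mount an index; the repository, snapshot, and index names are illustrative. The
+`storage` query parameter selects the mount option: `full_copy` (the default)
+fully mounts the index, while `shared_cache` partially mounts it.
+
+[source,console]
+----
+POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true&storage=shared_cache
+{
+  "index": "my-index"
+}
+----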
 
 Using a dedicated frozen tier is highly recommended for production use. If you
 do not have a dedicated frozen tier, you must configure the
 `xpack.searchable.snapshot.shared_cache.size` setting to reserve space for the
-cache on one or more nodes. Indices mounted with the shared cache mount option
+cache on one or more nodes. Partially mounted indices
 are only allocated to nodes that have a shared cache.
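+
+For example, a dedicated frozen data tier node only needs the `data_frozen`
+role, and its shared cache is then sized automatically using the defaults
+described above. This `elasticsearch.yml` snippet is a minimal sketch:
+
+[source,yaml]
+----
+node.roles: [ data_frozen ]
+----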
 
 [[searchable-snapshots-shared-cache]]
 `xpack.searchable.snapshot.shared_cache.size`::
 (<<static-cluster-setting,Static>>)
-The size of the space reserved for the shared cache, either specified as a
-percentage of total disk space or an absolute <<byte-units,byte value>>.
-Defaults to 90% of total disk space on dedicated frozen data tier nodes,
-otherwise `0b`.
+Disk space reserved for the shared cache of partially mounted indices.
+Accepts a percentage of total disk space or an absolute
+<<byte-units,byte value>>. Defaults to `90%` of total disk space for dedicated
+frozen data tier nodes. Otherwise defaults to `0b`.
 
 `xpack.searchable.snapshot.shared_cache.size.max_headroom`::
 (<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
-For dedicated frozen tier nodes, the max headroom to maintain. Defaults to 100GB
-on dedicated frozen tier nodes when
-`xpack.searchable.snapshot.shared_cache.size` is not explicitly set, otherwise
--1 (not set). Can only be set when `xpack.searchable.snapshot.shared_cache.size`
-is set as a percentage.
+For dedicated frozen tier nodes, the max headroom to maintain. If
+`xpack.searchable.snapshot.shared_cache.size` is not explicitly set, this
+setting defaults to `100GB`. Otherwise it defaults to `-1` (not set). You can
+only configure this setting if `xpack.searchable.snapshot.shared_cache.size` is
+set as a percentage.
 
 To illustrate how these settings work in concert let us look at two examples
 when using the default values of the settings on a dedicated frozen node:
 
 * A 4000 GB disk will result in a shared cache sized at 3900 GB. 90% of 4000 GB
 is 3600 GB, leaving 400 GB headroom. The default `max_headroom` of 100 GB
 takes effect, and the result is therefore 3900 GB.
-* A 400 GB disk will result in a shard cache sized at 360 GB.
+* A 400 GB disk will result in a shared cache sized at 360 GB.
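+
+Once a node with a shared cache is running, you can check the configured size
+and current utilization of the cache. This request is a minimal sketch and
+assumes an {es} version that includes the searchable snapshots cache stats API:
+
+[source,console]
+----
+GET /_searchable_snapshots/cache/stats
+----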
 
 You can configure the settings in `elasticsearch.yml`:
 
@@ -199,11 +198,6 @@ IMPORTANT: You can only configure these settings on nodes with the
 <<data-frozen-node,`data_frozen`>> role. Additionally, nodes with a shared
 cache can only have a single <<path-settings,data path>>.
 
-You can set `xpack.searchable.snapshot.shared_cache.size` to any size between a
-couple of gigabytes up to 90% of available disk space. We only recommend larger
-sizes if you use the node exclusively on a frozen tier or for searchable
-snapshots.
-
 [discrete]
 [[back-up-restore-searchable-snapshots]]
 === Back up and restore {search-snaps}