Additional docs for shared_cache searchable snapshots #70566

Merged: 15 commits, Mar 22, 2021
27 changes: 21 additions & 6 deletions docs/reference/datatiers.asciidoc
@@ -87,12 +87,27 @@ For resiliency, indices in the cold tier can rely on

experimental::[]

Once data is no longer being queried, or being queried rarely, it may move from
the cold tier to the frozen tier where it stays for the rest of its life.

The frozen tier uses <<searchable-snapshots,{search-snaps}>> to store and load
data from a snapshot repository. Instead of using a full local copy of your
data, these {search-snaps} use smaller <<shared-cache,local caches>> containing
only recently searched data. If a search requires data that is not in a cache,
{es} fetches the data as needed from the snapshot repository. This decouples
compute and storage, letting you run searches over very large data sets with
minimal compute resources, which significantly reduces your storage and
operating costs.

The <<ilm-index-lifecycle, frozen phase>> automatically converts data
transitioning into the frozen tier into a shared-cache searchable snapshot.
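
For example, a minimal {ilm-init} policy sketch along these lines (the
repository name `my_repository` is a placeholder) could move indices into the
frozen tier as shared-cache {search-snaps}:

[source,console]
----
PUT _ilm/policy/my-policy
{
  "policy": {
    "phases": {
      "frozen": {
        "min_age": "30d",
        "actions": {
          "searchable_snapshot": {
            "snapshot_repository": "my_repository"
          }
        }
      }
    }
  }
}
----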

Search is typically slower on the frozen tier than the cold tier, because {es}
must sometimes fetch data from the snapshot repository.

NOTE: The frozen tier is not yet available on the {ess-trial}[{ess}]. To
recreate similar functionality, see
<<searchable-snapshots-frozen-tier-on-cloud>>.

[discrete]
[[data-tier-allocation]]
@@ -56,6 +56,7 @@ searches of the mounted index. If `full_copy`, each node holding a shard of the
searchable snapshot index makes a full copy of the shard to its local storage.
If `shared_cache`, the shard uses the
<<searchable-snapshots-shared-cache,shared cache>>. Defaults to `full_copy`.
See <<searchable-snapshot-mount-storage-options>>.
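+
For example, the following request mounts an index using the shared cache. The
repository, snapshot, and index names are placeholders:
+
[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_mount?storage=shared_cache
{
  "index": "my_index"
}
----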

[[searchable-snapshots-api-mount-request-body]]
==== {api-request-body-title}
199 changes: 138 additions & 61 deletions docs/reference/searchable-snapshots/index.asciidoc
@@ -1,55 +1,47 @@
[[searchable-snapshots]]
== {search-snaps-cap}

{search-snaps-cap} let you use <<snapshot-restore,snapshots>> to search
infrequently accessed and read-only data in a very cost-effective fashion. The
<<cold-tier,cold>> and <<frozen-tier,frozen>> data tiers use {search-snaps} to
reduce your storage and operating costs.

{search-snaps-cap} eliminate the need for <<scalability,replica shards>>,
potentially halving the local storage needed to search your data.
{search-snaps-cap} rely on the same snapshot mechanism you already use for
backups and have minimal impact on your snapshot repository storage costs.

[discrete]
[[using-searchable-snapshots]]
=== Using {search-snaps}

Searching a {search-snap} index is the same as searching any other index.

If a node fails and {search-snap} shards need to be recovered elsewhere, there
is a brief window of time while {es} allocates the shards to other nodes where
the cluster health will not be `green`. Searches that hit these shards may fail
or return partial results until the shards are reallocated to healthy nodes.
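
For example, you can wait for the cluster to return to `green` health after
such a reallocation with a request like this:

[source,console]
----
GET _cluster/health?wait_for_status=green&timeout=30s
----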

You typically manage {search-snaps} through {ilm-init}. The
<<ilm-searchable-snapshot, searchable snapshots>> action automatically converts
a regular index into a {search-snap} index when it reaches the `cold` or
`frozen` phase. You can also make indices in existing snapshots searchable by
manually mounting them using the <<searchable-snapshots-api-mount-snapshot,
mount snapshot>> API.
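
For example, the following sketch mounts an index from an existing snapshot
with the default `full_copy` storage option. All names are placeholders:

[source,console]
----
POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true
{
  "index": "my_index"
}
----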

To mount an index from a snapshot that contains multiple indices, we recommend
creating a <<clone-snapshot-api, clone>> of the snapshot that contains only the
index you want to search, and mounting the clone. You should not delete a
snapshot if it has any mounted indices, so creating a clone enables you to
manage the lifecycle of the backup snapshot independently of any
{search-snaps}. If you use {ilm-init} to manage your {search-snaps} then it
will automatically look after cloning the snapshot as needed.
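
For example, the following sketch clones a single index out of a larger
snapshot before mounting it. All names are placeholders:

[source,console]
----
PUT /_snapshot/my_repository/my_snapshot/_clone/my_snapshot_clone
{
  "indices": "my_index"
}
----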

You can control the allocation of the shards of {search-snap} indices using the
same mechanisms as for regular indices. For example, you could use
@@ -60,7 +52,7 @@ We recommend that you <<indices-forcemerge, force-merge>> indices to a single
segment per shard before taking a snapshot that will be mounted as a
{search-snap} index. Each read from a snapshot repository takes time and costs
money, and the fewer segments there are the fewer reads are needed to restore
the snapshot or to respond to a search.
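
For example, you might force-merge an index (the name is a placeholder) to a
single segment per shard before taking the snapshot:

[source,console]
----
POST /my_index/_forcemerge?max_num_segments=1
----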

[TIP]
====
@@ -84,35 +76,104 @@ You can use any of the following repository types with searchable snapshots:
You can also use alternative implementations of these repository types, for
instance
{plugins}/repository-s3-client.html#repository-s3-compatible-services[Minio],
as long as they are fully compatible. You can use the <<repo-analysis-api>> API
to analyze your repository's suitability for use with searchable snapshots.
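
For example, a small analysis run against a repository (the name is a
placeholder) might look like this:

[source,console]
----
POST /_snapshot/my_repository/_analyze?blob_count=10&max_blob_size=1mb
----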

[discrete]
[[how-searchable-snapshots-work]]
=== How {search-snaps} work

When an index is mounted from a snapshot, {es} allocates its shards to data
nodes within the cluster. The data nodes then automatically retrieve the
relevant shard data from the repository onto local storage, based on the
<<searchable-snapshot-mount-storage-options,mount options>> specified. If
possible, searches use data from local storage. If the data is not available
locally, {es} downloads the data that it needs from the snapshot repository.

If a node holding one of these shards fails, {es} automatically allocates the
affected shards to another node, and that node restores the relevant shard data
from the repository. No replicas are needed, and no complicated monitoring or
orchestration is necessary to restore lost shards. Although searchable snapshot
indices have no replicas by default, you may add replicas to these indices by
adjusting `index.number_of_replicas`. Replicas of {search-snap} shards are
recovered by copying data from the snapshot repository, just like primaries of
{search-snap} shards. In contrast, replicas of regular indices are restored by
copying data from the primary.
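
For example, the following sketch adds one replica to a mounted index. The
index name is a placeholder:

[source,console]
----
PUT /my-mounted-index/_settings
{
  "index.number_of_replicas": 1
}
----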

[discrete]
[[searchable-snapshot-mount-storage-options]]
==== Mount options

To search a snapshot, you must first mount it locally as an index. Usually
{ilm-init} will do this automatically, but you can also call the
<<searchable-snapshots-api-mount-snapshot,mount snapshot>> API yourself. There
are two options for mounting a snapshot, each with different performance
characteristics and local storage footprints:

[[full-copy]]
Full copy::
Loads a full copy of the snapshotted index's shards onto node-local storage
within the cluster. This is the default mount option. {ilm-init} uses this
option by default in the `hot` and `cold` phases.
+
Search performance for a full-copy searchable snapshot index is normally
comparable to a regular index, since there is minimal need to access the
snapshot repository. While recovery is ongoing, search performance may be
slower than with a regular index because a search may need some data that has
not yet been retrieved into the local copy. If that happens, {es} will eagerly
retrieve the data needed to complete the search in parallel with the ongoing
recovery.

[[shared-cache]]
Shared cache::
+
experimental::[]
+
Uses a local cache containing only recently searched parts of the snapshotted
index's data. {ilm-init} uses this option by default in the `frozen` phase and
corresponding frozen tier.
+
If a search requires data that is not in the cache, {es} fetches the missing
data from the snapshot repository. Searches that require these fetches are
slower, but the fetched data is stored in the cache so that similar searches
can be served more quickly in future. {es} will evict infrequently used data
from the cache to free up space.
+
Although slower than a full local copy or a regular index, a shared-cache
searchable snapshot index still returns search results quickly, even for large
data sets, because the layout of data in the repository is heavily optimized
for search. Many searches will need to retrieve only a small subset of the
total shard data before returning results.

To mount a searchable snapshot index with the shared cache mount option, you
must configure the `xpack.searchable.snapshot.shared_cache.size` setting to
reserve space for the cache on one or more nodes. Indices mounted with the
shared cache mount option are only allocated to nodes that have this setting
configured.

[[searchable-snapshots-shared-cache]]
`xpack.searchable.snapshot.shared_cache.size`::
(<<static-cluster-setting,Static>>, <<byte-units,byte value>>)
The size of the space reserved for the shared cache. Defaults to `0b`, meaning
that the node has no shared cache.

You can configure the setting in `elasticsearch.yml`:

[source,yaml]
----
xpack.searchable.snapshot.shared_cache.size: 4TB
----

IMPORTANT: Currently, you can configure
`xpack.searchable.snapshot.shared_cache.size` on any node. In a future release,
you will only be able to configure this setting on nodes with the
<<data-frozen-node,`data_frozen`>> role.
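
For example, an `elasticsearch.yml` sketch for a node that you dedicate to the
frozen tier might combine the <<data-frozen-node,`data_frozen`>> role with a
shared cache:

[source,yaml]
----
node.roles: [ data_frozen ]
xpack.searchable.snapshot.shared_cache.size: 4TB
----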

You can set `xpack.searchable.snapshot.shared_cache.size` to any size between a
couple of gigabytes and 90% of the available disk space. We only recommend
higher sizes if you use the node exclusively in the frozen tier or for
searchable snapshots.

[discrete]
[[back-up-restore-searchable-snapshots]]
=== Back up and restore {search-snaps}
@@ -150,18 +211,34 @@ very good protection against data loss or corruption. If you manage your own
repository storage then you are responsible for its reliability.

[discrete]
[[searchable-snapshots-frozen-tier-on-cloud]]
=== Configure a frozen tier on the {ess}

The frozen data tier is not yet available on the {ess-trial}[{ess}]. However,
you can configure another tier to use <<shared-cache,shared snapshot caches>>.
This effectively recreates a frozen tier in your {ess} deployment. Follow these
steps:

. Choose an existing tier to use. Typically, you'll use the cold tier, but the
hot and warm tiers are also supported. You can use this tier as a shared tier,
or you can dedicate the tier exclusively to shared snapshot caches.

. Log in to the {ess-trial}[{ess} Console].

. Select your deployment from the {ess} home page or the deployments page.

. From your deployment menu, select **Edit deployment**.

. On the **Edit** page, click **Edit elasticsearch.yml** under your selected
{es} tier.

. In the `elasticsearch.yml` file, add the
<<searchable-snapshots-shared-cache,`xpack.searchable.snapshot.shared_cache.size`>>
setting. For example:
+
[source,yaml]
----
xpack.searchable.snapshot.shared_cache.size: 50GB
----

. Click **Save** and **Confirm** to apply your configuration changes.