
Commit ff4a251

Update experimental labels in the docs (#25727)
Relates #19798

Removed experimental label from:
* Painless
* Diversified Sampler Agg
* Sampler Agg
* Significant Terms Agg
* Terms Agg document count error and execution_hint
* Cardinality Agg precision_threshold
* Pipeline Aggregations
* index.shard.check_on_startup
* index.store.type (added warning)
* Preloading data into the file system cache
* foreach ingest processor
* Field caps API
* Profile API

Added experimental label to:
* Moving Average Agg Prediction

Changed experimental to beta for:
* Adjacency matrix agg
* Normalizers
* Tasks API
* Index sorting

Labelled experimental in Lucene:
* ICU plugin custom rules file
* Flatten graph token filter
* Synonym graph token filter
* Word delimiter graph token filter
* Simple pattern tokenizer
* Simple pattern split tokenizer

Replaced experimental label with warning that details may change in the future:
* Analysis explain output format
* Segments verbose output format
* Percentile Agg compression and HDR Histogram
* Percentile Rank Agg HDR Histogram
1 parent 0d8b753 commit ff4a251


43 files changed: +22 additions, -78 deletions

docs/painless/painless-debugging.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[painless-debugging]]
 === Painless Debugging
 
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
-
 ==== Debug.Explain
 
 Painless doesn't have a

docs/painless/painless-getting-started.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[painless-getting-started]]
 == Getting Started with Painless
 
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
-
 include::painless-description.asciidoc[]
 
 [[painless-examples]]

docs/painless/painless-syntax.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[painless-syntax]]
 === Painless Syntax
 
-experimental[The Painless scripting language is new and is still marked as experimental. The syntax or API may be changed in the future in non-backwards compatible ways if required.]
-
 [float]
 [[control-flow]]
 ==== Control flow

docs/plugins/analysis-icu.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -113,7 +113,7 @@ PUT icu_sample
 
 ===== Rules customization
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 You can customize the `icu-tokenizer` behavior by specifying per-script rule files, see the
 http://userguide.icu-project.org/boundaryanalysis#TOC-RBBI-Rules[RBBI rules syntax reference]

docs/reference/aggregations/bucket/adjacency-matrix-aggregation.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -6,7 +6,7 @@ The request provides a collection of named filter expressions, similar to the `f
 request.
 Each bucket in the response represents a non-empty cell in the matrix of intersecting filters.
 
-experimental[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways]
+beta[The `adjacency_matrix` aggregation is a new feature and we may evolve its design as we get feedback on its use. As a result, the API for this feature may change in non-backwards compatible ways]
 
 
 Given filters named `A`, `B` and `C` the response would return buckets with the following names:

docs/reference/aggregations/bucket/diversified-sampler-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-bucket-diversified-sampler-aggregation]]
 === Diversified Sampler Aggregation
 
-experimental[]
-
 Like the `sampler` aggregation this is a filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.
 The `diversified_sampler` aggregation adds the ability to limit the number of matches that share a common value such as an "author".

docs/reference/aggregations/bucket/sampler-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-bucket-sampler-aggregation]]
 === Sampler Aggregation
 
-experimental[]
-
 A filtering aggregation used to limit any sub aggregations' processing to a sample of the top-scoring documents.
 
 .Example use cases:

docs/reference/aggregations/bucket/significantterms-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -3,8 +3,6 @@
 
 An aggregation that returns interesting or unusual occurrences of terms in a set.
 
-experimental[The `significant_terms` aggregation can be very heavy when run on large indices. Work is in progress to provide more lightweight sampling techniques. As a result, the API for this feature may change in non-backwards compatible ways]
-
 .Example use cases:
 * Suggesting "H5N1" when users search for "bird flu" in text
 * Identifying the merchant that is the "common point of compromise" from the transaction history of credit card owners reporting loss

docs/reference/aggregations/bucket/terms-aggregation.asciidoc

Lines changed: 1 addition & 5 deletions

@@ -197,8 +197,6 @@ could have the 4th highest document count.
 
 ==== Per bucket document count error
 
-experimental[]
-
 The second error value can be enabled by setting the `show_term_doc_count_error` parameter to true. This shows an error value
 for each term returned by the aggregation which represents the 'worst case' error in the document count and can be useful when
 deciding on a value for the `shard_size` parameter. This is calculated by summing the document counts for the last term returned

@@ -728,8 +726,6 @@ collection mode need to replay the query on the second pass but only for the doc
 [[search-aggregations-bucket-terms-aggregation-execution-hint]]
 ==== Execution hint
 
-experimental[The automated execution optimization is experimental, so this parameter is provided temporarily as a way to override the default behaviour]
-
 There are different mechanisms by which terms aggregations can be executed:
 
 - by using field values directly in order to aggregate data per-bucket (`map`)

@@ -767,7 +763,7 @@ in inner aggregations.
 }
 --------------------------------------------------
 
-<1> experimental[] the possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
+<1> The possible values are `map`, `global_ordinals`, `global_ordinals_hash` and `global_ordinals_low_cardinality`
 
 Please note that Elasticsearch will ignore this execution hint if it is not applicable and that there is no backward compatibility guarantee on these hints.

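As context for the hunks above: the `execution_hint` is set directly inside the `terms` body. A minimal request sketch in the docs' own `[source,js]` style (the index name `my_index` and field `tags` are illustrative, not taken from this commit):

[source,js]
--------------------------------------------------
GET /my_index/_search
{
  "aggs": {
    "tags": {
      "terms": {
        "field": "tags",
        "execution_hint": "map"
      }
    }
  }
}
--------------------------------------------------

As the doc text notes, Elasticsearch ignores the hint when it is not applicable, so `map` here is a request, not a guarantee.
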
docs/reference/aggregations/metrics/cardinality-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -43,8 +43,6 @@ Response:
 
 This aggregation also supports the `precision_threshold` option:
 
-experimental[The `precision_threshold` option is specific to the current internal implementation of the `cardinality` agg, which may change in the future]
-
 [source,js]
 --------------------------------------------------
 POST /sales/_search?size=0
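
The `precision_threshold` option whose experimental label is removed above sits inside the `cardinality` body; a sketch continuing the request shown in the hunk (the field name `type` is illustrative):

[source,js]
--------------------------------------------------
POST /sales/_search?size=0
{
  "aggs": {
    "type_count": {
      "cardinality": {
        "field": "type",
        "precision_threshold": 100
      }
    }
  }
}
--------------------------------------------------

Counts below the threshold are expected to be close to exact; above it they become increasingly approximate.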

docs/reference/aggregations/metrics/percentile-aggregation.asciidoc

Lines changed: 1 addition & 3 deletions

@@ -247,8 +247,6 @@ it. It would not be the case on more skewed distributions.
 [[search-aggregations-metrics-percentile-aggregation-compression]]
 ==== Compression
 
-experimental[The `compression` parameter is specific to the current internal implementation of percentiles, and may change in the future]
-
 Approximate algorithms must balance memory utilization with estimation accuracy.
 This balance can be controlled using a `compression` parameter:
 

@@ -287,7 +285,7 @@ the TDigest will use less memory.
 
 ==== HDR Histogram
 
-experimental[]
+NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future.
 
 https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
 that can be useful when calculating percentiles for latency measurements as it can be faster than the t-digest implementation
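
For reference, the HDR Histogram implementation discussed above is selected per request via an `hdr` object on the `percentiles` aggregation; a sketch (the index and field names are illustrative):

[source,js]
--------------------------------------------------
GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "load_time_outlier": {
      "percentiles": {
        "field": "load_time",
        "percents": [95, 99, 99.9],
        "hdr": {
          "number_of_significant_value_digits": 3
        }
      }
    }
  }
}
--------------------------------------------------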

docs/reference/aggregations/metrics/percentile-rank-aggregation.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -159,7 +159,7 @@ This will interpret the `script` parameter as an `inline` script with the `painl
 
 ==== HDR Histogram
 
-experimental[]
+NOTE: This setting exposes the internal implementation of HDR Histogram and the syntax may change in the future.
 
 https://github.com/HdrHistogram/HdrHistogram[HDR Histogram] (High Dynamic Range Histogram) is an alternative implementation
 that can be useful when calculating percentile ranks for latency measurements as it can be faster than the t-digest implementation

docs/reference/aggregations/pipeline.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -2,8 +2,6 @@
 
 == Pipeline Aggregations
 
-experimental[]
-
 Pipeline aggregations work on the outputs produced from other aggregations rather than from document sets, adding
 information to the output tree. There are many different types of pipeline aggregation, each computing different information from
 other aggregations, but these types can be broken down into two families:

docs/reference/aggregations/pipeline/avg-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-avg-bucket-aggregation]]
 === Avg Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates the (mean) average value of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

docs/reference/aggregations/pipeline/bucket-script-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-bucket-script-aggregation]]
 === Bucket Script Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which executes a script which can perform per bucket computations on specified metrics
 in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a numeric value.

docs/reference/aggregations/pipeline/bucket-selector-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-bucket-selector-aggregation]]
 === Bucket Selector Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which executes a script which determines whether the current bucket will be retained
 in the parent multi-bucket aggregation. The specified metric must be numeric and the script must return a boolean value.
 If the script language is `expression` then a numeric return value is permitted. In this case 0.0 will be evaluated as `false`

docs/reference/aggregations/pipeline/cumulative-sum-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-cumulative-sum-aggregation]]
 === Cumulative Sum Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which calculates the cumulative sum of a specified metric in a parent histogram (or date_histogram)
 aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
 for `histogram` aggregations).

docs/reference/aggregations/pipeline/derivative-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-derivative-aggregation]]
 === Derivative Aggregation
 
-experimental[]
-
 A parent pipeline aggregation which calculates the derivative of a specified metric in a parent histogram (or date_histogram)
 aggregation. The specified metric must be numeric and the enclosing histogram must have `min_doc_count` set to `0` (default
 for `histogram` aggregations).

docs/reference/aggregations/pipeline/extended-stats-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-extended-stats-bucket-aggregation]]
 === Extended Stats Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

docs/reference/aggregations/pipeline/max-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-max-bucket-aggregation]]
 === Max Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which identifies the bucket(s) with the maximum value of a specified metric in a sibling aggregation
 and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
 be a multi-bucket aggregation.

docs/reference/aggregations/pipeline/min-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-min-bucket-aggregation]]
 === Min Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which identifies the bucket(s) with the minimum value of a specified metric in a sibling aggregation
 and outputs both the value and the key(s) of the bucket(s). The specified metric must be numeric and the sibling aggregation must
 be a multi-bucket aggregation.

docs/reference/aggregations/pipeline/movavg-aggregation.asciidoc

Lines changed: 2 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-movavg-aggregation]]
 === Moving Average Aggregation
 
-experimental[]
-
 Given an ordered series of data, the Moving Average aggregation will slide a window across the data and emit the average
 value of that window. For example, given the data `[1, 2, 3, 4, 5, 6, 7, 8, 9, 10]`, we can calculate a simple moving
 average with windows size of `5` as follows:

@@ -513,6 +511,8 @@ POST /_search
 
 ==== Prediction
 
+experimental[]
+
 All the moving average model support a "prediction" mode, which will attempt to extrapolate into the future given the
 current smoothed, moving average. Depending on the model and parameter, these predictions may or may not be accurate.

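The prediction mode that gains the experimental label above is driven by the `predict` parameter on the `moving_avg` aggregation; a sketch (the field and aggregation names are illustrative):

[source,js]
--------------------------------------------------
POST /_search
{
  "size": 0,
  "aggs": {
    "my_date_histo": {
      "date_histogram": { "field": "timestamp", "interval": "day" },
      "aggs": {
        "the_sum": { "sum": { "field": "price" } },
        "the_movavg": {
          "moving_avg": { "buckets_path": "the_sum", "predict": 10 }
        }
      }
    }
  }
}
--------------------------------------------------

Here `predict: 10` asks for ten extrapolated buckets beyond the end of the series.
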
docs/reference/aggregations/pipeline/percentiles-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-percentiles-bucket-aggregation]]
 === Percentiles Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates percentiles across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

docs/reference/aggregations/pipeline/serial-diff-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-serialdiff-aggregation]]
 === Serial Differencing Aggregation
 
-experimental[]
-
 Serial differencing is a technique where values in a time series are subtracted from itself at
 different time lags or periods. For example, the datapoint f(x) = f(x~t~) - f(x~t-n~), where n is the period being used.

docs/reference/aggregations/pipeline/stats-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-stats-bucket-aggregation]]
 === Stats Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates a variety of stats across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

docs/reference/aggregations/pipeline/sum-bucket-aggregation.asciidoc

Lines changed: 0 additions & 2 deletions

@@ -1,8 +1,6 @@
 [[search-aggregations-pipeline-sum-bucket-aggregation]]
 === Sum Bucket Aggregation
 
-experimental[]
-
 A sibling pipeline aggregation which calculates the sum across all bucket of a specified metric in a sibling aggregation.
 The specified metric must be numeric and the sibling aggregation must be a multi-bucket aggregation.

docs/reference/analysis/normalizers.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[analysis-normalizers]]
 == Normalizers
 
-experimental[]
+beta[]
 
 Normalizers are similar to analyzers except that they may only emit a single
 token. As a consequence, they do not have a tokenizer and only accept a subset

docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[analysis-flatten-graph-tokenfilter]]
 === Flatten Graph Token Filter
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `flatten_graph` token filter accepts an arbitrary graph token
 stream, such as that produced by

docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[analysis-synonym-graph-tokenfilter]]
 === Synonym Graph Token Filter
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `synonym_graph` token filter allows to easily handle synonyms,
 including multi-word synonyms correctly during the analysis process.

docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[analysis-word-delimiter-graph-tokenfilter]]
 === Word Delimiter Graph Token Filter
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 Named `word_delimiter_graph`, it splits words into subwords and performs
 optional transformations on subword groups. Words are split into

docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[analysis-simplepattern-tokenizer]]
 === Simple Pattern Tokenizer
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `simple_pattern` tokenizer uses a regular expression to capture matching
 text as terms. The set of regular expression features it supports is more

docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[analysis-simplepatternsplit-tokenizer]]
 === Simple Pattern Split Tokenizer
 
-experimental[]
+experimental[This functionality is marked as experimental in Lucene]
 
 The `simple_pattern_split` tokenizer uses a regular expression to split the
 input into terms at pattern matches. The set of regular expression features it

docs/reference/cluster/tasks.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[tasks]]
 == Task Management API
 
-experimental[The Task Management API is new and should still be considered experimental. The API may change in ways that are not backwards compatible]
+beta[The Task Management API is new and should still be considered a beta feature. The API may change in ways that are not backwards compatible]
 
 [float]
 === Current Tasks Information

docs/reference/index-modules.asciidoc

Lines changed: 2 additions & 2 deletions

@@ -47,7 +47,7 @@ specific index module:
 `index.shard.check_on_startup`::
 +
 --
-experimental[] Whether or not shards should be checked for corruption before opening. When
+Whether or not shards should be checked for corruption before opening. When
 corruption is detected, it will prevent the shard from being opened. Accepts:
 
 `false`::

@@ -69,7 +69,7 @@ corruption is detected, it will prevent the shard from being opened. Accepts:
 as corrupted will be automatically removed. This option *may result in data loss*.
 Use with extreme caution!
 
-Checking shards may take a lot of time on large indices.
+WARNING: Expert only. Checking shards may take a lot of time on large indices.
 --
 
 [[index-codec]] `index.codec`::

docs/reference/index-modules/index-sorting.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 [[index-modules-index-sorting]]
 == Index Sorting
 
-experimental[]
+beta[]
 
 When creating a new index in elasticsearch it is possible to configure how the Segments
 inside each Shard will be sorted. By default Lucene does not apply any sort.

docs/reference/index-modules/store.asciidoc

Lines changed: 2 additions & 2 deletions

@@ -32,7 +32,7 @@ PUT /my_index
 }
 ---------------------------------
 
-experimental[This is an expert-only setting and may be removed in the future]
+WARNING: This is an expert-only setting and may be removed in the future.
 
 The following sections lists all the different storage types supported.
 

@@ -73,7 +73,7 @@ compatibility.
 
 === Pre-loading data into the file system cache
 
-experimental[This is an expert-only setting and may be removed in the future]
+NOTE: This is an expert setting, the details of which may change in the future.
 
 By default, elasticsearch completely relies on the operating system file system
 cache for caching I/O operations. It is possible to set `index.store.preload`
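
The `index.store.preload` setting mentioned above is supplied at index creation time; a sketch (the extension list is illustrative):

[source,js]
--------------------------------------------------
PUT /my_index
{
  "settings": {
    "index.store.preload": ["nvd", "dvd"]
  }
}
--------------------------------------------------

The value is a list of file extensions whose data should be preloaded into the file system cache; as the new NOTE warns, the details of this expert setting may change.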

docs/reference/indices/analyze.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -144,7 +144,7 @@ GET _analyze
 If you want to get more advanced details, set `explain` to `true` (defaults to `false`). It will output all token attributes for each token.
 You can filter token attributes you want to output by setting `attributes` option.
 
-experimental[The format of the additional detail information is experimental and can change at any time]
+NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
 
 [source,js]
 --------------------------------------------------
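
The `explain` and `attributes` options described above go in the `_analyze` request body; a sketch (the token filter and attribute name are illustrative):

[source,js]
--------------------------------------------------
GET _analyze
{
  "tokenizer": "standard",
  "filter": ["snowball"],
  "text": "detailed output",
  "explain": true,
  "attributes": ["keyword"]
}
--------------------------------------------------

The response then includes per-token attribute details whose shape, per the replacement NOTE, follows Lucene's labelling and may change.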

docs/reference/indices/segments.asciidoc

Lines changed: 1 addition & 1 deletion

@@ -79,7 +79,7 @@ compound:: Whether the segment is stored in a compound file. When true, this
 
 To add additional information that can be used for debugging, use the `verbose` flag.
 
-experimental[The format of the additional verbose information is experimental and can change at any time]
+NOTE: The format of the additional detail information is labelled as experimental in Lucene and it may change in the future.
 
 [source,js]
 --------------------------------------------------
