
Commit 9ad0597

jimczi and jrodewig authored
Removes the old Lucene experimental flag from the analyzer documentation (#53217)
This change removes the Lucene experimental flag from the documentation of the following tokenizers/filters:

* Simple Pattern Split Tokenizer
* Simple Pattern Tokenizer
* Flatten Graph Token Filter
* Word Delimiter Graph Token Filter

The flag is still present in the Lucene codebase, but Elasticsearch has fully supported these tokenizers/filters for a long time now, so the docs flag is misleading.

Co-authored-by: James Rodewig <[email protected]>
1 parent 58340c2 · commit 9ad0597

File tree

3 files changed (+0, −6 lines)


docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc

−2

```diff
@@ -4,8 +4,6 @@
 <titleabbrev>Flatten graph</titleabbrev>
 ++++
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `flatten_graph` token filter accepts an arbitrary graph token
 stream, such as that produced by
 <<analysis-synonym-graph-tokenfilter>>, and flattens it into a single
```
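For context on the filter this hunk documents, here is a minimal `_analyze` sketch applying `flatten_graph` after an inline `synonym_graph` filter. The request and values are illustrative only, not part of this commit, and assume a running Elasticsearch cluster:

```console
# Illustrative request, not part of this commit; assumes a local cluster.
GET /_analyze
{
  "tokenizer": "standard",
  "filter": [
    {
      "type": "synonym_graph",
      "synonyms": [ "dns, domain name system" ]
    },
    "flatten_graph"
  ],
  "text": "domain name system is fragile"
}
```

Without `flatten_graph`, the multi-token synonym produces a token graph that indexing cannot consume directly; flattening trades some positional accuracy for a linear stream that can be indexed.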

docs/reference/analysis/tokenizers/simplepattern-tokenizer.asciidoc

−2

```diff
@@ -1,8 +1,6 @@
 [[analysis-simplepattern-tokenizer]]
 === Simple Pattern Tokenizer
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `simple_pattern` tokenizer uses a regular expression to capture matching
 text as terms. The set of regular expression features it supports is more
 limited than the <<analysis-pattern-tokenizer,`pattern`>> tokenizer, but the
```
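The tokenizer this hunk documents can be exercised with the `_analyze` API. A minimal sketch (illustrative values, not part of this commit; assumes a running cluster) that captures three-digit sequences as terms:

```console
# Illustrative: yields the terms [786, 335, 514].
POST /_analyze
{
  "tokenizer": {
    "type": "simple_pattern",
    "pattern": "[0123456789]{3}"
  },
  "text": "fd-786-335-514-x"
}
```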

docs/reference/analysis/tokenizers/simplepatternsplit-tokenizer.asciidoc

−2

```diff
@@ -1,8 +1,6 @@
 [[analysis-simplepatternsplit-tokenizer]]
 === Simple Pattern Split Tokenizer
 
-experimental[This functionality is marked as experimental in Lucene]
-
 The `simple_pattern_split` tokenizer uses a regular expression to split the
 input into terms at pattern matches. The set of regular expression features it
 supports is more limited than the <<analysis-pattern-tokenizer,`pattern`>>
```
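Similarly, a minimal `_analyze` sketch for the split variant (again illustrative only; assumes a running cluster), splitting on underscores rather than capturing matches:

```console
# Illustrative: yields the terms [an, underscored, phrase].
POST /_analyze
{
  "tokenizer": {
    "type": "simple_pattern_split",
    "pattern": "_"
  },
  "text": "an_underscored_phrase"
}
```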
