
Commit 13a05e4

[DOCS] Abbreviate token filter titles (#50511)
1 parent c275fc1 commit 13a05e4

28 files changed (+112, −28 lines)

docs/reference/analysis/tokenfilters/flatten-graph-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-flatten-graph-tokenfilter]]
-=== Flatten Graph Token Filter
+=== Flatten graph token filter
+++++
+<titleabbrev>Flatten graph</titleabbrev>
+++++
 
 experimental[This functionality is marked as experimental in Lucene]

docs/reference/analysis/tokenfilters/hunspell-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-hunspell-tokenfilter]]
-=== Hunspell Token Filter
+=== Hunspell token filter
+++++
+<titleabbrev>Hunspell</titleabbrev>
+++++
 
 Basic support for hunspell stemming. Hunspell dictionaries will be
 picked up from a dedicated hunspell directory on the filesystem

docs/reference/analysis/tokenfilters/keyword-marker-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-keyword-marker-tokenfilter]]
-=== Keyword Marker Token Filter
+=== Keyword marker token filter
+++++
+<titleabbrev>Keyword marker</titleabbrev>
+++++
 
 Protects words from being modified by stemmers. Must be placed before
 any stemming filters.
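The protection mechanism described in the hunk above can be sketched in plain Python. This is illustrative only, assuming a two-stage pipeline; `keyword_marker` and `naive_stemmer` are hypothetical helpers, not the Lucene implementation:

```python
def keyword_marker(tokens, keywords):
    """Attach a keyword flag to each token; flagged tokens must not be stemmed."""
    return [(tok, tok in keywords) for tok in tokens]

def naive_stemmer(marked_tokens):
    """Toy stemmer (strips trailing 's') that honours the keyword flag."""
    return [tok if is_keyword else tok.rstrip("s")
            for tok, is_keyword in marked_tokens]

# "elasticsearch" is protected, so only the unflagged tokens are stemmed.
marked = keyword_marker(["cats", "elasticsearch", "dogs"], {"elasticsearch"})
stemmed = naive_stemmer(marked)
```

This is why the marker must sit before any stemming filters in the chain: the flag has to exist by the time a stemmer sees the token.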

docs/reference/analysis/tokenfilters/keyword-repeat-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-keyword-repeat-tokenfilter]]
-=== Keyword Repeat Token Filter
+=== Keyword repeat token filter
+++++
+<titleabbrev>Keyword repeat</titleabbrev>
+++++
 
 The `keyword_repeat` token filter Emits each incoming token twice once
 as keyword and once as a non-keyword to allow an unstemmed version of a

docs/reference/analysis/tokenfilters/kstem-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-kstem-tokenfilter]]
-=== KStem Token Filter
+=== KStem token filter
+++++
+<titleabbrev>KStem</titleabbrev>
+++++
 
 The `kstem` token filter is a high performance filter for english. All
 terms must already be lowercased (use `lowercase` filter) for this

docs/reference/analysis/tokenfilters/minhash-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-minhash-tokenfilter]]
-=== MinHash Token Filter
+=== MinHash token filter
+++++
+<titleabbrev>MinHash</titleabbrev>
+++++
 
 The `min_hash` token filter hashes each token of the token stream and divides
 the resulting hashes into buckets, keeping the lowest-valued hashes per

docs/reference/analysis/tokenfilters/multiplexer-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-multiplexer-tokenfilter]]
-=== Multiplexer Token Filter
+=== Multiplexer token filter
+++++
+<titleabbrev>Multiplexer</titleabbrev>
+++++
 
 A token filter of type `multiplexer` will emit multiple tokens at the same position,
 each version of the token having been run through a different filter. Identical

docs/reference/analysis/tokenfilters/normalization-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-normalization-tokenfilter]]
-=== Normalization Token Filter
+=== Normalization token filters
+++++
+<titleabbrev>Normalization</titleabbrev>
+++++
 
 There are several token filters available which try to normalize special
 characters of a certain language.

docs/reference/analysis/tokenfilters/pattern-capture-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-pattern-capture-tokenfilter]]
-=== Pattern Capture Token Filter
+=== Pattern capture token filter
+++++
+<titleabbrev>Pattern capture</titleabbrev>
+++++
 
 The `pattern_capture` token filter, unlike the `pattern` tokenizer,
 emits a token for every capture group in the regular expression.
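The capture-group behaviour quoted above can be sketched with Python's `re` module. This is a conceptual sketch, not the Lucene code; `pattern_capture` and its `preserve_original` parameter are named after the filter's documented option:

```python
import re

def pattern_capture(token, pattern, preserve_original=True):
    """Emit one output token per matched capture group, optionally
    keeping the original token (sketch of `pattern_capture`)."""
    out = [token] if preserve_original else []
    for m in re.finditer(pattern, token):
        out.extend(g for g in m.groups() if g)
    return out

# Alternating letter/digit groups each become their own token.
tokens = pattern_capture("foo123bar", r"([a-z]+)|(\d+)")
```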

docs/reference/analysis/tokenfilters/pattern_replace-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-pattern_replace-tokenfilter]]
-=== Pattern Replace Token Filter
+=== Pattern replace token filter
+++++
+<titleabbrev>Pattern replace</titleabbrev>
+++++
 
 The `pattern_replace` token filter allows to easily handle string
 replacements based on a regular expression. The regular expression is
Lines changed: 4 additions & 1 deletion
@@ -1,4 +1,7 @@
 [[analysis-phonetic-tokenfilter]]
-=== Phonetic Token Filter
+=== Phonetic token filter
+++++
+<titleabbrev>Phonetic</titleabbrev>
+++++
 
 The `phonetic` token filter is provided as the {plugins}/analysis-phonetic.html[`analysis-phonetic`] plugin.

docs/reference/analysis/tokenfilters/porterstem-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-porterstem-tokenfilter]]
-=== Porter Stem Token Filter
+=== Porter stem token filter
+++++
+<titleabbrev>Porter stem</titleabbrev>
+++++
 
 A token filter of type `porter_stem` that transforms the token stream as
 per the Porter stemming algorithm.

docs/reference/analysis/tokenfilters/predicate-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-predicatefilter-tokenfilter]]
-=== Predicate Token Filter Script
+=== Predicate script token filter
+++++
+<titleabbrev>Predicate script</titleabbrev>
+++++
 
 The predicate_token_filter token filter takes a predicate script, and removes tokens that do
 not match the predicate.
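The filtering behaviour described above can be sketched in plain Python. The real filter takes a Painless script as its predicate; here a Python callable stands in for it, so `predicate_filter` is a hypothetical analogue:

```python
def predicate_filter(tokens, predicate):
    """Keep only the tokens for which the predicate returns True
    (plain-Python analogue of `predicate_token_filter`)."""
    return [tok for tok in tokens if predicate(tok)]

# Drop short tokens, keeping only those longer than three characters.
kept = predicate_filter(["a", "quick", "brown", "fox"], lambda t: len(t) > 3)
```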
Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-remove-duplicates-tokenfilter]]
-=== Remove Duplicates Token Filter
+=== Remove duplicates token filter
+++++
+<titleabbrev>Remove duplicates</titleabbrev>
+++++
 
 A token filter of type `remove_duplicates` that drops identical tokens at the
 same position.
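The same-position deduplication described above can be sketched in plain Python, modelling each token as a `(text, position)` pair. `remove_duplicates` here is an illustrative helper, not the Lucene filter:

```python
def remove_duplicates(tokens):
    """Drop tokens identical in both text and position, keeping the
    first occurrence (sketch of the `remove_duplicates` behaviour)."""
    seen = set()
    out = []
    for term, pos in tokens:
        if (term, pos) not in seen:
            seen.add((term, pos))
            out.append((term, pos))
    return out

# Two stemmers emitted "jumpy" at position 0; only one copy survives.
deduped = remove_duplicates([("jumpy", 0), ("jumpi", 0), ("jumpy", 0), ("dog", 1)])
```

Note that `("jumpi", 0)` is kept: tokens at the same position are only dropped when their text is also identical.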
Lines changed: 4 additions & 1 deletion
@@ -1,4 +1,7 @@
 [[analysis-reverse-tokenfilter]]
-=== Reverse Token Filter
+=== Reverse token filter
+++++
+<titleabbrev>Reverse</titleabbrev>
+++++
 
 A token filter of type `reverse` that simply reverses each token.
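The effect is simple enough to show in one line of Python (an illustrative sketch, not the Lucene implementation):

```python
def reverse_filter(tokens):
    """Reverse the characters of each token, as the `reverse` filter does."""
    return [tok[::-1] for tok in tokens]

reversed_tokens = reverse_filter(["quick", "fox"])
```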

docs/reference/analysis/tokenfilters/shingle-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-shingle-tokenfilter]]
-=== Shingle Token Filter
+=== Shingle token filter
+++++
+<titleabbrev>Shingle</titleabbrev>
+++++
 
 NOTE: Shingles are generally used to help speed up phrase queries. Rather
 than building filter chains by hand, you may find it easier to use the

docs/reference/analysis/tokenfilters/snowball-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-snowball-tokenfilter]]
-=== Snowball Token Filter
+=== Snowball token filter
+++++
+<titleabbrev>Snowball</titleabbrev>
+++++
 
 A filter that stems words using a Snowball-generated stemmer. The
 `language` parameter controls the stemmer with the following available

docs/reference/analysis/tokenfilters/stemmer-override-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-stemmer-override-tokenfilter]]
-=== Stemmer Override Token Filter
+=== Stemmer override token filter
+++++
+<titleabbrev>Stemmer override</titleabbrev>
+++++
 
 Overrides stemming algorithms, by applying a custom mapping, then
 protecting these terms from being modified by stemmers. Must be placed

docs/reference/analysis/tokenfilters/stemmer-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-stemmer-tokenfilter]]
-=== Stemmer Token Filter
+=== Stemmer token filter
+++++
+<titleabbrev>Stemmer</titleabbrev>
+++++
 
 // Adds attribute for the 'minimal_portuguese' stemmer values link.
 // This link contains ~, which is converted to subscript.

docs/reference/analysis/tokenfilters/stop-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-stop-tokenfilter]]
-=== Stop Token Filter
+=== Stop token filter
+++++
+<titleabbrev>Stop</titleabbrev>
+++++
 
 A token filter of type `stop` that removes stop words from token
 streams.
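Stop-word removal can be sketched as a simple set-membership filter. The stopword set below is illustrative only, not Elasticsearch's default `_english_` list:

```python
def stop_filter(tokens, stopwords=frozenset({"a", "an", "and", "the", "of"})):
    """Remove stop words from the token stream (sketch of the `stop` filter)."""
    return [tok for tok in tokens if tok not in stopwords]

filtered = stop_filter(["the", "quick", "fox"])
```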

docs/reference/analysis/tokenfilters/synonym-graph-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-synonym-graph-tokenfilter]]
-=== Synonym Graph Token Filter
+=== Synonym graph token filter
+++++
+<titleabbrev>Synonym graph</titleabbrev>
+++++
 
 The `synonym_graph` token filter allows to easily handle synonyms,
 including multi-word synonyms correctly during the analysis process.

docs/reference/analysis/tokenfilters/synonym-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-synonym-tokenfilter]]
-=== Synonym Token Filter
+=== Synonym token filter
+++++
+<titleabbrev>Synonym</titleabbrev>
+++++
 
 The `synonym` token filter allows to easily handle synonyms during the
 analysis process. Synonyms are configured using a configuration file.
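Synonym expansion can be sketched as injecting configured alternatives alongside each matching token. This is a flattened sketch with an in-memory mapping standing in for the configuration file; the real filter also tracks positions and offsets:

```python
def synonym_filter(tokens, synonyms):
    """Expand each token with its configured synonyms
    (flattened plain-Python sketch of the `synonym` filter)."""
    out = []
    for tok in tokens:
        out.append(tok)
        out.extend(synonyms.get(tok, []))
    return out

expanded = synonym_filter(["fast", "fox"], {"fast": ["quick", "speedy"]})
```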
Lines changed: 4 additions & 1 deletion
@@ -1,4 +1,7 @@
 [[analysis-trim-tokenfilter]]
-=== Trim Token Filter
+=== Trim token filter
+++++
+<titleabbrev>Trim</titleabbrev>
+++++
 
 The `trim` token filter trims the whitespace surrounding a token.
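In plain-Python terms this is a per-token strip (an illustrative sketch of the effect on token text only):

```python
def trim_filter(tokens):
    """Strip surrounding whitespace from each token's text,
    as the `trim` filter does."""
    return [tok.strip() for tok in tokens]

trimmed = trim_filter([" fox ", "\tdog"])
```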

docs/reference/analysis/tokenfilters/truncate-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-truncate-tokenfilter]]
-=== Truncate Token Filter
+=== Truncate token filter
+++++
+<titleabbrev>Truncate</titleabbrev>
+++++
 
 The `truncate` token filter can be used to truncate tokens into a
 specific length.
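The effect can be sketched as slicing each token's text (illustrative sketch; the default of 10 matches the filter's documented `length` default, but verify against your version's docs):

```python
def truncate_filter(tokens, length=10):
    """Cut each token down to at most `length` characters
    (sketch of the `truncate` filter)."""
    return [tok[:length] for tok in tokens]

truncated = truncate_filter(["extraordinarily", "fox"], length=5)
```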

docs/reference/analysis/tokenfilters/unique-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-unique-tokenfilter]]
-=== Unique Token Filter
+=== Unique token filter
+++++
+<titleabbrev>Unique</titleabbrev>
+++++
 
 The `unique` token filter can be used to only index unique tokens during
 analysis. By default it is applied on all the token stream. If
Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-uppercase-tokenfilter]]
-=== Uppercase Token Filter
+=== Uppercase token filter
+++++
+<titleabbrev>Uppercase</titleabbrev>
+++++
 
 A token filter of type `uppercase` that normalizes token text to upper
 case.
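As a plain-Python sketch of the effect on token text:

```python
def uppercase_filter(tokens):
    """Normalize token text to upper case, as the `uppercase` filter does."""
    return [tok.upper() for tok in tokens]

uppercased = uppercase_filter(["quick", "Fox"])
```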

docs/reference/analysis/tokenfilters/word-delimiter-graph-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-word-delimiter-graph-tokenfilter]]
-=== Word Delimiter Graph Token Filter
+=== Word delimiter graph token filter
+++++
+<titleabbrev>Word delimiter graph</titleabbrev>
+++++
 
 experimental[This functionality is marked as experimental in Lucene]

docs/reference/analysis/tokenfilters/word-delimiter-tokenfilter.asciidoc

Lines changed: 4 additions & 1 deletion
@@ -1,5 +1,8 @@
 [[analysis-word-delimiter-tokenfilter]]
-=== Word Delimiter Token Filter
+=== Word delimiter token filter
+++++
+<titleabbrev>Word delimiter</titleabbrev>
+++++
 
 Named `word_delimiter`, it Splits words into subwords and performs
 optional transformations on subword groups. Words are split into
