Commit 5783b63

[DOCS] Fix headings for simple analyzer docs (#58910) (#58919)
1 parent 0a1ef55 commit 5783b63

File tree: 1 file changed, +31 -36 lines changed

docs/reference/analysis/analyzers/simple-analyzer.asciidoc

Lines changed: 31 additions & 36 deletions
@@ -4,25 +4,25 @@
 <titleabbrev>Simple</titleabbrev>
 ++++
 
-The `simple` analyzer breaks text into terms whenever it encounters a
-character which is not a letter. All terms are lower cased.
+The `simple` analyzer breaks text into tokens at any non-letter character, such
+as numbers, spaces, hyphens and apostrophes, discards non-letter characters,
+and changes uppercase to lowercase.
 
-[float]
-=== Example output
+[[analysis-simple-analyzer-ex]]
+==== Example
 
 [source,console]
----------------------------
+----
 POST _analyze
 {
   "analyzer": "simple",
   "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
 }
----------------------------
-
-/////////////////////
+----
 
+////
 [source,console-result]
-----------------------------
+----
 {
   "tokens": [
     {
@@ -104,52 +104,47 @@ POST _analyze
     }
   ]
 }
-----------------------------
-
-/////////////////////
-
+----
+////
 
-The above sentence would produce the following terms:
+The `simple` analyzer parses the sentence and produces the following
+tokens:
 
 [source,text]
----------------------------
+----
 [ the, quick, brown, foxes, jumped, over, the, lazy, dog, s, bone ]
----------------------------
+----
 
-[float]
-=== Configuration
+[[analysis-simple-analyzer-definition]]
+==== Definition
 
-The `simple` analyzer is not configurable.
-
-[float]
-=== Definition
-
-The `simple` analzyer consists of:
+The `simple` analyzer is defined by one tokenizer:
 
 Tokenizer::
-* <<analysis-lowercase-tokenizer,Lower Case Tokenizer>>
+* <<analysis-lowercase-tokenizer, Lowercase Tokenizer>>
+
+[[analysis-simple-analyzer-customize]]
+==== Customize
 
-If you need to customize the `simple` analyzer then you need to recreate
-it as a `custom` analyzer and modify it, usually by adding token filters.
-This would recreate the built-in `simple` analyzer and you can use it as
-a starting point for further customization:
+To customize the `simple` analyzer, duplicate it to create the basis for
+a custom analyzer. This custom analyzer can be modified as required, usually by
+adding token filters.
 
 [source,console]
-----------------------------------------------------
-PUT /simple_example
+----
+PUT /my_index
 {
   "settings": {
     "analysis": {
       "analyzer": {
-        "rebuilt_simple": {
+        "my_custom_simple_analyzer": {
           "tokenizer": "lowercase",
-          "filter": [ <1>
+          "filter": [ <1>
           ]
         }
       }
     }
   }
 }
-----------------------------------------------------
-// TEST[s/\n$/\nstartyaml\n - compare_analyzers: {index: simple_example, first: simple, second: rebuilt_simple}\nendyaml\n/]
-<1> You'd add any token filters here.
+----
+<1> Add token filters here.
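
For reference, a custom analyzer registered this way is addressed by name once the index exists. A minimal sketch of how it might be exercised, assuming the `my_index` and `my_custom_simple_analyzer` names from the snippet above; the field name `my_field` is illustrative and not part of this commit:

# Test the custom analyzer against the index it was created in.
GET /my_index/_analyze
{
  "analyzer": "my_custom_simple_analyzer",
  "text": "The 2 QUICK Brown-Foxes jumped over the lazy dog's bone."
}

# Optionally apply it to a text field so that field uses it at index time.
PUT /my_index/_mapping
{
  "properties": {
    "my_field": { "type": "text", "analyzer": "my_custom_simple_analyzer" }
  }
}

With no extra token filters configured, the `_analyze` output should match the built-in `simple` analyzer's token list shown in the diff above.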
