1 file changed: +5 -6 lines

@@ -15,7 +15,7 @@ The `nori` analyzer consists of the following tokenizer and token filters:
 
 * <<analysis-nori-tokenizer,`nori_tokenizer`>>
 * <<analysis-nori-speech,`nori_part_of_speech`>> token filter
-* <<analysis-nori-reading,`nori_readingform`>> token filter
+* <<analysis-nori-readingform,`nori_readingform`>> token filter
 * {ref}/analysis-lowercase-tokenfilter.html[`lowercase`] token filter
 
 It supports the `decompound_mode` and `user_dictionary` settings from
@@ -379,20 +379,20 @@ PUT nori_sample
 GET nori_sample/_analyze
 {
   "analyzer": "my_analyzer",
-  "text": "鄕歌" <1>
+  "text": "鄕歌" <1>
 }
 --------------------------------------------------
 // CONSOLE
 
-<1> Hyangga
+<1> A token written in Hanja: Hyangga
 
 Which responds with:
 
 [source,js]
 --------------------------------------------------
 {
   "tokens" : [ {
-    "token" : "향가", <2>
+    "token" : "향가", <1>
     "start_offset" : 0,
     "end_offset" : 2,
     "type" : "word",
@@ -402,5 +402,4 @@ Which responds with:
 --------------------------------------------------
 // TESTRESPONSE
 
-<1> A token written in Hanja.
-<2> The Hanja form is replaced by the Hangul translation.
+<1> The Hanja form is replaced by the Hangul translation.
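
For context beyond the diff: the `my_analyzer` used in the `GET nori_sample/_analyze` request is created earlier on the same page by the `PUT nori_sample` request named in the second hunk header. That setup is not part of this change, but a minimal sketch of such an index definition, assuming a custom analyzer that chains `nori_tokenizer` with the `nori_readingform` token filter, could look like:

[source,js]
--------------------------------------------------
PUT nori_sample
{
  "settings": {
    "index": {
      "analysis": {
        "analyzer": {
          "my_analyzer": {
            "tokenizer": "nori_tokenizer",
            "filter": ["nori_readingform"]
          }
        }
      }
    }
  }
}
--------------------------------------------------

With an analyzer like this, the `_analyze` call above first tokenizes the Hanja input `鄕歌`, and `nori_readingform` then replaces the token with its Hangul reading `향가`, which is what the corrected callout in the `// TESTRESPONSE` block describes.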