Commit f3d4966

Docs: Clean up for asciidoctor (#1275)
This makes three changes in preparation for switching the docs to Asciidoctor:

1. Fixes a broken link. As a side effect this fixes a missing emphasis in Asciidoctor that was caused by parsing issues with the `_` in the old link.
2. Fixes an `added` macro that renders "funny" in Asciidoctor.
3. Replaces a tab in a code example with spaces. AsciiDoc was doing this automatically but Asciidoctor preserves the tab. We don't need the tab.
1 parent ca79e70 · commit f3d4966

File tree

3 files changed: 6 additions & 2 deletions

docs/src/reference/asciidoc/core/configuration.adoc

Lines changed: 4 additions & 0 deletions
@@ -441,12 +441,16 @@ added[2.1]
 added[2.2]
 `es.net.proxy.https.host`:: Https proxy host name
+
 added[2.2]
 `es.net.proxy.https.port`:: Https proxy port
+
 added[2.2]
 `es.net.proxy.https.user`:: Https proxy user name
+
 added[2.2]
 `es.net.proxy.https.pass`:: Https proxy password
+
 added[2.2]
 `es.net.proxy.https.use.system.props`(default yes):: Whether the use the system Https proxy properties (namely `https.proxyHost` and `https.proxyPort`) or not

docs/src/reference/asciidoc/core/pig.adoc

Lines changed: 1 addition & 1 deletion
@@ -164,7 +164,7 @@ For example:
 [source,sql]
 ----
 STORE B INTO '...' USING org.elasticsearch.hadoop.pig.EsStorage(
-	'es.mapping.names=date:@timestamp, uRL:url') <1>
+    'es.mapping.names=date:@timestamp, uRL:url') <1>
 ----
 
 <1> Pig column `date` mapped in {es} to `@timestamp`; Pig column `uRL` mapped in {es} to `url`
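
The `es.mapping.names` option shown in the callout is plain connector configuration, so it applies wherever {eh} accepts settings, not just in Pig. A minimal Scala/Spark sketch, assuming the `saveToEs(resource, cfg)` overload and hypothetical index and field values:

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._ // brings saveToEs into scope on RDDs

val sc = new SparkContext(new SparkConf().setAppName("mapping-names-sketch"))

// a hypothetical document whose fields get renamed on the way to {es}
val doc = Map("date" -> "2015-10-06", "uRL" -> "http://example.com")

// "my-index/my-type" is a hypothetical resource; the mapping mirrors the Pig example above
sc.makeRDD(Seq(doc)).saveToEs("my-index/my-type",
  Map("es.mapping.names" -> "date:@timestamp, uRL:url"))
----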

docs/src/reference/asciidoc/core/spark.adoc

Lines changed: 1 addition & 1 deletion
@@ -295,7 +295,7 @@ saveToEs(javaRDD, "my-collection/{media_type}"); <1>
 [[spark-write-meta]]
 ==== Handling document metadata
 
-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
 In other words, for ++RDD++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.
 
 The metadata is described through the +Metadata+ Java http://docs.oracle.com/javase/tutorial/java/javaOO/enum.html[enum] within +org.elasticsearch.spark.rdd+ package which identifies its type - +id+, +ttl+, +version+, etc...
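
The pair-RDD mechanism this paragraph describes pairs each document with a metadata map keyed by that +Metadata+ enum. A minimal Scala sketch, assuming the `saveToEsWithMeta` call and a hypothetical `airports/2015` resource:

[source,scala]
----
import org.apache.spark.{SparkConf, SparkContext}
import org.elasticsearch.spark._              // brings saveToEsWithMeta into scope
import org.elasticsearch.spark.rdd.Metadata._ // ID, TTL, VERSION, ...

val sc = new SparkContext(new SparkConf().setAppName("es-meta-sketch"))

// document sources
val otp = Map("iata" -> "OTP", "name" -> "Otopeni")
val muc = Map("iata" -> "MUC", "name" -> "Munich")

// per-document metadata, keyed by the Metadata enum
val otpMeta = Map(ID -> 1, TTL -> "3m")
val mucMeta = Map(ID -> 2, VERSION -> "23")

// pair RDD: key = metadata, value = document source
val airports = sc.makeRDD(Seq((otpMeta, otp), (mucMeta, muc)))
airports.saveToEsWithMeta("airports/2015")
----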
