Commit 32ee7d5
Docs: Clean up for asciidoctor (#1275)
This makes three changes in preparation for switching the docs to Asciidoctor:

1. Fixes a broken link. As a side effect this fixes a missing emphasis in Asciidoctor that was caused by parsing issues with the `_` in the old link.
2. Fixes an `added` macro that renders "funny" in Asciidoctor.
3. Replaces a tab in a code example with spaces. AsciiDoc was doing this automatically but Asciidoctor preserves the tab. We don't need the tab.
1 parent ff91d11 commit 32ee7d5

3 files changed (+7 lines, -3 lines)

docs/src/reference/asciidoc/core/configuration.adoc

Lines changed: 4 additions & 0 deletions

@@ -504,12 +504,16 @@ added[2.1]
 
 added[2.2]
 `es.net.proxy.https.host`:: Https proxy host name
+
 added[2.2]
 `es.net.proxy.https.port`:: Https proxy port
+
 added[2.2]
 `es.net.proxy.https.user`:: Https proxy user name
+
 added[2.2]
 `es.net.proxy.https.pass`:: Https proxy password
+
 added[2.2]
 `es.net.proxy.https.use.system.props`(default yes):: Whether the use the system Https proxy properties (namely `https.proxyHost` and `https.proxyPort`) or not
 

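For context, the `es.net.proxy.https.*` settings in the hunk above configure {eh}'s HTTPS proxy support. Below is a minimal sketch, not part of this commit, of supplying them through a Hadoop `Configuration`; the host, port, and credential values are placeholders.

[source,java]
----
import org.apache.hadoop.conf.Configuration;

public class HttpsProxySketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Placeholder values, not taken from the commit
        conf.set("es.net.proxy.https.host", "proxy.example.com");
        conf.set("es.net.proxy.https.port", "8443");
        conf.set("es.net.proxy.https.user", "proxyUser");
        conf.set("es.net.proxy.https.pass", "proxyPass");
        // The docs above note this defaults to "yes" (reuse the JVM-wide
        // https.proxyHost/https.proxyPort system properties); setting "no"
        // to rely on the explicit values above is an assumption, not
        // something the commit states
        conf.set("es.net.proxy.https.use.system.props", "no");
    }
}
----
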
docs/src/reference/asciidoc/core/pig.adoc

Lines changed: 1 addition & 1 deletion

@@ -164,7 +164,7 @@ For example:
 [source,sql]
 ----
 STORE B INTO '...' USING org.elasticsearch.hadoop.pig.EsStorage(
-	'es.mapping.names=date:@timestamp, uRL:url') <1>
+    'es.mapping.names=date:@timestamp, uRL:url') <1>
 ----
 
 <1> Pig column `date` mapped in {es} to `@timestamp`; Pig column `uRL` mapped in {es} to `url`
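The option being re-indented above, `es.mapping.names`, renames fields as documents are written to {es}. As a minimal sketch, not part of this commit, the same mapping could be set through a Hadoop `Configuration`; the `my-index/docs` resource is a placeholder.

[source,java]
----
import org.apache.hadoop.conf.Configuration;

public class MappingNamesSketch {
    public static void main(String[] args) {
        Configuration conf = new Configuration();
        // Placeholder index/type for the example
        conf.set("es.resource", "my-index/docs");
        // Same mapping as the Pig snippet: field `date` is written to
        // Elasticsearch as `@timestamp`, and `uRL` as `url`
        conf.set("es.mapping.names", "date:@timestamp, uRL:url");
    }
}
----
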

docs/src/reference/asciidoc/core/spark.adoc

Lines changed: 2 additions & 2 deletions

@@ -295,7 +295,7 @@ saveToEs(javaRDD, "my-collection/{media_type}"); <1>
 [[spark-write-meta]]
 ==== Handling document metadata
 
-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
 In other words, for ++RDD++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.
 
 The metadata is described through the +Metadata+ Java http://docs.oracle.com/javase/tutorial/java/javaOO/enum.html[enum] within +org.elasticsearch.spark.rdd+ package which identifies its type - +id+, +ttl+, +version+, etc...
@@ -924,7 +924,7 @@ jssc.start();
 [[spark-streaming-write-meta]]
 ==== Handling document metadata
 
-{es} allows each document to have its own http://www.elasticsearch.org/guide/en/elasticsearch/guide/current/\_document\_metadata.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
+{es} allows each document to have its own {ref}/mapping-fields.html[metadata]. As explained above, through the various <<cfg-mapping, mapping>> options one can customize these parameters so that their values are extracted from their belonging document. Further more, one can even include/exclude what parts of the data are sent back to {es}. In Spark, {eh} extends this functionality allowing metadata to be supplied _outside_ the document itself through the use of http://spark.apache.org/docs/latest/programming-guide.html#working-with-key-value-pairs[_pair_ ++RDD++s].
 
 This is no different in Spark Streaming. For ++DStreams++s containing a key-value tuple, the metadata can be extracted from the key and the value used as the document source.
 
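The paragraphs whose links are fixed above describe supplying document metadata through _pair_ ++RDD++s, with the key carrying a map keyed by the +Metadata+ enum and the value serving as the document source. Below is a minimal sketch, not part of this commit, of what that might look like with the Java API; the `airports/data` resource and the document contents are placeholders.

[source,java]
----
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.elasticsearch.spark.rdd.Metadata;
import org.elasticsearch.spark.rdd.api.java.JavaEsSpark;

import scala.Tuple2;

public class SaveWithMetaSketch {
    public static void main(String[] args) {
        JavaSparkContext jsc = new JavaSparkContext(
                new SparkConf().setAppName("es-meta-sketch"));

        // The document source itself (placeholder content)
        Map<String, Object> doc = new HashMap<>();
        doc.put("airport", "OTP");

        // Metadata supplied outside the document, keyed by the Metadata enum
        Map<Metadata, Object> meta = new HashMap<>();
        meta.put(Metadata.ID, 1);

        // Pair RDD: the key carries the metadata, the value is the document
        JavaPairRDD<Object, Object> pairRdd = jsc.parallelizePairs(
                Arrays.asList(new Tuple2<Object, Object>(meta, doc)));

        JavaEsSpark.saveToEsWithMeta(pairRdd, "airports/data");
    }
}
----
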
