Commit 56cb357

spinscale authored and jrodewig committed
[DOCS] Convert main README to asciidoc (#50303)
Converts the main README file to asciidoc. Also includes the following changes:

* Mentions Slack instead of IRC
* Removes mention of the native Java API; HTTP should be used instead
* Removes Java as an installation requirement
1 parent 4fef422 commit 56cb357

File tree

1 file changed: +49 -54 lines
@@ -1,8 +1,8 @@
-h1. Elasticsearch
+= Elasticsearch
 
-h2. A Distributed RESTful Search Engine
+== A Distributed RESTful Search Engine
 
-h3. "https://www.elastic.co/products/elasticsearch":https://www.elastic.co/products/elasticsearch
+=== https://www.elastic.co/products/elasticsearch[https://www.elastic.co/products/elasticsearch]
 
 Elasticsearch is a distributed RESTful search engine built for the cloud. Features include:
 
@@ -15,39 +15,34 @@ Elasticsearch is a distributed RESTful search engine built for the cloud. Featur
 ** Index level configuration (number of shards, index storage, ...).
 * Various set of APIs
 ** HTTP RESTful API
-** Native Java API.
 ** All APIs perform automatic node operation rerouting.
 * Document oriented
 ** No need for upfront schema definition.
 ** Schema can be defined for customization of the indexing process.
 * Reliable, Asynchronous Write Behind for long term persistency.
 * (Near) Real Time Search.
-* Built on top of Lucene
+* Built on top of Apache Lucene
 ** Each shard is a fully functional Lucene index
 ** All the power of Lucene easily exposed through simple configuration / plugins.
 * Per operation consistency
 ** Single document level operations are atomic, consistent, isolated and durable.
 
-h2. Getting Started
+== Getting Started
 
 First of all, DON'T PANIC. It will take 5 minutes to get the gist of what Elasticsearch is all about.
 
-h3. Requirements
+=== Installation
 
-You need to have a recent version of Java installed. See the "Setup":http://www.elastic.co/guide/en/elasticsearch/reference/current/setup.html#jvm-version page for more information.
-
-h3. Installation
-
-* "Download":https://www.elastic.co/downloads/elasticsearch and unzip the Elasticsearch official distribution.
-* Run @bin/elasticsearch@ on unix, or @bin\elasticsearch.bat@ on windows.
-* Run @curl -X GET http://localhost:9200/@.
+* https://www.elastic.co/downloads/elasticsearch[Download] and unpack the Elasticsearch official distribution.
+* Run `bin/elasticsearch` on Linux or macOS. Run `bin\elasticsearch.bat` on Windows.
+* Run `curl -X GET http://localhost:9200/`.
 * Start more servers ...
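To confirm that the extra servers actually joined the same cluster, the cat nodes API gives a one-line-per-node overview. A minimal sketch, not part of this commit, assuming the default port on localhost:

----
# List the nodes currently in the cluster (one line per node).
curl -X GET 'http://localhost:9200/_cat/nodes?v'
----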

-h3. Indexing
+=== Indexing
 
-Let's try and index some twitter like information. First, let's index some tweets (the @twitter@ index will be created automatically):
+Let's try and index some twitter like information. First, let's index some tweets (the `twitter` index will be created automatically):
 
-<pre>
+----
 curl -XPUT 'http://localhost:9200/twitter/_doc/1?pretty' -H 'Content-Type: application/json' -d '
 {
 "user": "kimchy",
@@ -68,50 +63,50 @@ curl -XPUT 'http://localhost:9200/twitter/_doc/3?pretty' -H 'Content-Type: appli
 "post_date": "2010-01-15T01:46:38",
 "message": "Building the site, should be kewl"
 }'
-</pre>
+----
 
 Now, let's see if the information was added by GETting it:
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/twitter/_doc/1?pretty=true'
 curl -XGET 'http://localhost:9200/twitter/_doc/2?pretty=true'
 curl -XGET 'http://localhost:9200/twitter/_doc/3?pretty=true'
-</pre>
+----
 
-h3. Searching
+=== Searching
 
 Mmm search..., shouldn't it be elastic?
-Let's find all the tweets that @kimchy@ posted:
+Let's find all the tweets that `kimchy` posted:
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/twitter/_search?q=user:kimchy&pretty=true'
-</pre>
+----
 
 We can also use the JSON query language Elasticsearch provides instead of a query string:
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
 "query" : {
 "match" : { "user": "kimchy" }
 }
 }'
-</pre>
+----
 
-Just for kicks, let's get all the documents stored (we should see the tweet from @elastic@ as well):
+Just for kicks, let's get all the documents stored (we should see the tweet from `elastic` as well):
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
 "query" : {
 "match_all" : {}
 }
 }'
-</pre>
+----
 
-We can also do range search (the @post_date@ was automatically identified as date)
+We can also do range search (the `post_date` was automatically identified as date)
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
 "query" : {
@@ -120,19 +115,19 @@ curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type:
 }
 }
 }'
-</pre>
+----
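The body of that range query falls in the gap between the two hunks above, so it is not visible in this view. For reference, a minimal sketch of a complete range query; the field comes from the example documents, but the dates are hypothetical and not taken from the commit:

----
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "range" : {
            "post_date" : { "gte" : "2009-11-15T13:00:00", "lte" : "2010-01-16T00:00:00" }
        }
    }
}'
----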

 There are many more options to perform search, after all, it's a search product no? All the familiar Lucene queries are available through the JSON query language, or through the query parser.
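The query parser mentioned there is exposed through the `query_string` query, which accepts Lucene query syntax. A minimal sketch, not part of this commit, with an illustrative query string:

----
curl -XGET 'http://localhost:9200/twitter/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "query" : {
        "query_string" : { "query" : "user:kimchy AND message:site" }
    }
}'
----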

-h3. Multi Tenant and Indices
+=== Multi Tenant and Indices
 
 Man, that twitter index might get big (in this case, index size == valuation). Let's see if we can structure our twitter system a bit differently in order to support such large amounts of data.
 
-Elasticsearch supports multiple indices. In the previous example we used an index called @twitter@ that stored tweets for every user.
+Elasticsearch supports multiple indices. In the previous example we used an index called `twitter` that stored tweets for every user.
 
 Another way to define our simple twitter system is to have a different index per user (note, though that each index has an overhead). Here is the indexing curl's in this case:
 
-<pre>
+----
 curl -XPUT 'http://localhost:9200/kimchy/_doc/1?pretty' -H 'Content-Type: application/json' -d '
 {
 "user": "kimchy",
@@ -146,69 +141,69 @@ curl -XPUT 'http://localhost:9200/kimchy/_doc/2?pretty' -H 'Content-Type: applic
 "post_date": "2009-11-15T14:12:12",
 "message": "Another tweet, will it be indexed?"
 }'
-</pre>
+----
 
-The above will index information into the @kimchy@ index. Each user will get their own special index.
+The above will index information into the `kimchy` index. Each user will get their own special index.
 
 Complete control on the index level is allowed. As an example, in the above case, we might want to change from the default 1 shard with 1 replica per index, to 2 shards with 1 replica per index (because this user tweets a lot). Here is how this can be done (the configuration can be in yaml as well):
 
-<pre>
+----
 curl -XPUT http://localhost:9200/another_user?pretty -H 'Content-Type: application/json' -d '
 {
 "settings" : {
 "index.number_of_shards" : 2,
 "index.number_of_replicas" : 1
 }
 }'
-</pre>
+----
 
 Search (and similar operations) are multi index aware. This means that we can easily search on more than one
 index (twitter user), for example:
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
 "query" : {
 "match_all" : {}
 }
 }'
-</pre>
+----
 
 Or on all the indices:
 
-<pre>
+----
 curl -XGET 'http://localhost:9200/_search?pretty=true' -H 'Content-Type: application/json' -d '
 {
 "query" : {
 "match_all" : {}
 }
 }'
-</pre>
+----
 
-{One liner teaser}: And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from friends of my friends).
+And the cool part about that? You can easily search on multiple twitter users (indices), with different boost levels per user (index), making social search so much simpler (results from my friends rank higher than results from friends of my friends).
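The per-index boosting mentioned above is available as the `indices_boost` option in the search body. A minimal sketch, not part of this commit, using the two example indices with arbitrary boost values:

----
curl -XGET 'http://localhost:9200/kimchy,another_user/_search?pretty=true' -H 'Content-Type: application/json' -d '
{
    "indices_boost" : [
        { "kimchy" : 2.0 },
        { "another_user" : 1.0 }
    ],
    "query" : {
        "match_all" : {}
    }
}'
----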

-h3. Distributed, Highly Available
+=== Distributed, Highly Available
 
 Let's face it, things will fail....
 
 Elasticsearch is a highly available and distributed search engine. Each index is broken down into shards, and each shard can have one or more replicas. By default, an index is created with 1 shard and 1 replica per shard (1/1). There are many topologies that can be used, including 1/10 (improve search performance), or 20/1 (improve indexing performance, with search executed in a map reduce fashion across shards).
 
 In order to play with the distributed nature of Elasticsearch, simply bring more nodes up and shut down nodes. The system will continue to serve requests (make sure you use the correct http port) with the latest data indexed.
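While bringing nodes up and down, the cluster health API is a convenient way to watch shards and replicas being reallocated. A minimal sketch, not part of this commit, assuming the default port on localhost:

----
# Reports cluster status (green/yellow/red) plus node and shard counts.
curl -X GET 'http://localhost:9200/_cluster/health?pretty'
----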

-h3. Where to go from here?
+=== Where to go from here?
 
-We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the "elastic.co":http://www.elastic.co/products/elasticsearch website. General questions can be asked on the "Elastic Discourse forum":https://discuss.elastic.co or on IRC on Freenode at "#elasticsearch":https://webchat.freenode.net/#elasticsearch. The Elasticsearch GitHub repository is reserved for bug reports and feature requests only.
+We have just covered a very small portion of what Elasticsearch is all about. For more information, please refer to the http://www.elastic.co/products/elasticsearch[elastic.co] website. General questions can be asked on the https://discuss.elastic.co[Elastic Forum] or https://ela.st/slack[on Slack]. The Elasticsearch GitHub repository is reserved for bug reports and feature requests only.
 
-h3. Building from Source
+=== Building from Source
 
-Elasticsearch uses "Gradle":https://gradle.org for its build system.
+Elasticsearch uses https://gradle.org[Gradle] for its build system.
 
-In order to create a distribution, simply run the @./gradlew assemble@ command in the cloned directory.
+In order to create a distribution, simply run the `./gradlew assemble` command in the cloned directory.
 
-The distribution for each project will be created under the @build/distributions@ directory in that project.
+The distribution for each project will be created under the `build/distributions` directory in that project.
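Putting those two steps together as a shell sketch, not part of this commit; the example output path under distribution/archives is an assumption about the repository layout and may differ:

----
# Build distributions for all projects, from the repository root.
./gradlew assemble

# Hypothetical example location of the resulting archives.
ls distribution/archives/*/build/distributions/
----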

-See the "TESTING":TESTING.asciidoc file for more information about running the Elasticsearch test suite.
+See the xref:TESTING.asciidoc[TESTING] for more information about running the Elasticsearch test suite.
 
-h3. Upgrading from older Elasticsearch versions
+=== Upgrading from older Elasticsearch versions
 
-In order to ensure a smooth upgrade process from earlier versions of Elasticsearch, please see our "upgrade documentation":https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html for more details on the upgrade process.
+In order to ensure a smooth upgrade process from earlier versions of Elasticsearch, please see our https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-upgrade.html[upgrade documentation] for more details on the upgrade process.
