@@ -16,8 +16,8 @@ search and visualize the data by using {kib}. After you get the basic setup
working, you add {ls} for additional parsing.

To get started, you can install the {stack} on a single VM or even on your
- laptop.
-
+ laptop.
+
IMPORTANT: Implementing security is a critical step in setting up the {stack}.
To get up and running quickly with a sample installation, you skip those steps
right now. Before sending sensitive data across the network, make sure you
@@ -45,6 +45,11 @@ distributed storage, search, and analytics engine. It can be used for many
purposes, but one context where it excels is indexing streams of semi-structured
data, such as logs or decoded network packets.

+ Elasticsearch can be run on your own hardware or using our hosted
+ Elasticsearch Service on https://www.elastic.co/cloud[Elastic Cloud], which is
+ available on AWS and GCP. You can
+ https://www.elastic.co/cloud/elasticsearch-service/signup[try out the hosted service] for free.
+
To download and install {es}, open a terminal window and use the commands that
work with your system (<<deb, deb>> for Debian/Ubuntu, <<rpm, rpm>> for
Redhat/Centos/Fedora, <<mac, mac>> for OS X, and <<win, win>> for Windows):
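As a rough sketch of what the download step looks like for the tar.gz distribution (the version is a placeholder and the archive name also varies by platform in newer releases; the per-platform commands referenced above remain authoritative):

[source,sh]
----
# Placeholder version -- substitute the release you want to install.
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-<version>.tar.gz
tar -xzf elasticsearch-<version>.tar.gz
cd elasticsearch-<version>
./bin/elasticsearch
----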
@@ -195,7 +200,10 @@ view, and interact with data stored in {es} indices. You can easily perform
advanced data analysis and visualize your data in a variety of charts, tables,
and maps.

- To get started, we recommend that you install {kib} on the same server as {es},
+ If you are running our hosted Elasticsearch Service on https://www.elastic.co/cloud[Elastic Cloud],
+ then Kibana can be enabled with the https://www.elastic.co/guide/en/cloud/current/ec-enable-kibana.html[flick of a switch].
+
+ Otherwise, we recommend that you install {kib} on the same server as {es},
but it is not required. If you install the products on different servers, you'll
need to change the URL (IP:PORT) of the {es} server in the {kib} configuration
file, `config/kibana.yml`, before starting {kib}.
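For the separate-servers case mentioned above, the relevant `config/kibana.yml` setting is sketched below. The address is a placeholder, and the key name depends on the {kib} version (`elasticsearch.url` in older releases, `elasticsearch.hosts` in newer ones):

[source,yaml]
----
# Point Kibana at an Elasticsearch node on another server (example address only).
elasticsearch.url: "http://192.168.1.10:9200"
----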
@@ -420,7 +428,7 @@ https://www.elastic.co/downloads/beats[{beats} download] page.

{metricbeat} provides pre-built modules that you can use to rapidly implement
and deploy a system monitoring solution, complete with sample dashboards and
- data visualizations, in about 5 minutes.
+ data visualizations, in about 5 minutes.

In this section, you learn how to run the `system` module to collect metrics
from the operating system and services running on your server. The system module
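For the `system` module walkthrough this hunk introduces, modules are typically managed with the `metricbeat modules` and `setup` commands. A minimal sketch (the system module already ships enabled by default in recent versions, so the enable step may be a no-op):

[source,sh]
----
# List available modules, enable the system module, and load the sample dashboards.
metricbeat modules list
metricbeat modules enable system
metricbeat setup --dashboards
----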
@@ -507,7 +515,7 @@ PS C:\Program Files\Metricbeat> Start-Service metricbeat
----


- {metricbeat} runs and starts sending system metrics to {es}.
+ {metricbeat} runs and starts sending system metrics to {es}.

[float]
[[visualize-system-metrics]]
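Once {metricbeat} is running, one quick way to confirm that events are arriving is to query the index directly. The sketch below assumes {es} on `localhost:9200` and the default `metricbeat-*` index pattern:

[source,sh]
----
# Return one indexed metric event, if any have arrived.
curl -XGET 'http://localhost:9200/metricbeat-*/_search?pretty&size=1'
----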
@@ -614,7 +622,7 @@ https://www.elastic.co/downloads/logstash[{ls} download] page.

. Extract the contents of the zip file to a directory on your computer, for
example, `C:\Program Files`. Use a short path (fewer than 30 characters) to
- avoid running into file path length limitations on Windows.
+ avoid running into file path length limitations on Windows.

endif::[]

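The extraction step above can be done from PowerShell 5 or later; the archive name below is a placeholder:

[source,powershell]
----
# Extract the Logstash zip into a short path to avoid Windows path length limits.
Expand-Archive -LiteralPath .\logstash-<version>.zip -DestinationPath 'C:\Program Files'
----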
@@ -634,8 +642,8 @@ that listens for {beats} input and sends the received events to the {es} output.

To configure {ls}:

- . Create a new {ls} pipeline configuration file called `demo-metrics-pipeline.conf`.
- If you installed {ls} as a deb or rpm package, create the file in the {ls}
+ . Create a new {ls} pipeline configuration file called `demo-metrics-pipeline.conf`.
+ If you installed {ls} as a deb or rpm package, create the file in the {ls}
`config` directory. The file must contain:
+
--
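The hunk stops at the `--` delimiter that opens the config listing, so the listing itself is not shown here. As a hedged sketch of the kind of content `demo-metrics-pipeline.conf` holds (default Beats port and a local {es} node assumed, not taken from this change):

[source,ruby]
----
# Listen for Beats connections on the default port and forward events to Elasticsearch.
input {
  beats {
    port => 5044
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
----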
@@ -672,7 +680,7 @@ output {
+
When you start {ls} with this pipeline configuration, {beats} events are routed
through {ls}, where you have full access to {ls} capabilities for collecting,
- enriching, and transforming data.
+ enriching, and transforming data.

[float]
==== Start {ls}
@@ -708,10 +716,10 @@ sudo service logstash start
bin\logstash.bat -f demo-metrics-pipeline.conf
----------------------------------------------------------------------

- TIP: If you receive JVM error messages, check your Java version as shown in
+ TIP: If you receive JVM error messages, check your Java version as shown in
{logstash-ref}/installing-logstash.html[Installing {ls}].

- {ls} starts listening for events from the {beats} input. Next you need to
+ {ls} starts listening for events from the {beats} input. Next you need to
configure {metricbeat} to send events to {ls}.

[float]
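For the TIP about JVM error messages, the usual first check is simply the active Java version on the machine running {ls}:

[source,sh]
----
# Confirm which Java runtime Logstash will pick up.
java -version
----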
@@ -722,7 +730,7 @@ configure {metricbeat} to send events to {ls}.
the {metricbeat} install directory, or `/etc/metricbeat` for rpm and deb.

Disable the `output.elasticsearch` section by commenting it out, then enable
- the `output.logstash` section by uncommenting it:
+ the `output.logstash` section by uncommenting it:

[source,yaml]
----
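The hunk ends at the opening `----` of the YAML listing, so the listing itself is not visible here. A typical shape for the edit described above (default Logstash port assumed, not taken from this change) is:

[source,yaml]
----
# Comment out the Elasticsearch output...
#output.elasticsearch:
#  hosts: ["localhost:9200"]

# ...and uncomment the Logstash output instead.
output.logstash:
  hosts: ["localhost:5044"]
----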
@@ -774,7 +782,7 @@ documentation.

To extract the path, add the following Grok filter between the input and output
sections in the {ls} config file that you created earlier:
-
+
[source,ruby]
----
filter {
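The hunk cuts off at `filter {`, so the Grok filter itself is not visible here. As an illustration only (the field name is an assumption, not the guide's actual configuration), a filter that extracts a filesystem path could look like:

[source,ruby]
----
filter {
  grok {
    # "cmdline" is a hypothetical field; use the event field that carries the command line.
    match => { "cmdline" => "^%{PATH:path}" }
  }
}
----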