[DOCS] Describe setup for monitoring logs #42655

Merged 7 commits on Jun 20, 2019
187 changes: 187 additions & 0 deletions docs/reference/monitoring/configuring-filebeat.asciidoc
@@ -0,0 +1,187 @@
[role="xpack"]
[testenv="basic"]
[[configuring-filebeat]]
=== Collecting {es} log data with {filebeat}

[subs="attributes"]
++++
<titleabbrev>Collecting log data with {filebeat}</titleabbrev>
++++

You can use {filebeat} to monitor the {es} log files, collect log events, and
ship them to the monitoring cluster. Your recent logs are visible on the
*Monitoring* page in {kib}.

//NOTE: The tagged regions are re-used in the Stack Overview.

. Verify that {es} is running and that the monitoring cluster is ready to
receive data from {filebeat}.
+
--
TIP: In production environments, we strongly recommend using a separate cluster
(referred to as the _monitoring cluster_) to store the data. Using a separate
monitoring cluster prevents production cluster outages from impacting your
ability to access your monitoring data. It also prevents monitoring activities
from impacting the performance of your production cluster. See
{stack-ov}/monitoring-production.html[Monitoring in a production environment].

--
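+
--
As a quick sanity check, you can query the health endpoint of both clusters.
This is only a sketch: the hosts and the `elastic` user are placeholders, and
`es-mon-1` matches the example monitoring cluster used later in this procedure.

[source,sh]
----------------------------------
# Production cluster: confirm that the node responds.
curl -u elastic "http://localhost:9200/_cluster/health?pretty"

# Monitoring cluster: confirm that it is reachable and ready to receive data.
curl -u elastic "http://es-mon-1:9200/_cluster/health?pretty"
----------------------------------
--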

. Enable the collection of monitoring data on your cluster.
+
--
include::configuring-metricbeat.asciidoc[tag=enable-collection]

For more information, see <<monitoring-settings>> and <<cluster-update-settings>>.
--
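+
--
If you prefer to see what that change looks like on the wire, the collection
setting can be toggled through the cluster settings API. A minimal sketch,
assuming the production cluster is reachable on `localhost:9200`:

[source,sh]
----------------------------------
# Persistently enable collection of monitoring data on the production cluster.
curl -X PUT "http://localhost:9200/_cluster/settings" \
  -H 'Content-Type: application/json' \
  -d '{"persistent": {"xpack.monitoring.collection.enabled": true}}'
----------------------------------
--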

. Identify which logs you want to monitor.
Contributor

Should we include any guidance on whether to point Filebeat at unstructured versus structured ES logs?

Contributor

To be clearer, this is about whether we should explicitly tell users to use the *.json logs rather than the free-text logs. However, I am not sure whether there is an advantage from the Filebeat perspective. @ycombinator, do you know?

Contributor

Good point, @cachedout.

We should require users to use *.json logs (as opposed to plaintext logs). The JSON logs are the only ones guaranteed to contain the cluster_uuid field, without which the log lines won't be shown in the correct context in the Stack Monitoring UI.

Contributor Author

Thanks for the clarification! I've added tips to the steps related to choosing the logs and configuring the Filebeat module.

+
--
The {filebeat} {es} module can handle
{stack-ov}/audit-log-output.html[audit logs],
{ref}/logging.html#deprecation-logging[deprecation logs],
{ref}/gc-logging.html[gc logs], {ref}/logging.html[server logs], and
{ref}/index-modules-slowlog.html[slow logs].
For more information about the location of your {es} logs, see the
{ref}/path-settings.html[path.logs] setting.

IMPORTANT: If there are both structured (`*.json`) and unstructured (plain text)
versions of the logs, you must use the structured logs. Otherwise, they might
not appear in the appropriate context in {kib}.

--
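+
--
For example, on a package-based installation the structured logs usually live
under `/var/log/elasticsearch` and their file names are prefixed with the
cluster name. Treat the path below as a placeholder and substitute your own
`path.logs` value:

[source,sh]
----------------------------------
# List the structured (JSON) logs that the Filebeat module can ingest.
ls /var/log/elasticsearch/*.json
----------------------------------
--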

. {filebeat-ref}/filebeat-installation.html[Install {filebeat}] on the {es}
nodes that contain logs that you want to monitor.

. Identify where to send the log data.
+
--
// tag::output-elasticsearch[]
For example, specify {es} output information for your monitoring cluster in
the {filebeat} configuration file (`filebeat.yml`):

[source,yaml]
----------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["http://es-mon-1:9200", "http://es-mon-2:9200"] <1>

  # Optional protocol and basic auth credentials.
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
----------------------------------
<1> In this example, the data is stored on a monitoring cluster with nodes
`es-mon-1` and `es-mon-2`.

If you configured the monitoring cluster to use encrypted communications, you
must access it via HTTPS. For example, use a `hosts` setting like
`https://es-mon-1:9200`.

IMPORTANT: The {es} {monitor-features} use ingest pipelines; therefore, the
Contributor

It might be worth considering adding a link to ingest pipelines so that an explanation is at hand.
https://www.elastic.co/guide/en/elasticsearch/client/net-api/6.x/pipelines.html

Contributor Author

Thanks, I added a link to the Elasticsearch ingest node page.

cluster that stores the monitoring data must have at least one
<<ingest,ingest node>>.

If {es} {security-features} are enabled on the monitoring cluster, you must
provide a valid user ID and password so that {filebeat} can send metrics
successfully.

For more information about these configuration options, see
{filebeat-ref}/elasticsearch-output.html[Configure the {es} output].
// end::output-elasticsearch[]
--
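+
--
Rather than hard-coding the password in `filebeat.yml`, you can keep it in the
{filebeat-ref}/keystore.html[secrets keystore] and reference it from the
configuration. A brief sketch; the `ES_PWD` key name is only an example:

[source,sh]
----------------------------------
# Create the keystore once, then add a key that holds the monitoring password.
filebeat keystore create
filebeat keystore add ES_PWD
----------------------------------

The stored value can then be referenced in the {es} output as
`password: "${ES_PWD}"`.
--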

. Optional: Identify where to visualize the data.
+
--
// tag::setup-kibana[]
{filebeat} provides example {kib} dashboards, visualizations, and searches. To
load the dashboards into the appropriate {kib} instance, specify the
`setup.kibana` information in the {filebeat} configuration file
(`filebeat.yml`) on each node:

[source,yaml]
----------------------------------
setup.kibana:
  host: "localhost:5601"
  #username: "my_kibana_user"
  #password: "YOUR_PASSWORD"
----------------------------------

TIP: In production environments, we strongly recommend using a dedicated {kib}
instance for your monitoring cluster.

If {security-features} are enabled, you must provide a valid user ID and
password so that {filebeat} can connect to {kib}:

.. Create a user on the monitoring cluster that has the
{stack-ov}/built-in-roles.html[`kibana_user` built-in role] or equivalent
privileges.

.. Add the `username` and `password` settings to the {es} output information in
the {filebeat} configuration file. The example shows a hard-coded password, but
you should store sensitive values in the
{filebeat-ref}/keystore.html[secrets keystore].

See {filebeat-ref}/setup-kibana-endpoint.html[Configure the {kib} endpoint].

// end::setup-kibana[]
--
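+
--
The user from the first sub-step can be created with the {es} security API on
the monitoring cluster. This is only a sketch: the user name and password are
placeholders, and `kibana_user` is the built-in role mentioned above.

[source,sh]
----------------------------------
# Create a user that Filebeat setup can use to talk to Kibana.
curl -X POST "http://es-mon-1:9200/_security/user/filebeat_setup_user" \
  -u elastic -H 'Content-Type: application/json' \
  -d '{"password": "YOUR_PASSWORD", "roles": ["kibana_user"]}'
----------------------------------
--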

. Enable the {es} module and set up the initial {filebeat} environment on each
node.
+
--
// tag::enable-es-module[]
For example:

["source","sh",subs="attributes,callouts"]
----------------------------------------------------------------------
filebeat modules enable elasticsearch
filebeat setup -e
----------------------------------------------------------------------

For more information, see
{filebeat-ref}/filebeat-module-elasticsearch.html[{es} module].

// end::enable-es-module[]
--
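+
--
To confirm that the module is enabled, you can list the modules that {filebeat}
knows about; `elasticsearch` should now appear in the enabled section:

[source,sh]
----------------------------------
filebeat modules list
----------------------------------
--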

. Configure the {es} module in {filebeat} on each node.
+
--
// tag::configure-es-module[]
If the logs that you want to monitor aren't in the default location, set the
appropriate path variables in the `modules.d/elasticsearch.yml` file. See
{filebeat-ref}/filebeat-module-elasticsearch.html#configuring-elasticsearch-module[Configure the {es} module].

IMPORTANT: If there are JSON logs, configure the `var.paths` settings to point
to them instead of the plain text logs.

// end::configure-es-module[]
--
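+
--
For reference, a sketch of what `modules.d/elasticsearch.yml` might look like
when the JSON logs are not in the default location. The paths below are
placeholders, and the exact file names depend on your {es} version and cluster
name:

[source,yaml]
----------------------------------
- module: elasticsearch
  server:
    enabled: true
    var.paths:
      # Point at the structured server logs, not the plain text ones.
      - /path/to/logs/*_server.json
  deprecation:
    enabled: true
    var.paths:
      - /path/to/logs/*_deprecation.json
  slowlog:
    enabled: true
    var.paths:
      - /path/to/logs/*_index_search_slowlog.json
      - /path/to/logs/*_index_indexing_slowlog.json
----------------------------------
--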

. {filebeat-ref}/filebeat-starting.html[Start {filebeat}] on each node.
+
--
NOTE: Depending on how you’ve installed {filebeat}, you might see errors related
to file ownership or permissions when you try to run {filebeat} modules. See
{beats-ref}/config-file-permissions.html[Config file ownership and permissions].

--
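+
--
For example, on a systemd-based package installation you might run it as a
service, while with the tar.gz archive you can run the binary in the
foreground. Both commands below are illustrative; use the form that matches
your installation:

[source,sh]
----------------------------------
# deb/rpm package (systemd):
sudo systemctl start filebeat

# tar.gz archive, from the Filebeat directory (foreground, logs to stderr):
./filebeat -e
----------------------------------
--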

. Check whether the appropriate indices exist on the monitoring cluster.
+
--
For example, use the {ref}/cat-indices.html[cat indices] command to verify
that there are new `filebeat-*` indices.

TIP: If you want to use the *Monitoring* UI in {kib}, there must also be
`.monitoring-*` indices. Those indices are generated when you collect metrics
about {stack} products. For example, see <<configuring-metricbeat>>.

--
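+
--
A sketch of that check, assuming the monitoring cluster from the earlier output
example:

[source,sh]
----------------------------------
# New filebeat-* indices should appear once log events are flowing.
curl "http://es-mon-1:9200/_cat/indices/filebeat-*?v"

# If metrics collection is also configured, .monitoring-* indices exist too.
curl "http://es-mon-1:9200/_cat/indices/.monitoring-*?v"
----------------------------------
--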

. {kibana-ref}/monitoring-data.html[View the monitoring data in {kib}].
3 changes: 3 additions & 0 deletions docs/reference/monitoring/configuring-monitoring.asciidoc
@@ -12,9 +12,12 @@ methods to collect metrics about {es}:
* <<collecting-monitoring-data>>
* <<configuring-metricbeat>>

You can also <<configuring-filebeat,use {filebeat} to collect {es} logs>>.

To learn about monitoring in general, see
{stack-ov}/xpack-monitoring.html[Monitoring the {stack}].

include::collecting-monitoring-data.asciidoc[]
include::configuring-metricbeat.asciidoc[]
include::configuring-filebeat.asciidoc[]
include::indices.asciidoc[]