
[DOCS] Describe setup for monitoring logs #42655


Merged: 7 commits, Jun 20, 2019

Conversation

lcawl
Contributor

@lcawl lcawl commented May 28, 2019

Related to elastic/stack-docs#348

This PR adds documentation related to configuring Filebeat to collect and send log data to the monitoring cluster.
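
For context, the setup these docs cover boils down to enabling Filebeat's `elasticsearch` module on each node and pointing the Filebeat output at the monitoring cluster. A minimal sketch, assuming a hypothetical monitoring host `es-mon-1` that is not taken from this PR:

```yaml
# filebeat.yml (sketch): collect Elasticsearch logs and ship them to a
# separate monitoring cluster
filebeat.config.modules:
  # load module configs from modules.d
  # (enable the module with: filebeat modules enable elasticsearch)
  path: ${path.config}/modules.d/*.yml

output.elasticsearch:
  # hypothetical monitoring cluster host; adjust to your environment
  hosts: ["http://es-mon-1:9200"]
```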

@elasticmachine
Collaborator

Pinging @elastic/es-core-features

@lcawl lcawl changed the title [DOCS] Adds placeholders for log monitoring config [DOCS] Describe setup for monitoring logs May 28, 2019
@lcawl lcawl marked this pull request as ready for review May 30, 2019 01:14
Contributor

@chrisronline chrisronline left a comment

Looks great so far! I added one comment.

One thing to consider is that technically users can configure Filebeat to send data to the production cluster and set up CCS, and the Monitoring UI will be able to read those indices. Do we want to include this in the docs?

For more information, see <<monitoring-settings>> and <<cluster-update-settings>>.
--

. Optional: Collect metrics about {es}. See <<configuring-metricbeat>> or
Contributor

Is this an attempt to convert users over to using Metricbeat (which I'm totally for, btw)? I feel it's a bit out of place considering this is a guide around Filebeat, not Metricbeat. cc @elastic/stack-monitoring for thoughts

Contributor

I do like the "cross sell" here but I also found it a bit out-of-place as step 3 while setting up logs collection. How about making this a note or tip at the bottom of the page?

Contributor Author

I had it in my brain somehow that the Monitoring UI wouldn't show up on the monitoring cluster until the .monitoring* indices existed, which is why I tested it that way. And it seemed to me that the Filebeat output went to filebeat* indices. That is, I needed to do something to get other metrics onto the monitoring cluster before any of these log-related pieces would appear. I'll retest without sending any other monitoring data to the monitoring cluster to see if I was wrong about all that.

Contributor

You're right. There are steps required to see logs within the Monitoring UI that are separate from these Filebeat steps, but maybe we can just link off to a doc about enabling monitoring, or leave this as-is but add a little more explanation of why it's necessary?

Contributor Author

Thanks for the feedback. I've moved this information to the second-last step.

. Optional: Collect metrics about {es}. See <<configuring-metricbeat>> or
<<collecting-monitoring-data>>.

. Identify which logs you want to monitor.
Contributor

Should we include any guidance on whether to point Filebeat at unstructured versus structured ES logs?

Contributor

To be clearer, this is about whether we should explicitly tell users to use the *.json logs versus the free-text logs. However, I am not sure if there is an advantage from the Filebeat perspective. @ycombinator do you know?

Contributor

Good point, @cachedout.

We should require users to use *.json logs (as opposed to plaintext logs). The JSON logs are the only ones guaranteed to contain the cluster_uuid field, without which the log lines won't be shown in the correct context in the Stack Monitoring UI.
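
As an illustration of that point, here is a sketch of the Filebeat `elasticsearch` module config pointed only at the structured JSON logs; the path is an assumption based on default Elasticsearch log locations, not something specified in this PR:

```yaml
# modules.d/elasticsearch.yml (sketch)
- module: elasticsearch
  server:
    enabled: true
    # read the JSON server logs rather than the plaintext logs, so events
    # carry fields such as cluster_uuid
    var.paths:
      - /var/log/elasticsearch/*_server.json
```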

Contributor Author

Thanks for the clarification! I've added tips to the steps related to choosing the logs and configuring the filebeat module.

@chrisronline
Contributor

Also, there is currently a limit of 10 logs visible in the Monitoring UI, but this is configurable via `xpack.monitoring.elasticsearch.logFetchCount`, up to a maximum of 50. I'm guessing we should document this?

Contributor

@szabosteve szabosteve left a comment

I have only three minor remarks; otherwise, it is LGTM.

Edit: two remarks. One was inaccurate.

++++

You can use {filebeat} to monitor the {es} log files, collect log events, and
ship them to the monitoring cluster. In 7.2 and later, your recent logs are
Contributor

Is this availability information really necessary? This page in this form will only exist in the 7.2 and later documentation, right? So it might be redundant to add this information.

Contributor Author

I think we've included that in monitoring instructions in the past because your monitoring cluster can be at a different version.release than your production cluster. But I think you're right that it's not really necessary.

must access it via HTTPS. For example, use a `hosts` setting like
`https://es-mon-1:9200`.

IMPORTANT: The {es} {monitor-features} use ingest pipelines, therefore the
Contributor

It might be worth considering adding a link to ingest pipelines to have an explanation at hand:
https://www.elastic.co/guide/en/elasticsearch/client/net-api/6.x/pipelines.html

Contributor Author

Thanks, I added a link to the Elasticsearch ingest node page.
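
For the HTTPS case shown in the snippet above, the Filebeat output for a secured monitoring cluster might look like the following sketch; the host, user, and certificate path are placeholders rather than values from this PR:

```yaml
# filebeat.yml output section (sketch): secured monitoring cluster
output.elasticsearch:
  hosts: ["https://es-mon-1:9200"]
  username: "remote_monitoring_user"                      # placeholder user
  password: "changeme"                                    # placeholder password
  ssl.certificate_authorities: ["/etc/filebeat/ca.crt"]   # placeholder CA path
```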

@jakelandis jakelandis removed the v7.2.0 label Jun 17, 2019
@lcawl
Contributor Author

lcawl commented Jun 17, 2019

@elasticmachine update branch

@lcawl
Contributor Author

lcawl commented Jun 17, 2019

One thing to consider is that technically users can configure Filebeat to send data to the production cluster and set up CCS, and the Monitoring UI will be able to read those indices. Do we want to include this in the docs?

@chrisronline Unless I'm mistaken, this seems like a configuration method that is broader than just the Filebeat scenario. Therefore, I've created elastic/stack-docs#378 to follow up on those instructions.
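
For reference, the alternative described above (Filebeat writing to the production cluster, with the monitoring cluster reading those indices over CCS) would rely on a remote-cluster definition roughly like the following sketch; the alias and seed address are placeholders, and the full instructions are deferred to elastic/stack-docs#378:

```yaml
# elasticsearch.yml on the monitoring cluster (sketch): register the
# production cluster as a remote for cross-cluster search
cluster:
  remote:
    production:
      seeds:
        - prod-node-1:9300   # placeholder transport address
```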

@lcawl
Contributor Author

lcawl commented Jun 17, 2019

Also, there is currently a limit of 10 logs visible in the Monitoring UI, but this is configurable via `xpack.monitoring.elasticsearch.logFetchCount`, up to a maximum of 50. I'm guessing we should document this?

I think this must be a Kibana setting, so I've opened a PR to add it there: elastic/kibana#39139
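
For anyone looking for where that setting lives, it goes in `kibana.yml`; a sketch using the maximum value mentioned above:

```yaml
# kibana.yml (sketch): raise the number of log entries shown in the Monitoring UI
xpack.monitoring.elasticsearch.logFetchCount: 50
```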

@lcawl lcawl merged commit 2e24f09 into elastic:master Jun 20, 2019
@lcawl lcawl deleted the monitor-logs branch June 20, 2019 14:34
@lcawl lcawl added v7.2.0 and removed v7.2.1 labels Jun 20, 2019