Add Elasticsearch memory_lock #382


Closed · wants to merge 1 commit

Conversation

lukas-vlcek (Contributor)

Closes #206

@lukas-vlcek (Contributor, Author)

Note that this applies to Elasticsearch 2.4.0 and newer. In earlier ES versions the setting was called bootstrap.mlockall.

See 2.4.0 breaking changes.
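
For reference, a minimal sketch of the renamed setting; the config path below is illustrative, not necessarily the one the image actually uses:

    # Sketch: enable memory locking in elasticsearch.yml.
    # /etc/elasticsearch/ is an assumed path, for illustration only.
    cat >> /etc/elasticsearch/elasticsearch.yml <<'EOF'
    bootstrap.memory_lock: true   # ES >= 2.4.0 name for the setting
    # bootstrap.mlockall: true    # equivalent setting on ES < 2.4.0
    EOF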

@ewolinetz (Contributor) left a comment
LGTM, what other repercussions might this have with ES running in a container?

@jcantrill (Contributor)

@lukas-vlcek Can you please also update the run.sh script to set min and max heap to be the same? This seems like a reasonable PR to append it to.
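
A minimal sketch of what that run.sh change could look like on ES 2.x, where the stock startup scripts honor ES_HEAP_SIZE and expand it to matching -Xms/-Xmx values; INSTANCE_RAM is an illustrative variable name, not the script's actual one:

    # Sketch for run.sh: one variable drives both min and max heap.
    # ES_HEAP_SIZE is read by the ES 2.x startup scripts and sets
    # -Xms and -Xmx to the same value.
    export ES_HEAP_SIZE="${INSTANCE_RAM:-512m}"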

@lukas-vlcek (Contributor, Author)

lukas-vlcek commented Apr 25, 2017

@jcantrill My take on Xms == Xmx is #389

@lukas-vlcek (Contributor, Author)

@jcantrill @ewolinetz @richm @portante

As far as I know, we should be able to rely on swap being disabled on the host going forward (via swapoff -a). In that case, the question is whether we need to bring this change into master. (That is why I created a separate PR for the Xms == Xmx change.)

IMO it does not hurt to bring this into master as long as we are on ES 2.x. However, once we migrate to ES 5.x we may need to revert this change due to a bootstrap check. I do not know whether that bootstrap check would fail in this case.
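
For context, disabling swap on the host as mentioned above would look roughly like this; the fstab edit is a sketch, and how the hosts are actually provisioned may differ:

    # Turn off all active swap immediately (needs root).
    swapoff -a
    # Sketch: comment out swap entries so swap stays off across reboots.
    sed -i.bak '/\sswap\s/ s/^/#/' /etc/fstab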

@jcantrill (Contributor)

[test]

@openshift-bot

Evaluated for aggregated logging test up to 4744ec8

@richm (Contributor)

richm commented Apr 25, 2017

Does this change require elasticsearch to run in privileged mode?

@portante (Contributor)

@richm, good catch. I would think so. @lukas-vlcek, did you try this out under openshift somewhere?

@openshift-bot

Aggregated Logging Test Results: SUCCESS (https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1321/) (Base Commit: e231410)

@jcantrill (Contributor)

[merge]

@openshift-bot

Evaluated for aggregated logging merge up to 4744ec8

@portante (Contributor)

@jcantrill, I don't think this should be merged.

@openshift-bot

Aggregated Logging Merge Results: FAILURE (https://ci.openshift.redhat.com/jenkins/job/test-origin-aggregated-logging/1325/) (Base Commit: 47f52c2)

@jcantrill (Contributor)

@portante What is the reason we wish to hold off merging this change?

@richm (Contributor)

richm commented Apr 27, 2017

@jcantrill Lukáš needs to do more testing to confirm that this actually works, and to determine how to configure the machine (e.g. ulimit, etc.) in order to make it work. I think in ES 2.x it will fail silently if the machine is not set up correctly, and in ES 5.x it will fail hard. @lukas-vlcek
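
For reference, the host configuration mlockall needs usually amounts to raising the memlock ulimit; a sketch using the standard Linux mechanisms, not verified against this deployment:

    # Allow unlimited locked memory for the current shell (needs privilege).
    ulimit -l unlimited
    # Persistent variant via /etc/security/limits.conf (sketch):
    #   elasticsearch soft memlock unlimited
    #   elasticsearch hard memlock unlimited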

@lukas-vlcek (Contributor, Author)

lukas-vlcek commented Apr 27, 2017

@jcantrill Added "Do not merge" label for now.

We really need to get to the bottom of this and deliver clear documentation (backed by testing) around this setting. When we started looking at this (#206) it was not known that OCP would decide to disable swap at the OS level, so we need to reflect the new situation now.

Even the assumption that it will fail hard with ES 5.x is one I am still not 100% sure about. There is a bootstrap check that fails if mlockall is required and ES is not able to lock the memory at startup. However, failed bootstrap checks are ignored if ES runs in development mode, i.e. if ES does not bind transport to an external interface (so does that apply if we run ES in a container bound to 0.0.0.0? See the sketch below).
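
To make the dev-vs-production distinction concrete, a sketch of the binding in question; ES 5.x treats a non-loopback transport bind as production mode, which is what makes failed bootstrap checks fatal (the snippet is illustrative):

    # Sketch: appended to elasticsearch.yml, this combination is exactly
    # the open question above on ES 5.x.
    cat >> elasticsearch.yml <<'EOF'
    network.host: 0.0.0.0         # non-loopback bind => production mode
    bootstrap.memory_lock: true   # bootstrap check fails hard if the JVM
                                  # cannot lock memory at startup
    EOF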

@portante (Contributor)

@jcantrill, I don't think this commit is necessary, given we should be running without swap for Elasticsearch, and we don't want to run a privileged container just so that ES can lock all memory.

@lukas-vlcek (Contributor, Author)

It makes sense to add/improve documentation around setting up and using the memory lock setting for ES, for users who have a reason to use it. But a separate ticket should be opened for that. Closing for now.
