Make sure swapping is disabled for Elasticsearch #206
Note: at the Elasticsearch level we can use the Cluster Nodes Stats API to learn whether it is swapping.
The relevant OpenShift doc might be here: Disabling Swap Memory.
OpenShift/Kubernetes don't provide a way to specify container swappiness, and I wouldn't expect that to change soon. We can certainly advise disabling swap on nodes designated to run ES. We could also provide a diagnostic that queries ES's stats API to warn if it is swapping.
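Such a diagnostic could simply inspect the `os.swap` section of a Nodes Stats response. A minimal sketch, assuming the 2.x Nodes Stats field layout; the node id, node name, and byte values in the sample response are purely illustrative:

```python
import json

# Illustrative excerpt of a _nodes/stats response (hypothetical node id and values).
SAMPLE_STATS = json.loads("""
{
  "nodes": {
    "abc123": {
      "name": "logging-es-1",
      "os": {
        "swap": {
          "total_in_bytes": 2147483648,
          "used_in_bytes": 1048576,
          "free_in_bytes": 2146435072
        }
      }
    }
  }
}
""")

def swapping_nodes(stats):
    """Return the names of nodes whose OS stats report any swap in use."""
    warnings = []
    for node in stats.get("nodes", {}).values():
        swap = node.get("os", {}).get("swap", {})
        if swap.get("used_in_bytes", 0) > 0:
            warnings.append(node.get("name", "<unknown>"))
    return warnings

print(swapping_nodes(SAMPLE_STATS))  # -> ['logging-es-1']
```

A real diagnostic would fetch the JSON from the cluster's `_nodes/stats` endpoint instead of a hard-coded sample, but the warning logic would be the same.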
Couldn't we add the option described at https://www.elastic.co/guide/en/elasticsearch/reference/2.3/setup-configuration.html?
@ewolinetz I think we can, but as discussed in the official ES doc, this option may not take effect for various reasons. However, if we can make it work in our case then I am all for it!
@lukas-vlcek, we run with this in our production environment. Works great. Which official ES doc are you referring to? Thanks!
@portante Check the mlockall section in Memory Settings. It mentions some cases where the
@lukas-vlcek, okay, good; that is all about runtime setup, which it seems we can take care of.
@portante yes, that is why I said above:
But if we can be hit by container-level swapping, then I am not sure whether we can learn about that or do anything about it.
Does this merged PR openshift/openshift-ansible#3884 disable swap for good, or only temporarily? @jcantrill I can only see that this PR disables swap. On top of that, I think we should go ahead and explicitly try to disable swap in the Elasticsearch configuration. Notice that starting with ES 5.x the memory lock is applied as a bootstrap check:
If a node fails this check, it does not start.
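For reference, the setting was renamed between the versions discussed here; a sketch of the relevant `elasticsearch.yml` line for each:

```yaml
# ES 2.x:
bootstrap.mlockall: true

# ES 5.x and later (renamed; enforced as a bootstrap check in production mode):
bootstrap.memory_lock: true
```

Note that for the lock to actually succeed, the process (or container) also needs an adequate `memlock` ulimit, which is one of the runtime-setup cases the ES doc warns about.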
@lukas-vlcek, why do we need to worry about swapping for ES if ES can ask the JVM to lock the heap in memory to prevent it from swapping?
@portante we should be using memory lock, no question about that. The only thing I am not sure about is how host-kernel swapping of the container fits in, whether we need to worry about it at all, and what we can do to make sure it does not kick in (or, in the worst case, how we can learn that it did). On the other hand, that would probably be addressed at a different level. So I will open a PR to close this ticket (if one hasn't been created already).
Also, shouldn't we make sure the Xms and Xmx JVM options are not different? Currently, each can be different. See the recommendation here: https://www.elastic.co/guide/en/elasticsearch/reference/2.4/setup-configuration.html:
"It is recommended to set the min and max memory to the same value, and enable mlockall."
Setting min and max heap to be the same is good practice in general. It has just been a minor detail that no one has addressed.
--
Jeff Cantrill
Senior Software Engineer, Red Hat Engineering
OpenShift Integration Services
Red Hat, Inc.
Office: 703-748-4420 | 866-546-8970 ext. 8162420
[email protected]
http://www.redhat.com
@lukas-vlcek ideally yes; however, that might make it harder for ES to start in the scenario where the container is placed on a node where it cannot receive its requested/maximum JVM memory.
IMO, that is fine. It means the node is not sized as recommended, and failing early avoids issues later. Users can always adjust the setting down to a value that will allow it to fit on the node.
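The Xms/Xmx concern above could be caught early with a small sanity check on the JVM options string before starting the node. A hedged sketch; the helper name and sample option strings are illustrative, not the actual `run.sh` logic:

```python
import re

def heap_opts_match(java_opts):
    """Return True when -Xms and -Xmx in a JVM options string specify the same size."""
    xms = re.search(r"-Xms(\S+)", java_opts)
    xmx = re.search(r"-Xmx(\S+)", java_opts)
    return bool(xms and xmx and xms.group(1) == xmx.group(1))

print(heap_opts_match("-Xms4g -Xmx4g"))    # -> True
print(heap_opts_match("-Xms512m -Xmx4g"))  # -> False
```

A startup script could log a warning (or refuse to start, mirroring the ES 5.x bootstrap-check behavior) when this returns False.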
Well, the thing is that if we use memory lock and
It will mean that users who just want to try it out, or run tests against it on typical (under-resourced) test nodes, will have to know up front about the default memory requirement and how to adjust it. That doesn't mean demanding all the memory up front is a bad idea; it's just something to be aware of.
@sosiouxme @ewolinetz @jcantrill let's move this discussion to #383
#382 has already been closed, so I think we can close this one too?
AFAIK this will be handled at the Kubernetes/OpenShift level by disabling swap for all containers.
Especially for performance reasons, we need to make sure Elasticsearch is not subject to swapping.
We need to focus on two levels here: the container level (docker run [OPTIONS]) and the Elasticsearch level.
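At the container level, the knobs mentioned in this thread map onto `docker run` flags. A sketch, assuming an ES 2.x image; the image tag and memory sizes are illustrative:

```shell
# Keep the ES container from swapping:
# --memory-swap equal to --memory leaves no swap quota for the container;
# --memory-swappiness=0 asks the kernel to avoid swapping its pages;
# --ulimit memlock=-1:-1 lets bootstrap.mlockall / bootstrap.memory_lock succeed.
docker run \
  --memory=8g --memory-swap=8g \
  --memory-swappiness=0 \
  --ulimit memlock=-1:-1 \
  elasticsearch:2.4
```

On OpenShift/Kubernetes these flags are not directly exposed (as noted above), which is why the thread falls back to disabling swap on the nodes themselves.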