Memory leak upon partial TransportShardBulkAction failure #27300
Comments
It looks like this relates to Log4j's ReusableLogEventFactory, which keeps a MutableLogEvent in a thread-local.
I tried adding -Dlog4j2.enable.threadlocals=false to ES's jvm.options, and since then bulk exceptions no longer retain huge amounts of memory, so I think this is a way to work around the problem (see the snippet below).
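For reference, the workaround above amounts to adding the following line to the jvm.options file shipped with Elasticsearch (the exact path depends on how it was installed):

```
# Work around the leak by disabling Log4j 2's thread-local reuse of log events
-Dlog4j2.enable.threadlocals=false
```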
This Log4j reference, Garbage-free Steady State Logging, explains how this kind of memory retention can occur.
* LOGGING: Upgrade to Log4J 2.11.1
* Upgrade to `2.11.1` to fix memory leaks in the slow logger when logging large requests
* This was caused by a bug in Log4J (https://issues.apache.org/jira/browse/LOG4J2-2269) and is fixed in `2.11.1` via https://git-wip-us.apache.org/repos/asf?p=logging-log4j2.git;h=9496c0c
* Fixes #32537
* Fixes #27300
Describe the feature:
Elasticsearch version (`bin/elasticsearch --version`): 5.3.2 - 5.6.3
Plugins installed: [None]
JVM version (`java -version`): 1.8.0_77-b03
OS version (`uname -a` if on a Unix-like system): Linux SVR14982HW1288 2.6.32-642.6.2.el6.x86_64 #1 SMP Wed Oct 26 06:52:09 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
Description of the problem including expected versus actual behavior:
A production cluster running version 5.3.2 was experiencing very high heap usage after bulk updating some documents. The search load was very light and the cluster was almost idle, yet old-generation GC could not reclaim memory even after the bulk requests had stopped. After some investigation, Log4j appears to be the culprit, as it holds a strong reference to the BulkShardRequest object whenever an exception is thrown during bulk request execution.
The problem looks quite similar to issue #23798, but I can reproduce it on the latest stable version, 5.6.3, so the root cause could be different.
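To make the mechanism concrete, here is a minimal sketch of the retention pattern described above. The class and field names are made-up stand-ins, not Log4j's actual ReusableLogEventFactory/MutableLogEvent internals: a reusable, thread-local event keeps the parameters of the last message logged on that thread strongly reachable.

```java
/**
 * Simplified illustration (NOT Log4j source) of how a reusable thread-local
 * log event can pin the last logged parameter, e.g. a large request object.
 */
public class ThreadLocalEventRetention {

    // Stand-in for a mutable, reusable log event.
    static final class ReusableEvent {
        Object[] parameters; // still referenced after the log call completes
    }

    // Stand-in for the per-thread event cache used when thread-locals are enabled.
    static final ThreadLocal<ReusableEvent> EVENT =
            ThreadLocal.withInitial(ReusableEvent::new);

    static void log(String message, Object... params) {
        ReusableEvent event = EVENT.get();
        event.parameters = params; // parameters are "borrowed" by the reused event
        // ... the event would be handed to appenders here ...
        // If the parameters are not cleared afterwards, they stay strongly
        // reachable from the thread-local until the next log call on this thread.
    }

    public static void main(String[] args) {
        // Pretend this is a large bulk request payload.
        byte[] hugePayload = new byte[64 * 1024 * 1024];
        log("failed to execute bulk item", (Object) hugePayload);

        hugePayload = null;
        System.gc();
        // The 64 MB array is still reachable via EVENT -> parameters[0],
        // which matches the shape of the retention seen in the heap dump.
        System.out.println("still referenced: " + (EVENT.get().parameters[0] != null));
    }
}
```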
Steps to reproduce:
Update a batch of documents with the bulk API, but deliberately make some items fail, for example by including a couple of requests with a non-existent doc_id or a wrong field type (see the sketch below). In the ES logs, DEBUG messages appear showing "document missing" or mapper parsing exceptions. Heap usage then increases significantly, roughly in proportion to the size of a single bulk request. Dumping the heap and analyzing it with MAT shows the BulkShardRequest object being referenced by Log4j's ParameterizedMessage.
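A hedged reproduction along these lines might look like the following; the index name, type, ids, and field values are made up, and any bulk update that targets a missing id should produce the same kind of DEBUG failure:

```sh
# Bulk update in which one item targets an id that does not exist, so the data
# node logs a "document missing" failure for that item at DEBUG level.
cat > bulk.ndjson <<'EOF'
{ "update": { "_index": "test", "_type": "doc", "_id": "1" } }
{ "doc": { "field1": "some reasonably large payload ..." } }
{ "update": { "_index": "test", "_type": "doc", "_id": "missing-id" } }
{ "doc": { "field1": "value" } }
EOF

curl -s -H 'Content-Type: application/x-ndjson' \
     -XPOST 'http://localhost:9200/_bulk' --data-binary @bulk.ndjson
```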
The only way I can reclaim the memory is to issue a small bulk request that triggers another exception; in that case the ParameterizedMessage object references the new request, which has a small memory footprint.
Below are the sample heap dump stats for our production cluster:
[MAT heap dump screenshot]