Dockerized Elasticsearch instance crashes when receiving request #26198


Closed
algestam opened this issue Aug 14, 2017 · 14 comments
Assignees: jasontedor
Labels: >bug, Pioneer Program, :Search/Search (Search-related issues that do not fall into other categories)

Comments

@algestam

algestam commented Aug 14, 2017

Elasticsearch version (bin/elasticsearch --version):

Version: 6.0.0-beta1, Build: 896afa4/2017-08-03T23:14:26.258Z, JVM: 1.8.0_144

Plugins installed: []

JVM version (java -version):

java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

OS version (uname -a if on a Unix-like system):

Linux 816fd3d99829 4.4.0-91-generic #114-Ubuntu SMP Tue Aug 8 11:56:56 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux

Description of the problem including expected versus actual behavior:

I'm running Elasticsearch within a Docker container (Docker version 17.05.0-ce, build 89658be). I have used older versions up to Elasticsearch 5.5.1 with the same setup without any problems.

When testing Elasticsearch 6.0.0-beta1 I have run into an issue that I haven't seen before: Elasticsearch crashes when receiving a request.

When the Elasticsearch container starts, the node comes up and its health looks fine in the logs: they state Cluster health status changed from [RED] to [YELLOW], and only a single Elasticsearch instance is running.
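
The same status can also be checked directly via the cluster health API, for example:

user@host:/elasticsearch# curl localhost:9200/_cluster/health?pretty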

Running a simple API request against the Elasticsearch instance works fine:

user@host:/elasticsearch# curl localhost:9200/
{
  "name" : "dvphuFi",
  "cluster_name" : "elasticsearch-logs",
  "cluster_uuid" : "k8QHY2SRTsyZn8sLo5vNRw",
  "version" : {
    "number" : "6.0.0-beta1",
    "build_hash" : "896afa4",
    "build_date" : "2017-08-03T23:14:26.258Z",
    "build_snapshot" : false,
    "lucene_version" : "7.0.0",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}

However, running any query will cause the Elasticsearch instance to crash:

user@host:/elasticsearch# curl localhost:9200/_search?q=test
curl: (52) Empty reply from server

The following steps were taken to get the system working again after the error started happening (the corresponding commands are sketched after the list):

  1. Stop, remove and restart the docker image -> Error still occurred
  2. Restarting the Docker daemon (via service docker restart) -> Error still occurred
  3. Restarting the Linux host -> Error still occurred
  4. Deleting the Elasticsearch data dir -> System works again, but data needs to be re-indexed
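
Roughly, in commands (the container name here is just a placeholder):

docker stop elasticsearch && docker rm elasticsearch   # step 1: stop and remove the container, then start it again
service docker restart                                 # step 2: restart the Docker daemon
reboot                                                 # step 3: restart the Linux host
rm -rf /data/elasticsearch                             # step 4: delete the Elasticsearch data dir (path.data), then start fresh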

More logs will be attached as a comment to this issue.

Below is an excerpt from the logs:

[2017-08-14T11:20:27,152][INFO ][o.e.n.Node               ] [] initializing ...
[2017-08-14T11:20:27,257][INFO ][o.e.e.NodeEnvironment    ] [dvphuFi] using [1] data paths, mounts [[/data (/dev/sda1)]], net usable_space [14.5gb], net total_space [31.3gb], types [ext4]
[2017-08-14T11:20:27,257][INFO ][o.e.e.NodeEnvironment    ] [dvphuFi] heap size [3.8gb], compressed ordinary object pointers [true]
[2017-08-14T11:20:27,591][INFO ][o.e.n.Node               ] node name [dvphuFi] derived from node ID [dvphuFiRT8aEJicgCiXprg]; set [node.name] to override
[2017-08-14T11:20:27,591][INFO ][o.e.n.Node               ] version[6.0.0-beta1], pid[14], build[896afa4/2017-08-03T23:14:26.258Z], OS[Linux/4.4.0-91-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2017-08-14T11:20:27,592][INFO ][o.e.n.Node               ] JVM arguments [-Xms4g, -Xmx4g, -Dlog4j2.disable.jmx=true, -Des.path.home=/elasticsearch, -Des.path.conf=/elasticsearch/config]
[2017-08-14T11:20:27,592][WARN ][o.e.n.Node               ] version [6.0.0-beta1] is a pre-release version of Elasticsearch and is not suitable for production
[2017-08-14T11:20:28,200][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [aggs-matrix-stats]
[2017-08-14T11:20:28,200][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [analysis-common]
[2017-08-14T11:20:28,201][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [ingest-common]
[2017-08-14T11:20:28,201][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [lang-expression]
[2017-08-14T11:20:28,201][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [lang-mustache]
[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [lang-painless]
[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [parent-join]
[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [percolator]
[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [reindex]
[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [repository-url]
[2017-08-14T11:20:28,202][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [transport-netty4]
[2017-08-14T11:20:28,203][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [tribe]
[2017-08-14T11:20:28,203][INFO ][o.e.p.PluginsService     ] [dvphuFi] no plugins loaded
[2017-08-14T11:20:29,526][INFO ][o.e.d.DiscoveryModule    ] [dvphuFi] using discovery type [zen]
[2017-08-14T11:20:30,304][INFO ][o.e.n.Node               ] initialized
[2017-08-14T11:20:30,305][INFO ][o.e.n.Node               ] [dvphuFi] starting ...
[2017-08-14T11:20:30,336][INFO ][i.n.u.i.PlatformDependent] Your platform does not provide complete low-level API for accessing direct buffers reliably. Unless explicitly requested, heap buffer will always be preferred to avoid potential system instability.
[2017-08-14T11:20:30,431][INFO ][o.e.t.TransportService   ] [dvphuFi] publish_address {172.18.0.10:9300}, bound_addresses {0.0.0.0:9300}
[2017-08-14T11:20:30,440][INFO ][o.e.b.BootstrapChecks    ] [dvphuFi] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-08-14T11:20:33,486][INFO ][o.e.c.s.MasterService    ] [dvphuFi] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{riLrClxxSlugm8Ks7PMClg}{172.18.0.10}{172.18.0.10:9300}
[2017-08-14T11:20:33,492][INFO ][o.e.c.s.ClusterApplierService] [dvphuFi] new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{riLrClxxSlugm8Ks7PMClg}{172.18.0.10}{172.18.0.10:9300}, reason: apply cluster state (from master [master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{riLrClxxSlugm8Ks7PMClg}{172.18.0.10}{172.18.0.10:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2017-08-14T11:20:33,526][INFO ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] publish_address {172.18.0.10:9200}, bound_addresses {0.0.0.0:9200}
[2017-08-14T11:20:33,526][INFO ][o.e.n.Node               ] [dvphuFi] started
[2017-08-14T11:20:35,029][INFO ][o.e.g.GatewayService     ] [dvphuFi] recovered [125] indices into cluster_state
[2017-08-14T11:20:45,558][INFO ][o.e.c.r.a.AllocationService] [dvphuFi] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-logs-2017.08.09][2], [logstash-logs-2017.08.09][3]] ...]).
[2017-08-14T11:31:18,825][ERROR][o.e.t.n.Netty4Utils      ] fatal error on the network layer
        at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)
        at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:81)
        at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
        at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
        at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
        at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
        at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
        at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
        at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
        at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
        at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
        at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
        at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
        at java.lang.Thread.run(Thread.java:748)
[2017-08-14T11:31:18,839][WARN ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] caught exception while handling client http traffic, closing connection [id: 0x423a8b89, L:/127.0.0.1:9200 - R:/127.0.0.1:44298]
java.lang.StackOverflowError: null
        at com.carrotsearch.hppc.ObjectObjectHashMap.<init>(ObjectObjectHashMap.java:123) ~[hppc-0.7.1.jar:?]
...
...
...

Full log is attached below.

Steps to reproduce:

Please include a minimal but complete recreation of the problem, including
(e.g.) index creation, mappings, settings, query etc. The easier you make it for
us to reproduce, the more likely it is that somebody will take the time to look at it.

  1. Start and run Elasticsearch within a Docker container
  2. Run a query against the Elasticsearch instance, such as /_search?q=test (see the command sketch after this list)
  3. Elasticsearch crashes with a stacktrace in the logs (see below)
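
For example, roughly (the image name, heap size and data path here are placeholders):

docker build -t my-elasticsearch:6.0.0-beta1 .
docker run -d --name es -p 9200:9200 -e JVM_HEAP_GB=4 -v /path/to/data:/data my-elasticsearch:6.0.0-beta1
curl localhost:9200/_search?q=test   # node dies, curl reports (52) Empty reply from server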

Provide logs (if relevant):

Logs will be added as a comment.

@algestam
Author

Full log:
elasticsearch_log.txt

@jasontedor jasontedor self-assigned this Aug 14, 2017
@jasontedor
Member

Can you clarify one thing? It seems that you are using your own Docker image and not the official Docker image provided by Elastic. Is that correct?

@algestam
Author

Yes, that's correct. I'm using my own Docker image (which I have used without problems up until ES 5.5.1).

@jasontedor
Member

Can you share the Dockerfile?

@algestam
Author

Of course!

This is the Dockerfile:

FROM ubuntu:16.04

ENV ELASTICSEARCH_VERSION 6.0.0-beta1

# no tty in this container
ENV DEBIAN_FRONTEND noninteractive

# Update index and install packages
RUN apt-get update && apt-get install -y \
        apt-utils \
        software-properties-common \
        curl \
        nano \
        sudo \
        nmap

# Install JDK 8
RUN echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | debconf-set-selections && \
        add-apt-repository -y ppa:webupd8team/java && \
        apt-get update && \
        apt-get install -y oracle-java8-installer wget unzip tar && \
        rm -rf /var/lib/apt/lists/* && \
        rm -rf /var/cache/oracle-jdk8-installer

# Define JAVA variables
ENV JAVA_HOME /usr/lib/jvm/java-8-oracle
ENV JAVA /usr/bin/java

# Download and install Elasticsearch
RUN \
  cd / && \
  curl https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-$ELASTICSEARCH_VERSION.tar.gz | \
  tar xzf - && \
  mv elasticsearch-$ELASTICSEARCH_VERSION /elasticsearch

# Define mountable directories.
VOLUME ["/data"]

# Add elasticsearch config files
ADD docker/elasticsearch/config/elasticsearch.yml /elasticsearch/config/
ADD docker/elasticsearch/config/jvm.options /elasticsearch/config/

# Setup limits to enable mlock
ADD docker/elasticsearch/config/limits.conf /etc/security/

# Define working directory.
WORKDIR /elasticsearch

# Create user elasticsearch
RUN groupadd -g 1000 elasticsearch && useradd elasticsearch -u 1000 -g 1000

RUN set -ex && for path in data logs plugins config config/scripts; do \
  mkdir -p "$path"; \
  chown -R elasticsearch:elasticsearch "$path"; \
  done

# Add elasticsearch to path
ENV PATH=$PATH:/elasticsearch/bin

# Add start script
ADD docker/elasticsearch/config/start.sh /

# Define default command.
CMD ["/start.sh"]

# Expose ports.
#   - 9200: HTTP
#   - 9300: transport
EXPOSE 9200
EXPOSE 9300

elasticsearch.yml:

cluster.name: "elasticsearch-logs"
network.host: 0.0.0.0
path.data: /data/elasticsearch
path.logs: /logs/elasticsearch
bootstrap.memory_lock: true
discovery.zen.minimum_master_nodes: 1
node.max_local_storage_nodes: 1

jvm.options:

-Xms4g
-Xmx4g

limits.conf:

elasticsearch    soft    nproc   65535555
elasticsearch    hard    nproc   65553555
elasticsearch    soft    nofile  655355
elasticsearch    hard    nofile  655355
elasticsearch    soft    memlock unlimited
elasticsearch    hard    memlock unlimited

start.sh:

#!/bin/sh

# make sure that data and log dir exists
mkdir -p /data/elasticsearch
mkdir -p /logs/elasticsearch

# ...and that elasticsearch user can write to them
chown -R elasticsearch:elasticsearch /data/elasticsearch
chown -R elasticsearch:elasticsearch /logs/elasticsearch

# provision elasticsearch user
adduser elasticsearch sudo
chown -R elasticsearch /elasticsearch /data
echo '%sudo ALL=(ALL) NOPASSWD:ALL' >> /etc/sudoers

# allow for memlock
ulimit -l unlimited

# setting up java heap size
echo -Xms${JVM_HEAP_GB}g > /elasticsearch/config/jvm.options
echo -Xmx${JVM_HEAP_GB}g >> /elasticsearch/config/jvm.options

# Disabling JVM security manager exceptions
echo -Dlog4j2.disable.jmx=true >> /elasticsearch/config/jvm.options

# run
sudo -E -u elasticsearch /elasticsearch/bin/elasticsearch

@jasontedor
Member

This does not reproduce for me. What additional steps can you provide that reliably reproduce this? Additionally, are there any further lines of the stack trace available?

@jasontedor
Member

I think I know what the problem is. Here's the key: the constructor for ObjectObjectHashMap is not recursive, so the stack overflow is not occurring on a recursive call; it is overflowing on what is otherwise a shallow stack. I think that you're running on a system with limited resources and that the default thread stack size is too small. Please add -Xss1m to your jvm.options by adding the line:

echo -Xss1m >> /elasticsearch/config/jvm.options

to your start.sh (using >> so that the heap settings written earlier are not overwritten).
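
In the context of the start.sh above, the JVM options section would then look roughly like this:

# setting up java heap size
echo -Xms${JVM_HEAP_GB}g > /elasticsearch/config/jvm.options
echo -Xmx${JVM_HEAP_GB}g >> /elasticsearch/config/jvm.options

# larger thread stacks, to avoid StackOverflowError on deep (but non-recursive) call chains
echo -Xss1m >> /elasticsearch/config/jvm.options

# Disabling JVM security manager exceptions
echo -Dlog4j2.disable.jmx=true >> /elasticsearch/config/jvm.options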

I think that this is not an Elasticsearch bug and I am going to close this issue. Please let me know either way if this does not resolve the problem that you are encountering. If there is an Elasticsearch bug here, I will reopen the issue.

By the way, I think that you should try to use the entire jvm.options file that we ship with, aside from the heap settings.
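
One way to do that would be to keep the shipped config/jvm.options (instead of replacing it in the Dockerfile) and have start.sh only rewrite the heap lines, along these lines (an untested sketch):

# keep the jvm.options shipped in the tarball, only adjust the heap from the environment
sed -i \
  -e "s/^-Xms.*/-Xms${JVM_HEAP_GB}g/" \
  -e "s/^-Xmx.*/-Xmx${JVM_HEAP_GB}g/" \
  /elasticsearch/config/jvm.options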

@algestam
Author

Thanks for taking the time to look at this and for the suggestions on improving the Docker setup! I plan to start using the official images but haven't gotten there yet.

I haven't been able to reproduce this error from scratch again, but if I restart the container with the same data directory as when the bug was reported, the issue happens every time.

I've tried adding the -Xss1m option to the JVM options file, but the problem remains the same. I have also tried changing the image so that the jvm.options file shipped with Elasticsearch is used, but the problem remains the same.

When examining the logs I could see that, during the first startup after changing to the jvm.options shipped with ES, there were errors stating failed to list shard for shard_started on node, which seem to be caused by a missing file (/data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st).

Could it be that the ES data directory has become corrupt for some reason (e.g. limited resources, out of disk, a sudden restart) and that the node won't start up correctly after that?

Anyway, losing the existing data and re-indexing everything into a fresh data directory is a workable solution for me in case this happens again. However, if the problem is caused by the data dir becoming corrupt (for whatever reason), I think it would be better if ES either didn't start up at all or tried to recover the data (if possible). The situation here seems to be that ES starts up but fails when the first request comes in.

Here are logs for the scenario described above:

[2017-08-14T14:01:34,028][INFO ][o.e.n.Node               ] [] initializing ...
[2017-08-14T14:01:34,134][INFO ][o.e.e.NodeEnvironment    ] [dvphuFi] using [1] data paths, mounts [[/data (/dev/sda1)]], net usable_space [14.4gb], net total_space [31.3gb], types [ext4]
[2017-08-14T14:01:34,135][INFO ][o.e.e.NodeEnvironment    ] [dvphuFi] heap size [3.9gb], compressed ordinary object pointers [true]
[2017-08-14T14:01:34,457][INFO ][o.e.n.Node               ] node name [dvphuFi] derived from node ID [dvphuFiRT8aEJicgCiXprg]; set [node.name] to override
[2017-08-14T14:01:34,458][INFO ][o.e.n.Node               ] version[6.0.0-beta1], pid[18], build[896afa4/2017-08-03T23:14:26.258Z], OS[Linux/4.4.0-91-generic/amd64], JVM[Oracle Corporation/Java HotSpot(TM) 64-Bit Server VM/1.8.0_144/25.144-b01]
[2017-08-14T14:01:34,458][INFO ][o.e.n.Node               ] JVM arguments [-Xms4g, -Xmx4g, -XX:+UseConcMarkSweepGC, -XX:CMSInitiatingOccupancyFraction=75, -XX:+UseCMSInitiatingOccupancyOnly, -XX:+AlwaysPreTouch, -Xss1m, -Djava.awt.headless=true, -Dfile.encoding=UTF-8, -Djna.nosys=true, -XX:-OmitStackTraceInFastThrow, -Djdk.io.permissionsUseCanonicalPath=true, -Dio.netty.noUnsafe=true, -Dio.netty.noKeySetOptimization=true, -Dio.netty.recycler.maxCapacityPerThread=0, -Dlog4j.shutdownHookEnabled=false, -Dlog4j2.disable.jmx=true, -XX:+HeapDumpOnOutOfMemoryError, -Dlog4j2.disable.jmx=true, -Des.path.home=/elasticsearch, -Des.path.conf=/elasticsearch/config]
[2017-08-14T14:01:34,458][WARN ][o.e.n.Node               ] version [6.0.0-beta1] is a pre-release version of Elasticsearch and is not suitable for production
[2017-08-14T14:01:35,147][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [aggs-matrix-stats]
[2017-08-14T14:01:35,147][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [analysis-common]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [ingest-common]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [lang-expression]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [lang-mustache]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [lang-painless]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [parent-join]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [percolator]
[2017-08-14T14:01:35,148][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [reindex]
[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [repository-url]
[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [transport-netty4]
[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService     ] [dvphuFi] loaded module [tribe]
[2017-08-14T14:01:35,149][INFO ][o.e.p.PluginsService     ] [dvphuFi] no plugins loaded
[2017-08-14T14:01:36,291][INFO ][o.e.d.DiscoveryModule    ] [dvphuFi] using discovery type [zen]
[2017-08-14T14:01:37,208][INFO ][o.e.n.Node               ] initialized
[2017-08-14T14:01:37,208][INFO ][o.e.n.Node               ] [dvphuFi] starting ...
[2017-08-14T14:01:37,340][INFO ][o.e.t.TransportService   ] [dvphuFi] publish_address {172.18.0.12:9300}, bound_addresses {0.0.0.0:9300}
[2017-08-14T14:01:37,348][INFO ][o.e.b.BootstrapChecks    ] [dvphuFi] bound or publishing to a non-loopback or non-link-local address, enforcing bootstrap checks
[2017-08-14T14:01:40,418][INFO ][o.e.c.s.MasterService    ] [dvphuFi] zen-disco-elected-as-master ([0] nodes joined), reason: new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{UpdneAxRTKSslmuiTqQJ5g}{172.18.0.12}{172.18.0.12:9300}
[2017-08-14T14:01:40,423][INFO ][o.e.c.s.ClusterApplierService] [dvphuFi] new_master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{UpdneAxRTKSslmuiTqQJ5g}{172.18.0.12}{172.18.0.12:9300}, reason: apply cluster state (from master [master {dvphuFi}{dvphuFiRT8aEJicgCiXprg}{UpdneAxRTKSslmuiTqQJ5g}{172.18.0.12}{172.18.0.12:9300} committed version [1] source [zen-disco-elected-as-master ([0] nodes joined)]])
[2017-08-14T14:01:40,458][INFO ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] publish_address {172.18.0.12:9200}, bound_addresses {0.0.0.0:9200}
[2017-08-14T14:01:40,459][INFO ][o.e.n.Node               ] [dvphuFi] started
[2017-08-14T14:01:41,760][WARN ][o.e.g.GatewayAllocator$InternalPrimaryShardAllocator] [dvphuFi] [logstash-logs-2017.08.03][4]: failed to list shard for shard_started on node [dvphuFiRT8aEJicgCiXprg]
org.elasticsearch.action.FailedNodeException: Failed node [dvphuFiRT8aEJicgCiXprg]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.onFailure(TransportNodesAction.java:239) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction.access$200(TransportNodesAction.java:153) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$AsyncAction$1.handleException(TransportNodesAction.java:211) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleException(TransportService.java:1060) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.processException(TransportService.java:1164) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1142) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$7.onFailure(TransportService.java:661) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.onFailure(ThreadContext.java:623) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:39) [elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) [?:1.8.0_144]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) [?:1.8.0_144]
	at java.lang.Thread.run(Thread.java:748) [?:1.8.0_144]
Caused by: org.elasticsearch.transport.RemoteTransportException: [dvphuFi][172.18.0.12:9300][internal:gateway/local/started_shards[n]]
Caused by: org.elasticsearch.ElasticsearchException: failed to load started shards
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:171) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	... 3 more
Caused by: org.elasticsearch.ElasticsearchException: java.io.IOException: failed to read [id:165, legacy:false, file:/data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st]
	at org.elasticsearch.ExceptionsHelper.maybeThrowRuntimeAndSuppress(ExceptionsHelper.java:150) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:334) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:128) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	... 3 more
Caused by: java.io.IOException: failed to read [id:165, legacy:false, file:/data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st]
	at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:327) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:128) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	... 3 more
Caused by: java.nio.file.NoSuchFileException: /data/elasticsearch/nodes/0/indices/4xs7KSPsSymjyMiTEDdsIA/_state/state-165.st
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:86) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102) ~[?:?]
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107) ~[?:?]
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214) ~[?:?]
	at java.nio.file.Files.newByteChannel(Files.java:361) ~[?:1.8.0_144]
	at java.nio.file.Files.newByteChannel(Files.java:407) ~[?:1.8.0_144]
	at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77) ~[lucene-core-7.0.0-snapshot-00142c9.jar:7.0.0-snapshot-00142c9 00142c921322a92de5007be2a114893aaa072498 - jpountz - 2017-07-11 09:24:13]
	at org.elasticsearch.gateway.MetaDataStateFormat.read(MetaDataStateFormat.java:187) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.MetaDataStateFormat.loadLatestState(MetaDataStateFormat.java:322) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:128) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.gateway.TransportNodesListGatewayStartedShards.nodeOperation(TransportNodesListGatewayStartedShards.java:62) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction.nodeOperation(TransportNodesAction.java:140) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:262) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.support.nodes.TransportNodesAction$NodeTransportHandler.messageReceived(TransportNodesAction.java:258) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$7.doRun(TransportService.java:650) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:638) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	... 3 more
[2017-08-14T14:01:42,027][INFO ][o.e.g.GatewayService     ] [dvphuFi] recovered [125] indices into cluster_state
[2017-08-14T14:01:53,844][INFO ][o.e.c.r.a.AllocationService] [dvphuFi] Cluster health status changed from [RED] to [YELLOW] (reason: [shards started [[logstash-logs-2017.08.09][0], [logstash-logs-2017.08.09][3], [.kibana][0], [logstash-logs-2017.08.09][1]] ...]).
[2017-08-14T14:02:07,612][ERROR][o.e.t.n.Netty4Utils      ] fatal error on the network layer
	at org.elasticsearch.transport.netty4.Netty4Utils.maybeDie(Netty4Utils.java:179)
	at org.elasticsearch.http.netty4.Netty4HttpRequestHandler.exceptionCaught(Netty4HttpRequestHandler.java:81)
	at io.netty.channel.AbstractChannelHandlerContext.invokeExceptionCaught(AbstractChannelHandlerContext.java:285)
	at io.netty.channel.AbstractChannelHandlerContext.notifyHandlerException(AbstractChannelHandlerContext.java:850)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:364)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at org.elasticsearch.http.netty4.pipelining.HttpPipeliningHandler.channelRead(HttpPipeliningHandler.java:63)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
	at io.netty.handler.codec.MessageToMessageCodec.channelRead(MessageToMessageCodec.java:111)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:310)
	at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:284)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:86)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
	at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1334)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
	at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
	at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:926)
	at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:134)
	at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:644)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:544)
	at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:498)
	at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:458)
	at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:858)
	at java.lang.Thread.run(Thread.java:748)
[2017-08-14T14:02:07,628][WARN ][o.e.h.n.Netty4HttpServerTransport] [dvphuFi] caught exception while handling client http traffic, closing connection [id: 0x17bb1be3, L:/127.0.0.1:9200 - R:/127.0.0.1:56592]
java.lang.StackOverflowError: null
	at org.elasticsearch.common.util.concurrent.AbstractRefCounted.incRef(AbstractRefCounted.java:41) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.index.store.Store.incRef(Store.java:366) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.index.engine.Engine.acquireSearcher(Engine.java:504) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.index.shard.IndexShard.acquireSearcher(IndexShard.java:1111) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.search.SearchService.createSearchContext(SearchService.java:572) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.search.SearchService.canMatch(SearchService.java:905) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:431) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:428) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:644) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.access$000(TransportService.java:74) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:137) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:592) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:512) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:552) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchTransportService.sendCanMatch(SearchTransportService.java:114) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.executePhaseOnShard(CanMatchPreFilterSearchPhase.java:68) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.performPhaseOnShard(InitialSearchPhase.java:160) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:149) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:207) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardResult(InitialSearchPhase.java:190) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:46) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase$1.innerOnResponse(InitialSearchPhase.java:164) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchActionListener.onResponse(SearchActionListener.java:45) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchActionListener.onResponse(SearchActionListener.java:29) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.ActionListenerResponseHandler.handleResponse(ActionListenerResponseHandler.java:46) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$ContextRestoreResponseHandler.handleResponse(TransportService.java:1053) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.processResponse(TransportService.java:1127) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1117) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$DirectResponseChannel.sendResponse(TransportService.java:1106) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.DelegatingTransportChannel.sendResponse(DelegatingTransportChannel.java:60) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry$TransportChannelWrapper.sendResponse(RequestHandlerRegistry.java:108) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:432) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchTransportService$12.messageReceived(SearchTransportService.java:428) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:66) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendLocalRequest(TransportService.java:644) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.access$000(TransportService.java:74) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService$3.sendRequest(TransportService.java:137) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendRequestInternal(TransportService.java:592) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendRequest(TransportService.java:512) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.transport.TransportService.sendChildRequest(TransportService.java:552) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.SearchTransportService.sendCanMatch(SearchTransportService.java:114) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.CanMatchPreFilterSearchPhase.executePhaseOnShard(CanMatchPreFilterSearchPhase.java:68) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.performPhaseOnShard(InitialSearchPhase.java:160) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.maybeExecuteNext(InitialSearchPhase.java:149) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.successfulShardExecution(InitialSearchPhase.java:207) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.onShardResult(InitialSearchPhase.java:190) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase.access$000(InitialSearchPhase.java:46) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]
	at org.elasticsearch.action.search.InitialSearchPhase$1.innerOnResponse(InitialSearchPhase.java:164) ~[elasticsearch-6.0.0-beta1.jar:6.0.0-beta1]

@jasontedor
Member

Can you send me the data directory? (We can arrange a method to share privately if necessary).

@jasontedor
Member

jasontedor commented Aug 14, 2017

Also, can you share the output of cat /proc/1/limits | grep "Max stack size" or ulimit -s (from within the container)?

@algestam
Author

Thanks, unfortunately I am not able to give you access to the data directory as it contains log files with sensitive data. I will try to reproduce this with other data and, if I succeed, I will definitely share that directory with you.

Here is the output from cat /proc/1/limits | grep "Max stack size":

root@79b8a65639cc:/elasticsearch# cat /proc/1/limits | grep "Max stack size"
Max stack size            8388608              unlimited            bytes

ulimit -s:

root@79b8a65639cc:/elasticsearch# ulimit -s
8192

@jasontedor
Member

Okay, those look reasonable.

I understand about the data, so a reproduction will certainly help to get to the bottom of this one. I'm going to reopen this issue.

Meanwhile, I've marked you as eligible for the Pioneer Program.

@jasontedor jasontedor reopened this Aug 15, 2017
@colings86 colings86 added :Delivery/Packaging RPM and deb packaging, tar and zip archives, shell and batch scripts >bug labels Aug 17, 2017
@jasontedor jasontedor removed the :Delivery/Packaging RPM and deb packaging, tar and zip archives, shell and batch scripts label Aug 28, 2017
@jasontedor
Member

Thanks for the report @algestam. I have found the problem and opened #26484.

@algestam
Author

algestam commented Sep 3, 2017

Good news :) Thanks @jasontedor!

@colings86 colings86 added the :Search/Search Search-related issues that do not fall into other categories label Sep 13, 2017