OOM -> LockObtainFailedException -> All data lost #19576

Closed
DaveChapman opened this issue Jul 25, 2016 · 10 comments

@DaveChapman
Hi,

We experienced an issue today where we saw OOM errors, LockObtainFailedException exceptions, and some others (shown in the traces below). We restarted the node (this is a single-node cluster hosting a number of indices with 5 shards in each index). Following the restart ALL the data was gone from all indices, and a check of the file system showed less than 1MB in Elasticsearch's data directories (previously this would have been gigabytes of data).

I've used the title above as it looks very similar to this issue:
#12041
but I think it's also similar to this:
http://stackoverflow.com/questions/32494095/all-the-data-suddenly-removed-from-the-elasticsearch-cluster

Please find my responses to your standard questions below.

Thanks in advance for any help!

Elasticsearch version: 2.3.2

JVM version:
java version "1.8.0_91"
Java(TM) SE Runtime Environment (build 1.8.0_91-b14)
Java HotSpot(TM) 64-Bit Server VM (build 25.91-b14, mixed mode)

OS version: CentOS Linux release 7.2.1511 (Core)

Description of the problem including expected versus actual behavior:
After a restart due to out of memory, all index data is missing; the index directories are still present, but contain only _state, no data.

Provide logs (if relevant):
[WARN ][indices.breaker.request ] [request] New used memory 2179536984 [2gb] for data of [<reused_arrays>] would be larger than configured breaker: 2102132736 [1.9gb], breaking

[DEBUG][action.search ] [] [][4], node[9wlPuDpBSB6oGedoRiI7-Q], [P], v[4], s[STARTED], a[id=wtkeXgBiRv-o9DVnrbOGnw]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1eb229c6] lastShard [true]
RemoteTransportException[[][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<reused_arrays>] would be larger than limit of [2102132736/1.9gb]];
Caused by: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: CircuitBreakingException[[request] Data too large, data for [<reused_arrays>] would be larger than limit of [2102132736/1.9gb]];
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:409)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:113)
at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:366)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:378)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: CircuitBreakingException[[request] Data too large, data for [<reused_arrays>] would be larger than limit of [2102132736/1.9gb]]
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.circuitBreak(ChildMemoryCircuitBreaker.java:97)
at org.elasticsearch.common.breaker.ChildMemoryCircuitBreaker.addEstimateBytesAndMaybeBreak(ChildMemoryCircuitBreaker.java:147)
at org.elasticsearch.common.util.BigArrays.adjustBreaker(BigArrays.java:396)
at org.elasticsearch.common.util.BigArrays.validate(BigArrays.java:433)
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:458)
at org.elasticsearch.common.util.BigArrays.resize(BigArrays.java:475)
at org.elasticsearch.common.util.BigArrays.grow(BigArrays.java:489)
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.ensureCapacity(HyperLogLogPlusPlus.java:197)
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.collect(HyperLogLogPlusPlus.java:230)
at org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregator$DirectCollector.collect(CardinalityAggregator.java:203)
at org.elasticsearch.search.aggregations.LeafBucketCollector$3.collect(LeafBucketCollector.java:73)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectBucket(BucketsAggregator.java:72)
at org.elasticsearch.search.aggregations.bucket.range.RangeAggregator$1.collect(RangeAggregator.java:184)
at org.elasticsearch.search.aggregations.bucket.range.RangeAggregator$1.collect(RangeAggregator.java:138)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80)
at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator$2.collect(GlobalOrdinalsStringTermsAggregator.java:130)
at org.elasticsearch.search.aggregations.LeafBucketCollector.collect(LeafBucketCollector.java:88)
at org.apache.lucene.search.MultiCollector$MultiLeafCollector.collect(MultiCollector.java:174)
at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:221)
at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:172)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:821)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:384)
... 12 more

[DEBUG][action.search ] [] [][0], node[9wlPuDpBSB6oGedoRiI7-Q], [P], v[4], s[STARTED], a[id=jsJyGbCqQW677Mcg3OK1fg]: Failed to execute [org.elasticsearch.action.search.SearchRequest@1eb229c6] lastShard [true]
RemoteTransportException[[][x.x.x.x:9300][indices:data/read/search[phase/query]]]; nested: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: OutOfMemoryError[Java heap space];
Caused by: QueryPhaseExecutionException[Query Failed [Failed to execute main query]]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:409)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:113)
at org.elasticsearch.search.SearchService.loadOrExecuteQueryPhase(SearchService.java:366)
at org.elasticsearch.search.SearchService.executeQueryPhase(SearchService.java:378)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:368)
at org.elasticsearch.search.action.SearchServiceTransportAction$SearchQueryTransportHandler.messageReceived(SearchServiceTransportAction.java:365)
at org.elasticsearch.transport.TransportRequestHandler.messageReceived(TransportRequestHandler.java:33)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space
at org.elasticsearch.cache.recycler.PageCacheRecycler$1.newInstance(PageCacheRecycler.java:102)
at org.elasticsearch.cache.recycler.PageCacheRecycler$1.newInstance(PageCacheRecycler.java:99)
at org.elasticsearch.common.recycler.DequeRecycler.obtain(DequeRecycler.java:53)
at org.elasticsearch.common.recycler.AbstractRecycler.obtain(AbstractRecycler.java:33)
at org.elasticsearch.common.recycler.DequeRecycler.obtain(DequeRecycler.java:28)
at org.elasticsearch.common.recycler.FilterRecycler.obtain(FilterRecycler.java:39)
at org.elasticsearch.common.recycler.Recyclers$3.obtain(Recyclers.java:119)
at org.elasticsearch.common.recycler.FilterRecycler.obtain(FilterRecycler.java:39)
at org.elasticsearch.cache.recycler.PageCacheRecycler.bytePage(PageCacheRecycler.java:150)
at org.elasticsearch.common.util.AbstractBigArray.newBytePage(AbstractBigArray.java:108)
at org.elasticsearch.common.util.BigByteArray.<init>(BigByteArray.java:45)
at org.elasticsearch.common.util.BigArrays.newByteArray(BigArrays.java:451)
at org.elasticsearch.common.util.BigArrays.resize(BigArrays.java:475)
at org.elasticsearch.common.util.BigArrays.grow(BigArrays.java:489)
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.ensureCapacity(HyperLogLogPlusPlus.java:197)
at org.elasticsearch.search.aggregations.metrics.cardinality.HyperLogLogPlusPlus.collect(HyperLogLogPlusPlus.java:230)
at org.elasticsearch.search.aggregations.metrics.cardinality.CardinalityAggregator$DirectCollector.collect(CardinalityAggregator.java:203)
at org.elasticsearch.search.aggregations.LeafBucketCollector$3.collect(LeafBucketCollector.java:73)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectBucket(BucketsAggregator.java:72)
at org.elasticsearch.search.aggregations.bucket.range.RangeAggregator$1.collect(RangeAggregator.java:184)
at org.elasticsearch.search.aggregations.bucket.range.RangeAggregator$1.collect(RangeAggregator.java:138)
at org.elasticsearch.search.aggregations.bucket.BucketsAggregator.collectExistingBucket(BucketsAggregator.java:80)
at org.elasticsearch.search.aggregations.bucket.terms.GlobalOrdinalsStringTermsAggregator$2.collect(GlobalOrdinalsStringTermsAggregator.java:130)
at org.elasticsearch.search.aggregations.LeafBucketCollector.collect(LeafBucketCollector.java:88)
at org.apache.lucene.search.MultiCollector$MultiLeafCollector.collect(MultiCollector.java:174)
at org.apache.lucene.search.Weight$DefaultBulkScorer.scoreAll(Weight.java:221)
at org.apache.lucene.search.Weight$DefaultBulkScorer.score(Weight.java:172)
at org.apache.lucene.search.BulkScorer.score(BulkScorer.java:39)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:821)
at org.apache.lucene.search.IndexSearcher.search(IndexSearcher.java:535)
at org.elasticsearch.search.query.QueryPhase.execute(QueryPhase.java:384)

[WARN ][index.engine ] [] [][0] failed engine [refresh failed]
java.lang.OutOfMemoryError: Java heap space

[DEBUG][action.bulk ] [] failed to execute [BulkShardRequest to [] containing [1] requests] on [[][4]]
[][[][4]] EngineClosedException[CurrentState[CLOSED] Closed]
at org.elasticsearch.index.engine.Engine.ensureOpen(Engine.java:329)
at org.elasticsearch.index.engine.InternalEngine.get(InternalEngine.java:317)
at org.elasticsearch.index.shard.IndexShard.get(IndexShard.java:652)
at org.elasticsearch.index.get.ShardGetService.innerGet(ShardGetService.java:173)
at org.elasticsearch.index.get.ShardGetService.get(ShardGetService.java:86)
at org.elasticsearch.action.update.UpdateHelper.prepare(UpdateHelper.java:77)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardUpdateOperation(TransportShardBulkAction.java:383)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:191)
at org.elasticsearch.action.bulk.TransportShardBulkAction.shardOperationOnPrimary(TransportShardBulkAction.java:68)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryPhase.doRun(TransportReplicationAction.java:639)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:279)
at org.elasticsearch.action.support.replication.TransportReplicationAction$PrimaryOperationTransportHandler.messageReceived(TransportReplicationAction.java:271)
at org.elasticsearch.transport.RequestHandlerRegistry.processMessageReceived(RequestHandlerRegistry.java:75)
at org.elasticsearch.transport.TransportService$4.doRun(TransportService.java:376)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[WARN ][index.shard ] [] [][4] Failed to perform scheduled engine refresh
[][[][4]] RefreshFailedEngineException[Refresh failed]; nested: OutOfMemoryError[Java heap space];
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:680)
at org.elasticsearch.index.shard.IndexShard.refresh(IndexShard.java:661)
at org.elasticsearch.index.shard.IndexShard$EngineRefresher$1.run(IndexShard.java:1349)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

[WARN ][indices.cluster ] [] [[][2]] marking and sending shard failed due to [engine failure, reason [refresh failed]]
java.lang.OutOfMemoryError: Java heap space

[WARN ][cluster.action.shard ] [] [][2] received shard failed for target shard [[][2], node[9wlPuDpBSB6oGedoRiI7-Q], [P], v[4], s[STARTED], a[id=1SRVPoVZTlaB3Qdt2lyCSQ]], indexUUID [qmKpqmQ2RYKaIzlqxXya6w], message [engine failure, reason [refresh failed]], failure [OutOfMemoryError[Java heap space]]
java.lang.OutOfMemoryError: Java heap space

[WARN ][indices.memory ] [] failed to set shard [][0] index buffer to [62.6mb]
org.apache.lucene.store.AlreadyClosedException: this IndexWriter is closed
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:720)
at org.apache.lucene.index.IndexWriter.ensureOpen(IndexWriter.java:734)
at org.apache.lucene.index.IndexWriter.ramBytesUsed(IndexWriter.java:475)
at org.elasticsearch.index.engine.InternalEngine.indexWriterRAMBytesUsed(InternalEngine.java:948)
at org.elasticsearch.index.shard.IndexShard.updateBufferSize(IndexShard.java:1149)
at org.elasticsearch.indices.memory.IndexingMemoryController.updateShardBuffers(IndexingMemoryController.java:232)
at org.elasticsearch.indices.memory.IndexingMemoryController$ShardsIndicesStatusChecker.run(IndexingMemoryController.java:286)
at org.elasticsearch.threadpool.ThreadPool$LoggingRunnable.run(ThreadPool.java:640)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.OutOfMemoryError: Java heap space

[WARN ][indices.cluster ] [] [[][4]] marking and sending shard failed due to [failed to create shard]
[][[][4]] ElasticsearchException[failed to create shard]; nested: LockObtainFailedException[Can't lock shard [][4], timed out after 5000ms];
at org.elasticsearch.index.IndexService.createShard(IndexService.java:389)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:601)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:501)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:166)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [lowercasekey-poi-geoshape-1466686817402][4], timed out after 5000ms
at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:609)
at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:537)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:306)
... 10 more

[WARN ][cluster.action.shard ] [] [][4] received shard failed for target shard [[][4], node[9wlPuDpBSB6oGedoRiI7-Q], [P], v[5], s[INITIALIZING], a[id=oOVK7CO-TsSLK7aWor8q5w], unassigned_info[[reason=ALLOCATION_FAILED], at[2016-07-25T09:26:03.243Z], details[engine failure, reason [refresh failed], failure OutOfMemoryError[Java heap space]]]], indexUUID [qmKpqmQ2RYKaIzlqxXya6w], message [failed to create shard], failure [ElasticsearchException[failed to create shard]; nested: LockObtainFailedException[Can't lock shard [][4], timed out after 5000ms]; ]
[][[][4]] ElasticsearchException[failed to create shard]; nested: LockObtainFailedException[Can't lock shard [][4], timed out after 5000ms];
at org.elasticsearch.index.IndexService.createShard(IndexService.java:389)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyInitializingShard(IndicesClusterStateService.java:601)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.applyNewOrUpdatedShards(IndicesClusterStateService.java:501)
at org.elasticsearch.indices.cluster.IndicesClusterStateService.clusterChanged(IndicesClusterStateService.java:166)
at org.elasticsearch.cluster.service.InternalClusterService.runTasksForExecutor(InternalClusterService.java:610)
at org.elasticsearch.cluster.service.InternalClusterService$UpdateTask.run(InternalClusterService.java:772)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.lucene.store.LockObtainFailedException: Can't lock shard [][4], timed out after 5000ms
at org.elasticsearch.env.NodeEnvironment$InternalShardLock.acquire(NodeEnvironment.java:609)
at org.elasticsearch.env.NodeEnvironment.shardLock(NodeEnvironment.java:537)
at org.elasticsearch.index.IndexService.createShard(IndexService.java:306)
... 10 more

[WARN ][netty.channel.DefaultChannelPipeline] An exception was thrown by an exception handler.
java.util.concurrent.RejectedExecutionException: Worker has already been shutdown
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.registerTask(AbstractNioSelector.java:120)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:72)
at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.executeInIoThread(AbstractNioWorker.java:56)
at org.jboss.netty.channel.socket.nio.NioWorker.executeInIoThread(NioWorker.java:36)
at org.jboss.netty.channel.socket.nio.AbstractNioChannelSink.execute(AbstractNioChannelSink.java:34)
at org.jboss.netty.channel.DefaultChannelPipeline.execute(DefaultChannelPipeline.java:636)
at org.jboss.netty.channel.Channels.fireExceptionCaughtLater(Channels.java:496)
at org.jboss.netty.channel.AbstractChannelSink.exceptionCaught(AbstractChannelSink.java:46)
at org.jboss.netty.channel.DefaultChannelPipeline.notifyHandlerException(DefaultChannelPipeline.java:658)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:781)
at org.jboss.netty.channel.Channels.write(Channels.java:725)
at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.doEncode(OneToOneEncoder.java:71)
at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:59)
at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendDownstream(DefaultChannelPipeline.java:784)
at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.handleDownstream(HttpPipeliningHandler.java:87)
at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:591)
at org.jboss.netty.channel.DefaultChannelPipeline.sendDownstream(DefaultChannelPipeline.java:582)
at org.elasticsearch.http.netty.NettyHttpChannel.sendResponse(NettyHttpChannel.java:146)
at org.elasticsearch.rest.action.support.RestResponseListener.processResponse(RestResponseListener.java:43)
at org.elasticsearch.rest.action.support.RestActionListener.onResponse(RestActionListener.java:49)
at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:89)
at org.elasticsearch.action.support.TransportAction$1.onResponse(TransportAction.java:85)
at org.elasticsearch.action.bulk.TransportBulkAction$2.finishHim(TransportBulkAction.java:356)
at org.elasticsearch.action.bulk.TransportBulkAction$2.onFailure(TransportBulkAction.java:351)
at org.elasticsearch.action.support.TransportAction$1.onFailure(TransportAction.java:95)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.finishAsFailed(TransportReplicationAction.java:567)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onClusterServiceClose(TransportReplicationAction.java:552)
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.onClose(ClusterStateObserver.java:222)
at org.elasticsearch.cluster.service.InternalClusterService.add(InternalClusterService.java:282)
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:153)
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:98)
at org.elasticsearch.cluster.ClusterStateObserver.waitForNextChange(ClusterStateObserver.java:90)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retry(TransportReplicationAction.java:544)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.retryBecauseUnavailable(TransportReplicationAction.java:596)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase.doRun(TransportReplicationAction.java:465)
at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:37)
at org.elasticsearch.action.support.replication.TransportReplicationAction$ReroutePhase$2.onNewClusterState(TransportReplicationAction.java:547)
at org.elasticsearch.cluster.ClusterStateObserver$ObserverClusterStateListener.postAdded(ClusterStateObserver.java:206)
at org.elasticsearch.cluster.service.InternalClusterService$1.run(InternalClusterService.java:296)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.runAndClean(PrioritizedEsThreadPoolExecutor.java:231)
at org.elasticsearch.common.util.concurrent.PrioritizedEsThreadPoolExecutor$TieBreakingPrioritizedRunnable.run(PrioritizedEsThreadPoolExecutor.java:194)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

[WARN ][rest.suppressed ] //_search Params: {index=}
Failed to execute phase [query], all shards failed; shardFailures {[9wlPuDpBSB6oGedoRiI7-Q][][0]: TransportException[transport stopped, action: indices:data/read/search[phase/query]]}{[9wlPuDpBSB6oGedoRiI7-Q][][1]: TransportException[transport stopped, action: indices:data/read/search[phase/query]]}{[9wlPuDpBSB6oGedoRiI7-Q][][2]: TransportException[transport stopped, action: indices:data/read/search[phase/query]]}{[9wlPuDpBSB6oGedoRiI7-Q][][3]: TransportException[transport stopped, action: indices:data/read/search[phase/query]]}{[9wlPuDpBSB6oGedoRiI7-Q][][4]: TransportException[transport stopped, action: indices:data/read/search[phase/query]]}
at org.elasticsearch.action.search.AbstractSearchAsyncAction.onFirstPhaseResult(AbstractSearchAsyncAction.java:206)
at org.elasticsearch.action.search.AbstractSearchAsyncAction$1.onFailure(AbstractSearchAsyncAction.java:152)
at org.elasticsearch.action.ActionListenerResponseHandler.handleException(ActionListenerResponseHandler.java:46)
at org.elasticsearch.transport.TransportService$2.run(TransportService.java:206)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: TransportException[transport stopped, action: indices:data/read/search[phase/query]]
... 4 more

[INFO ][node ] [] stopped
[INFO ][node ] [] closing ...
[INFO ][node ] [] version[2.3.2], pid[26270], build[b9e4a6a/2016-04-21T16:03:47Z]
[INFO ][node ] [] initializing ...

[INFO ][node ] [] initialised
[INFO ][node ] [] starting ...
[INFO ][transport ] [] publish_address {10.10.20.10:9300}, bound_addresses {[::]:9300}
[INFO ][discovery ] [] dev-elastic/RSMtDui2TvS4pzY6vvdJng
[INFO ][node ] [] closed
[INFO ][cluster.service ] [] new_master {}{RSMtDui2TvS4pzY6vvdJng}{x.x.x.x}{x.x.x.x:9300}{master=true}, reason: zen-disco-join(elected_as_master, [0] joins received)
[INFO ][http ] [] publish_address {x.x.x.x:9200}, bound_addresses {[::]:9200}
[INFO ][node ] [] started
[INFO ][gateway ] [] recovered [0] indices into cluster_state

@bleskes
Contributor

bleskes commented Jul 25, 2016

Ouch. I'm not at all sure that the LockObtainFailedException is relevant to the data loss. I do see how it can be an artifact of an OOM, as things may not be cleaned up correctly after an OOM, which is why we recommend restarting the node.

The problem is caused by the fact that once the node has been restarted, it doesn't find any cluster state in its data folder and thinks all indices are gone:

[INFO ][gateway ] [] recovered [0] indices into cluster_state

While that is troubling on its own, it still doesn't explain the data loss, as normally ES will re-import those indices from their data folders, with the following message in the logs:

dangling index, exists on local file system, but not in cluster metadata

the index directories are still present, but contain only _state, no data.

It's also a bit odd that you do have _state folders and files. When ES deletes indices, it deletes the whole folder as well.

Can you provide a list of the current directory structure, with the files?

If there is anything else that you think is relevant, given the above explanation - please share.
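
If it helps, something like the following should be enough to capture the structure and sizes (treat /path/to/data as a placeholder for each entry in your path.data):

# list everything (directories and files) under each configured data path
find /path/to/data | sort
# and the size of each top-level entry
du -sh /path/to/data/*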

@DaveChapman
Author

Hi, an update to this: the data seems to still be on disk (I had forgotten that we were writing to a number of directories).

Anyway our configuration is as follows:

path.data: /data0/data,/data1/data,/data2/data

And our data directories are sized as:

924K /data0/data
72G /data1/data
90G /data2/data

However, our searches are not returning any data from before the time we restarted Elasticsearch this morning (new items are visible, however). The cluster state is yellow.
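
(For reference, this was checked with the standard APIs, along the lines of the following; host and port are just the defaults here:

curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cat/indices?v'
)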

@DaveChapman
Author

Hi, another update. We have restarted the node again and the indices/data have reappeared; however, data loaded since the previous restart is missing. Any ideas as to what may have happened here?

@bleskes
Contributor

bleskes commented Jul 26, 2016

I suspect something is off in your cluster configuration, causing the paths within the data folders to change: either path-locking issues, or maybe something stupid like the cluster name changing with a restart. That's why I asked for a dump of your current directory structure. If you'd rather keep it private, you can mail it to my first name at elastic.co.
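
To illustrate (the layout below is sketched from memory for 2.x, with placeholder names): each entry in path.data is used roughly as

/path/to/data/<cluster_name>/nodes/<node_ordinal>/indices/<index_name>/<shard>/...

so a changed cluster name, or a different nodes/<ordinal> being locked after a restart, means the node looks at a different directory tree and simply doesn't see the old shards.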

@DaveChapman
Author

Hi, I will need to check whether I can send you the directory structure; however, this may not be helpful, as everything is currently working as expected. We are trying to replicate the issue at the moment.

@DaveChapman
Author

So we have managed to trigger the OOM again, though without the LockObtainFailedException. This time the node was able to recover on its own.

@bleskes
Contributor

bleskes commented Jul 26, 2016

I strongly suspect it had to do with the node restart, not the OOM. It's also very hard to predict or reproduce how an OOM will affect the node.

@DaveChapman
Author

We have replicated the OOM with the LockObtainFailedException and see the following in the logs after a restart:
[WARN ][rest.suppressed ] /_bulk Params: {}
ClusterBlockException[blocked by: [SERVICE_UNAVAILABLE/1/state not recovered / initialized];[SERVICE_UNAVAILABLE/2/no master];]
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedException(ClusterBlocks.java:154)
at org.elasticsearch.cluster.block.ClusterBlocks.globalBlockedRaiseException(ClusterBlocks.java:144)
at org.elasticsearch.action.bulk.TransportBulkAction.executeBulk(TransportBulkAction.java:212)
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:164)
at org.elasticsearch.action.bulk.TransportBulkAction.doExecute(TransportBulkAction.java:71)
at org.elasticsearch.action.support.TransportAction.doExecute(TransportAction.java:149)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:137)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:85)
at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
at org.elasticsearch.client.FilterClient.doExecute(FilterClient.java:52)
at org.elasticsearch.rest.BaseRestHandler$HeadersAndContextCopyClient.doExecute(BaseRestHandler.java:83)
at org.elasticsearch.client.support.AbstractClient.execute(AbstractClient.java:359)
at org.elasticsearch.client.support.AbstractClient.bulk(AbstractClient.java:436)
at org.elasticsearch.rest.action.bulk.RestBulkAction.handleRequest(RestBulkAction.java:90)
at org.elasticsearch.rest.BaseRestHandler.handleRequest(BaseRestHandler.java:54)
at org.elasticsearch.rest.RestController.executeHandler(RestController.java:205)
at org.elasticsearch.rest.RestController.dispatchRequest(RestController.java:166)
at org.elasticsearch.http.HttpServer.internalDispatchRequest(HttpServer.java:128)
at org.elasticsearch.http.HttpServer$Dispatcher.dispatchRequest(HttpServer.java:86)
at org.elasticsearch.http.netty.NettyHttpServerTransport.dispatchRequest(NettyHttpServerTransport.java:449)
at org.elasticsearch.http.netty.HttpRequestHandler.messageReceived(HttpRequestHandler.java:61)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.http.netty.pipelining.HttpPipeliningHandler.messageReceived(HttpPipeliningHandler.java:60)
at org.jboss.netty.channel.SimpleChannelHandler.handleUpstream(SimpleChannelHandler.java:88)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:145)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.handler.codec.http.HttpContentDecoder.messageReceived(HttpContentDecoder.java:108)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)
at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:459)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.callDecode(ReplayingDecoder.java:536)
at org.jboss.netty.handler.codec.replay.ReplayingDecoder.messageReceived(ReplayingDecoder.java:435)
at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
at org.elasticsearch.common.netty.OpenChannelsHandler.handleUpstream(OpenChannelsHandler.java:75)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:559)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:108)
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337)
at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:89)
at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

After the restart the indices were again missing; they came back after a second restart.

@bleskes
Contributor

bleskes commented Jul 26, 2016

@DaveChapman thx, sadly it doesn't add much info. I will need either a complete directory listing or debug logs from before and after the issue. Can you send me those?

@bleskes
Contributor

bleskes commented Jul 27, 2016

@DaveChapman thx for your e-mail. The problem lies in the fact that, after the node restart, the node picks up a different data folder lock ordinal than before, causing it not to see the previous data. It is set up this way to support multiple nodes on the same server: when a node starts up, it will look for the first ordinal it can lock (the others are in use by other nodes) and use that. Looking at the logs you sent me, I see one node shutting down and then a new one starting up:

[2016-07-25 10:39:57,637][INFO ][node                     ] [dev3_es] stopped
[2016-07-25 10:39:57,637][INFO ][node                     ] [dev3_es] closing ...
[2016-07-25 10:39:57,637][INFO ][node                     ] [dev3_es] stopped
[2016-07-25 10:39:57,637][INFO ][node                     ] [dev3_es] closing ...
[2016-07-25 10:40:00,792][INFO ][node                     ] [dev3_es] version[2.3.2], pid[26270], build[b9e4a6a/2016-04-21T16:03:47Z]
[2016-07-25 10:40:00,793][INFO ][node                     ] [dev3_es] initializing ...

The weird thing is that we're missing a log message saying closed from the first node. See here.

Also, the new node starts up but somehow disappears, and a new one is started again:

[2016-07-25 10:40:06,370][INFO ][node                     ] [dev3_es] initialized
[2016-07-25 10:40:06,370][INFO ][node                     ] [dev3_es] starting ...

Any idea what's going on?

Also, you can consider setting node.max_local_storage_nodes: 1, see here.
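
A minimal sketch of that setting in elasticsearch.yml, assuming you only ever intend to run a single node per server:

# elasticsearch.yml
# refuse to open more than one data-folder lock ordinal on this host,
# so a restarted node can never end up on a different nodes/<ordinal> path
node.max_local_storage_nodes: 1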

I'm closing this now as it seems to be environment related. If not, we can reopen. Just a heads up: if/when you re-open, it would be nice to get debug logging of the node startup, with the env logger additionally set to TRACE; that will tell us more about the paths it ended up using and why.
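
For reference, a sketch of how the env logger could be bumped to TRACE in the stock 2.x config/logging.yml (the exact file layout may differ in your install):

logger:
  # trace which data paths and lock ordinals the node environment picks at startup
  env: TRACE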

@bleskes bleskes closed this as completed Jul 27, 2016