
The same search statement multiple times gets different total results #113256


Closed
killersteps opened this issue Sep 20, 2024 · 21 comments
Labels
:Search Foundations/Search Catch all for Search Foundations Team:Search Foundations Meta label for the Search Foundations team in Elasticsearch

Comments

@killersteps

One sentence description

When I run the search statement shown below repeatedly, I get a different total hit count each time.

For example:

"hits": {
"total": {
"value": 6000000(This value is unstable and may change occasionally, but in fact the data in this index will never change from a business perspective),
"relation": "eq"
},
...
}

Basic Information:

Elasticsearch version: 7.17.1
CPU architecture: aarch64
JDK version: 1.8

Cluster overview: 9 nodes, comprising 6 data nodes and 3 master nodes (server and JVM resources are more than sufficient)
Index (my_index) overview: 9 primary shards, 1 replica per primary

Search statement

http://:9200/my_index/_search?request_cache=false&explain=true&track_total_hits=true&q=*

In the search statement above, I use three key parameters, each with a different purpose:
track_total_hits=true: return an accurate total hit count
request_cache=false: disable the request cache, to rule out any caching effects
explain=true: include explain output, to help analyse the query

In real applications the query has additional conditions; I am using q=* here only to demonstrate the unstable total document count. And again, the data in these problematic indices never changes.
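(The instability can be demonstrated by simply repeating the request and comparing the totals. A rough sketch, with a placeholder host and index name; jq is used only to extract the value, and size=0 just keeps the response small:)

for i in 1 2 3 4 5; do
  curl -s 'http://localhost:9200/my_index/_search?request_cache=false&track_total_hits=true&q=*&size=0' | jq '.hits.total.value'
done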

In addition, when checking basic status through /_cluster/health and /_cat/shards, the cluster is GREEN and all shards are STARTED.
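(Those checks, with a placeholder host:)

curl -s 'http://localhost:9200/_cluster/health?pretty'
curl -s 'http://localhost:9200/_cat/shards/my_index?v&h=index,shard,prirep,state,node'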

It is worth mentioning that I have carefully compared this issue against the guidance in this article: https://www.elastic.co/guide/en/elasticsearch/reference/7.17/consistent-scoring.html#_scores_are_not_reproducible

When I search using preference=_shards:0,1,2,3,4,5,6,7,8, the problem still recurs randomly

My current conclusion is that some segments of the shards allocated on the datanode03 node have committed=false.

When I use preference=_shards:0,1,2,3,4,5 to exclude shards 6, 7 and 8, which are allocated on datanode03 (allocation obtained from /_cat/shards/my_index), the total hit count becomes stable.
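(The shard-restricted request, with a placeholder host; the shard ids are the ones from this cluster:)

curl -s 'http://localhost:9200/my_index/_search?track_total_hits=true&q=*&size=0&preference=_shards:0,1,2,3,4,5'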

The most puzzling part is that when I look at all the parameters such as os, jvm and mem through GET /_nodes/datanode03/stats, I cannot find anything that looks abnormal; everything appears normal. The only lead I have is that some segments are not committed, and the shards containing those segments all happen to be allocated on the datanode03 node. That seems like a valuable piece of information.

I would like help with, or a discussion of, three points:

  1. Is /_flush necessary? I initially judged that it would resolve the problem, but it is hard to reproduce in other environments why some segments stay uncommitted. In other words, if we do not know why this happens, we cannot prevent it from happening again in the future (a sketch of the call is shown after this list)
  2. At the server and node level we have found no abnormal information or logs on the datanode03 server. How can I determine whether this node really has a problem? In every problematic index, the affected segments sit in shards allocated to this node
  3. Can committed=false and searchable=false on a segment really affect even the simplest query, _search?request_cache=false&explain=true&track_total_hits=true&q=*?
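(A minimal sketch of the manual flush/refresh being discussed in question 1; host and index name are placeholders:)

curl -s -XPOST 'http://localhost:9200/my_index/_flush'
curl -s -XPOST 'http://localhost:9200/my_index/_refresh'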
@elasticsearchmachine elasticsearchmachine added the needs:triage Requires assignment of a team area label label Sep 20, 2024
@gwbrown gwbrown added :Search/Search Search-related issues that do not fall into other categories and removed needs:triage Requires assignment of a team area label labels Sep 20, 2024
@elasticsearchmachine elasticsearchmachine added the Team:Search Meta label for search team label Sep 20, 2024
@elasticsearchmachine
Collaborator

Pinging @elastic/es-search (Team:Search)

@gwbrown
Contributor

gwbrown commented Sep 20, 2024

Thanks for the report; I've tagged this for the appropriate team who will likely know more, but I will note that we've released several bugfix releases in 7.17.x series that contain significant fixes and my first recommended step would be to attempt reproducing on the latest 7.17.x version. (I know that may be easier said than done, however.)

@killersteps
Author

killersteps commented Sep 21, 2024

Thanks for the report; I've tagged this for the appropriate team who will likely know more, but I will note that we've released several bugfix releases in 7.17.x series that contain significant fixes and my first recommended step would be to attempt reproducing on the latest 7.17.x version. (I know that may be easier said than done, however.)

I think there is one thing I forgot to mention: before opening this issue I spent a long time going through a lot of material, including the release notes (https://www.elastic.co/guide/en/elasticsearch/reference/7.17/release-notes-7.17.1.html), and found nothing that really explains this. My main uncertainty is whether the _flush operation can actually resolve the problem. As you said, it is hard to upgrade the production environment on short notice, or to reproduce such an elusive problem in another environment.

I would also like to understand why a segment in an index whose data never changes can remain uncommitted for so long. Finding the cause, and knowing what other means can be used to troubleshoot it, is particularly important, because that is what will let us avoid hitting this problem again in the future (assuming _flush resolves the current occurrence). Thanks~

One more thing to add: I cannot reproduce this problem in an environment with lower server specifications but the same deployment architecture; there, the segments are committed automatically after a period of time. To be precise, I am still not sure whether the unstable total hit count is related to the uncommitted segments at all. Perhaps you can help me confirm that conclusion. Thanks again.

@killersteps
Author

Oh, there is another new finding. When I execute /index/_refresh, the request hangs for a long time without a response. I cannot restart the cluster at will in the production environment, and when I PUT the log configuration below through _cluster/settings, only the logger._root setting seems to take effect; the other loggers do not.

{
    "transient": {
        "logger._root": "info",
        "logger.org.elasticsearch.index": "debug",
        "logger.org.elasticsearch.indices": "debug",
        "logger.org.elasticsearch.cluster": "debug",
        "logger.org.elasticsearch.rest": "debug",
        "logger.org.elasticsearch.http": "debug"
    }
}

I cannot get the expected response or see the corresponding log information after executing /_flush or /_refresh, but my cluster works very well from beginning to end and is always in the GREEN state. All shards are STARTED
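(For reference, which settings actually ended up applied can be checked as follows; host is a placeholder:)

curl -s 'http://localhost:9200/_cluster/settings?flat_settings=true&pretty'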

@killersteps
Author

killersteps commented Sep 27, 2024

After further investigation, I found that the refresh thread pool queue on the problematic data node holds a very large number of tasks, with no downward trend.
I checked the thread stacks with jstack and found the refresh threads stuck on the lock taken in maybeRefreshBlocking.
Source code path: src/main/java/org/elasticsearch/index/engine/InternalEngine.java:370
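(For reference, the refresh thread pool backlog per node can be watched via the _cat API; host is a placeholder:)

curl -s 'http://localhost:9200/_cat/thread_pool/refresh?v&h=node_name,name,active,queue,rejected,completed'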

@killersteps
Author

killersteps commented Oct 8, 2024

@gwbrown Hi, is the relevant team still following up on this issue? Do my new findings help narrow down the cause of the problem?

@benwtrent benwtrent added the :Distributed Indexing/Engine Anything around managing Lucene and the Translog in an open shard. label Oct 8, 2024
@elasticsearchmachine elasticsearchmachine added the Team:Distributed (Obsolete) Meta label for distributed team (obsolete). Replaced by Distributed Indexing/Coordination. label Oct 8, 2024
@elasticsearchmachine
Collaborator

Pinging @elastic/es-distributed (Team:Distributed)

@kingherc
Contributor

kingherc commented Oct 8, 2024

Searches search the refreshed data. So if refreshes do not go through for some reason, that's a reason you may be getting inconsistent results. Assuming you finally refresh the index (e.g., with a manual call to the refresh API), your search results should be stable (for the so-far ingested data).

Feel free to paste the hot threads of the affected node to check where threads are and what they may be doing. This forum topic may also be relevant for you, which can be verified by the hot threads.
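(For example, the hot threads of a single node can be captured like this; the host is a placeholder and datanode03 is just the node name from the report. ignore_idle_threads=false includes parked/waiting threads as well:)

curl -s 'http://localhost:9200/_nodes/datanode03/hot_threads?threads=9999&ignore_idle_threads=false'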

@killersteps
Author

killersteps commented Oct 9, 2024

Searches search the refreshed data. So if refreshes do not go through for some reason, that's a reason you may be getting inconsistent results. Assuming you finally refresh the index (e.g., with a manual call to the refresh API), your search results should be stable (for the so-far ingested data).

Feel free to paste the hot threads of the affected node to check where threads are and what they may be doing. This forum topic may also be relevant for you, which can be verified by the hot threads.

@kingherc When the problem was first discovered, I used GET /_nodes/hot_threads to check the hot threads of the data node, but I did not find anything special. I also collected thread dumps of the node several times with jstack and saw that the threads of the refresh thread pool were waiting on monitors such as <0x0000fff767a424c0>, but I could not find the owning threads anywhere in the dump file. They had disappeared, which means that these threads did not release the locks they obtained at the end.
Here is a screenshot:
(screenshot attached)

@kingherc
Contributor

@killersteps the stack trace seems fairly similar to the forum topic I mentioned.

They had disappeared, which means that these threads did not release the locks they obtained at the end.

We do a lot of async processing, so I am not sure we can definitely say they "lost" the lock and won't ultimately release it.

I think it'd be better if you paste/attach here the whole output of hot threads, and maybe a couple of the jstack outputs, while the problem is occurring.

@killersteps
Author

killersteps commented Oct 11, 2024

@killersteps the stack trace seems fairly similar to the forum topic I mentioned.

They had disappeared, which means that these threads did not release the locks they obtained at the end.

We do a lot of async processing, so I am not sure we can definitely say they "lost" the lock and won't ultimately release it.

I think it'd be better if you paste/attach here the whole output of hot threads, and maybe a couple of the jstack outputs, while the problem is occurring.

@kingherc First of all, thank you for your reply and guidance. I read the topic you mentioned carefully and compared the stack traces in it. Unfortunately, I did not find any call to the DeleteFile0 method in the refresh thread pool stacks of my jstack output, so although the two problems have similar stack traces, I concluded they are different problems. In addition, my cluster is deployed on Linux servers rather than Windows. I have collected some more jstack output for the refresh threads on the data node as a supplement; could you take a look? I am not very familiar with the Elasticsearch source code. Thank you again!
(jstack screenshots attached)

@killersteps
Author

@kingherc When I looked at the stack trace, it seemed like it had something to do with replica synchronization. Oh, this is indeed a tricky problem.

@killersteps
Author

killersteps commented Oct 11, 2024

This is the state of the refresh thread pool on this node. You can see that many tasks are queued, while the other nodes do not show this. The consequence is that the committed and searchable flags of many segments are false.
(screenshot attached)

@kingherc
Contributor

Indeed, the issue is that refreshes are somehow the bottleneck. However, there does not seem to be anything interesting from the above-mentioned hot threads. We still need to understand what they are waiting on, and without a full thread stack it may be hard to understand just from the screenshots. I'd suggest to try this to get all thread stacks, or using jstack again.
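(For example, a full dump including lock-ownership information could be captured with jstack -l; the process id is a placeholder:)

jstack -l <elasticsearch-pid> > /tmp/es-refresh-threads.txt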

@killersteps
Author

killersteps commented Oct 17, 2024

Indeed, the issue is that refreshes are somehow the bottleneck. However, there does not seem to be anything interesting from the above-mentioned hot threads. We still need to understand what they are waiting on, and without a full thread stack it may be hard to understand just from the screenshots. I'd suggest to try this to get all thread stacks, or using jstack again.

@kingherc Since this problem has dragged on for a long time, we recently decided to restart the datanode07 node to clear the backed-up refresh thread pool, and to run _flush and _refresh manually. The pictures below show some of the hot threads information collected following your suggestion. I am sorry that I have not been able to locate the root cause myself; I hope this helps your team guide us towards it.
(hot threads screenshots attached)
While analyzing the hot threads we found something interesting: only 2 of the 10 active threads in the refresh thread pool are executing the doMaybeRefresh method; all the other stacks are stuck at the maybeRefreshBlocking line.

Finally, I would like to ask a question. If the data in an index will not change, should the committed and searchable values obtained through the _cat/segments/index?v API always be true? According to what we have established in this issue, several segments in the shards allocated on the datanode07 node have committed and searchable set to false.
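(For reference, the flags in question come from the _cat/segments API; host and index name are placeholders:)

curl -s 'http://localhost:9200/_cat/segments/my_index?v&h=index,shard,prirep,segment,generation,docs.count,committed,searchable'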

@kingherc
Contributor

Seeing this last stacktrace seems like it's stuck in the BitsetFilterCache while inside a refresh. I think this is more relevant for the Search team to answer (due to the BitsetFilterCache), so for the moment I'll remove the Distributed team from the ticket, to make it clear that Search team should comment on this.

If the data in an index will not change, should the committed and searchable values obtained through the _cat/segments/index?v API always be true?

Note that an index needs to be refreshed in order to get an up-to-date count of documents. And if the refreshes did not go through, that's potentially why you might have observed an unexpected number of documents.

@kingherc kingherc removed :Distributed Indexing/Engine Anything around managing Lucene and the Translog in an open shard. Team:Distributed (Obsolete) Meta label for distributed team (obsolete). Replaced by Distributed Indexing/Coordination. labels Oct 18, 2024
@benwtrent benwtrent added :Search Foundations/Search Catch all for Search Foundations and removed :Search/Search Search-related issues that do not fall into other categories labels Oct 18, 2024
@elasticsearchmachine elasticsearchmachine added Team:Search Foundations Meta label for the Search Foundations team in Elasticsearch and removed Team:Search Meta label for search team labels Oct 18, 2024
@elasticsearchmachine
Collaborator

Pinging @elastic/es-search-foundations (Team:Search Foundations)

@killersteps
Author

killersteps commented Oct 21, 2024

Seeing this last stacktrace seems like it's stuck in the BitsetFilterCache while inside a refresh. I think this is more relevant for the Search team to answer (due to the BitsetFilterCache), so for the moment I'll remove the Distributed team from the ticket, to make it clear that Search team should comment on this.

If the data in an index will not change, should the committed and searchable values obtained through the _cat/segments/index?v API always be true?

Note that an index needs to be refreshed in order to get an up-to-date count of documents. And if the refreshes did not go through, that's potentially why you might have observed an unexpected number of documents.

@kingherc @benwtrent We restarted the datanode07 node to work around the problem. The query results are now stable, the refresh thread pool is working normally, and no tasks are accumulating in the queue. -, -
I am worried that this problem will happen again in the production environment~

@killersteps
Author

killersteps commented Oct 31, 2024

@kingherc @benwtrent @gwbrown @DaveCTurner Unfortunately, this problem has reappeared in another Elasticsearch cluster in our production environment. We checked the relevant thread dump information and obtained the following result:

"elasticsearch[node-2][refresh][T#2]" #86 daemon prio=5 os_prio=0 tid=0x0000fffc78007000 nid=0xb047e waiting on condition [0x0000fffc01c3d000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006c2957eb8> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at org.elasticsearch.index.cache.bitset.BitsetFilterCache$BitSetProducerWarmer.lambda$warmReader$2(BitsetFilterCache.java:272)
at org.elasticsearch.index.cache.bitset.BitsetFilterCache$BitSetProducerWarmer$$Lambda$5804/1523681687.awaitTermination(Unknown Source)
at org.elasticsearch.index.IndexWarmer.warm(IndexWarmer.java:68)
at org.elasticsearch.index.IndexService.lambda$createShard$8(IndexService.java:477)
at org.elasticsearch.index.IndexService$$Lambda$5644/1696668972.warm(Unknown Source)
at org.elasticsearch.index.shard.IndexShard.lambda$newEngineConfig$22(IndexShard.java:3356)
at org.elasticsearch.index.shard.IndexShard$$Lambda$5685/2050239501.warm(Unknown Source)
at org.elasticsearch.index.engine.InternalEngine$RefreshWarmerListener.accept(InternalEngine.java:2592)
at org.elasticsearch.index.engine.InternalEngine$RefreshWarmerListener.accept(InternalEngine.java:2577)
at org.elasticsearch.index.engine.InternalEngine$ExternalReaderManager.refreshIfNeeded(InternalEngine.java:375)
at org.elasticsearch.index.engine.InternalEngine$ExternalReaderManager.refreshIfNeeded(InternalEngine.java:350)
at org.apache.lucene.search.ReferenceManager.doMaybeRefresh(ReferenceManager.java:176)
at org.apache.lucene.search.ReferenceManager.maybeRefresh(ReferenceManager.java:225)
at org.elasticsearch.index.engine.InternalEngine.refresh(InternalEngine.java:1891)
at org.elasticsearch.index.engine.InternalEngine.maybeRefresh(InternalEngine.java:1870)
at org.elasticsearch.index.shard.IndexShard.scheduledRefresh(IndexShard.java:3910)
at org.elasticsearch.index.IndexService.maybeRefreshEngine(IndexService.java:917)
at org.elasticsearch.index.IndexService.access$200(IndexService.java:102)
at org.elasticsearch.index.IndexService$AsyncRefreshTask.runInternal(IndexService.java:1043)
at org.elasticsearch.common.util.concurrent.AbstractAsyncTask.run(AbstractAsyncTask.java:133)
at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingRunnable.run(ThreadContext.java:718)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:750)

Locked ownable synchronizers:
- <0x000000062f255400> (a java.util.concurrent.ThreadPoolExecutor$Worker)
- <0x000000064581c6d8> (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

Our team built version 7.17.1 locally, added extra logging, and found that all the deadlocks occurred in the following method: org.elasticsearch.index.cache.bitset.BitsetFilterCache.BitSetProducerWarmer#warmReader. We suspect the lock-up is related to the presence of a nested structure in the index mapping; this mechanism appears to be a warm-up step Elasticsearch performs for such an index to speed up searching.
(screenshot attached)
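(For context, and assuming the suspicion about nested mappings above is right, this warm-up path is only exercised by indices whose mapping contains nested fields. A hypothetical minimal example of such an index; the index and field names are made up and the host is a placeholder:)

curl -s -XPUT 'http://localhost:9200/test_nested_index' -H 'Content-Type: application/json' -d '
{
  "mappings": {
    "properties": {
      "comments": {
        "type": "nested",
        "properties": { "text": { "type": "text" } }
      }
    }
  }
}'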

I am not sure whether the deadlock occurs because the CountDownLatch counter never reaches 0, or because of the internalReaderManager.maybeRefreshBlocking() logic in org.elasticsearch.index.engine.InternalEngine.ExternalReaderManager#refreshIfNeeded (I lean towards the latter, because the comments on this method are very interesting and it looks easy to run into problems here, but it is Lucene code).
(source code screenshots attached)

In short, when the problem occurs, a large number of tasks pile up in the refresh thread pool queue and the search results become unstable. Could the expert team give us some ideas for further troubleshooting and analysis?

There are a few details that need to be explained here:

  1. Our cluster does not have bootstrap.memory_lock enabled
  2. Swap is not turned off on our node servers
  3. VIRT usage is high on each node, but memory is otherwise sufficient (in other words, I do not think Elasticsearch's logic here deadlocks simply because memory is tight)
  4. When we deploy our own build of the source code, after executing curl -XPUT 'ip:port/index/_settings' -H 'Content-Type: application/json' -d '{"index.warmer.enabled": false}', the code seen in the abnormal thread dumps, such as org.elasticsearch.index.cache.bitset.BitsetFilterCache.BitSetProducerWarmer#warmReader, is no longer executed (see the verification sketch after this list)
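(A minimal way to verify that the dynamic setting from point 4 actually took effect; host and index name are placeholders, and include_defaults shows the value even when it has not been set explicitly:)

curl -s 'http://localhost:9200/my_index/_settings?include_defaults=true&flat_settings=true' | grep warmer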

@killersteps
Author

killersteps commented Oct 31, 2024

Please help take a look at this very tricky problem, @DaveCTurner. I think the symptom is similar to the following discussion: https://discuss.elastic.co/t/massive-queue-in-refresh-thread-pool-on-a-single-node-causing-timeouts/280732/18. However, the cause here is not the java.base/sun.nio.fs.WindowsNativeDispatcher.DeleteFile0(Native Method) system call. Could it instead be a deadlock in the warm-up for nested queries? Or is the org.elasticsearch.index.engine.InternalEngine.ExternalReaderManager#refreshIfNeeded method deadlocked at internalReaderManager.maybeRefreshBlocking()?

@DaveCTurner
Contributor

DaveCTurner commented Oct 31, 2024

Thanks very much for your interest in Elasticsearch.

This appears to be a user question, and we'd like to direct these kinds of things to the Elasticsearch forum. If you can stop by there, we'd appreciate it.

Specifically, you haven't provided a clear sequence of steps to reproduce the problem, nor have you shared the information requested above (complete jstack output and hot threads) for us to try and analyse the problem as it's occurring on your machine. Your deadlock-detection tool seems to be finding false positives: as you mentioned, the process does actually complete so it cannot be deadlocked. Moreover you're using a version which is over a year and a half old, with no other reports of similar issues, so the overwhelmingly likely probability is that there's something wrong in your environment rather than anything we can address in Elasticsearch. Unfortunately there's no action we can take here, so this is not the right place to continue this discussion.

There's an active community in the forum that should be able to help get an answer to your question. As such, I hope you don't mind that I close this.

@DaveCTurner DaveCTurner closed this as not planned Won't fix, can't repro, duplicate, stale Oct 31, 2024
@elastic elastic locked as off-topic and limited conversation to collaborators Oct 31, 2024