The Jest Elasticsearch health indicator uses a potentially heavy call to check its status #9379
I agree, the output of that command can be quite large. It seems that the […]. Do you know of other potential endpoints that we could use?
In the case of Searchly, all administrative features of Elasticsearch are restricted from the API, according to their documentation. That doesn't leave us with a lot of room, does it? I'm sure that Searchly has its reasons to do so, but that is one case out of millions of users, and many of them are not developing in shared environments with special restrictions. I still believe that the right endpoint to call by default should be […], instead of having […].
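To make the "right endpoint by default, overridable for restricted environments" idea concrete, here is a minimal sketch of resolving the health-check path from an optional user-supplied value. All names are hypothetical — Spring Boot does not expose such a property out of the box, and the default shown is illustrative only.

```java
// Hypothetical helper: picks the Elasticsearch endpoint used by the health
// indicator. The class name, method, and default path are illustrative.
final class EsHealthEndpointResolver {

    // Lightweight default: small, fixed-size response regardless of index count.
    static final String DEFAULT_ENDPOINT = "/_cluster/health";

    private EsHealthEndpointResolver() {
    }

    /**
     * Returns the user-configured endpoint if present, otherwise the default.
     * A leading slash is added if the configured value lacks one.
     */
    static String resolve(String configured) {
        if (configured == null || configured.trim().isEmpty()) {
            return DEFAULT_ENDPOINT;
        }
        String value = configured.trim();
        return value.startsWith("/") ? value : "/" + value;
    }
}
```

A setup like Searchly could then point the indicator at a path its plan allows, while unrestricted clusters keep the default.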
I think we should treat this one as a bug, so I've targeted it at 1.5.4. We've only got a few days before that release, so we might not get to it in time. It still feels like a solution in 1.5.x would be useful.
Note: it's not a case of Searchly vs. the rest of the world. It's more a case of Elasticsearch instances running on premise vs. in the cloud (the official elastic.co cloud offering or the AWS one). Even companies running their own cluster can have X-Pack enabled and disable cluster info access from applications.
The team just discussed this topic, and it seems that changing the default for all setups out there might not be the solution we're looking for. Having a strategy that uses one or the other might be more useful; in fact, the […]. Right now, defining your own bean named […] takes precedence over the auto-configured one. We could consider switching strategies and providing different information automatically, but I'm not aware of any clear signal in the configuration that says whether you've got access to those endpoints or not (besides trying and possibly failing). Any idea about that, @rodol-fo? Let's keep this issue open until we've figured this out.
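A rough sketch of what such a strategy switch could look like (all names hypothetical; this is not the actual Spring Boot API): use the detailed cluster check when cluster-level endpoints are known to be reachable, and fall back to a cheap reachability check otherwise.

```java
// Hypothetical endpoint-selection strategy for the health indicator.
// Names are illustrative; Spring Boot does not ship this class.
final class EsHealthStrategy {

    enum Mode { CLUSTER_INFO, SIMPLE_PING }

    /**
     * Chooses the check to perform. Without a reliable signal in the
     * configuration (as discussed above), the only honest input is whether
     * cluster-level endpoints are known to be accessible.
     */
    static Mode choose(boolean clusterEndpointsAccessible) {
        return clusterEndpointsAccessible ? Mode.CLUSTER_INFO : Mode.SIMPLE_PING;
    }

    /** Maps the chosen mode to the HTTP path the indicator would call. */
    static String endpointFor(Mode mode) {
        switch (mode) {
            case CLUSTER_INFO:
                return "/_cluster/health";
            case SIMPLE_PING:
                return "/"; // root endpoint: cheap and rarely restricted
            default:
                throw new IllegalArgumentException("Unknown mode: " + mode);
        }
    }
}
```

The open question in the comment above remains the hard part: there is no configuration flag to feed into `choose(...)` short of probing the endpoint and handling a failure.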
@rodol-fo Your input would be useful here. Please see the comment from @bclozel above.
Hello @bclozel, I think you are right. The only bulletproof way of knowing if a particular ES endpoint is available is by trying a request on it. However, in my experience, I'm not so sure about […]. I hope this helps.
Thanks, @rodol-fo. I think I've changed my opinion on this. Having spent some more time reading around, and reading the Elasticsearch documentation in particular, I now think that […]. Perhaps we should consider leaving this as-is in 1.5 and making a change in 2.0 instead?
That last option has my vote.
It seems the […]. In other words, we can't use the Java native transport API to fetch that information. The Java native transport should be deprecated in the future (see here and here). With all that in mind, a couple of options: […]
@snicoll, @wilkinsona, @philwebb, @rodol-fo - let me know what you think and if I've missed something. |
I don't think we should do much in 1.5.x except perhaps add a warning to the documentation. For 2.x, I guess option 2 makes the most sense in the long-term, but it's certainly something we can tackle in 2.1 if necessary. |
Depends on #12600 |
#12600 is now fixed, and we can try a global approach for that change now. |
The ES REST client will be supported by Spring Data Elasticsearch in the next Moore release train. As of #12600, we're auto-configuring those clients in Spring Boot. I'm closing this issue in favour of #14914, since Jest now supports the cluster endpoint. This aligns the Jest health indicator with what we've got for the transport client already. We'll figure the rest out in #15008.
The spring-boot-actuator Elasticsearch health check performs a call to `/_all/_stats`, which potentially comes with a big response. In my case, this response is currently ~8 MB, and regular health checks were causing a huge load on the system. We create indices on an hourly basis. IMHO, `/_cat/health` is better suited for this task.
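For reference, a minimal sketch of how a health indicator could interpret the suggested `/_cat/health` output. This is not Spring Boot's actual implementation; it assumes the default column layout of the cat health API (epoch, timestamp, cluster, status, ...), where the status column is "green", "yellow", or "red".

```java
// Minimal sketch (not the Spring Boot implementation): map one line of
// /_cat/health output to an UP/DOWN-style status. Assumes the default
// column order: epoch, timestamp, cluster, status, ...
final class CatHealthParser {

    /** Extracts the status column ("green", "yellow", or "red"). */
    static String status(String catHealthLine) {
        String[] columns = catHealthLine.trim().split("\\s+");
        if (columns.length < 4) {
            throw new IllegalArgumentException("Unexpected _cat/health line: " + catHealthLine);
        }
        return columns[3];
    }

    /** "red" maps to DOWN; "green" and "yellow" both count as UP. */
    static boolean isUp(String catHealthLine) {
        return !"red".equals(status(catHealthLine));
    }
}
```

The response is a single short line regardless of how many indices exist, which is the point of the suggestion: the cost of the check no longer grows with hourly index creation.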