Under worst-case conditions, a request that is bad for the system, and therefore takes a long time to process, often gets repeated as a user (or even an automated system) becomes impatient waiting for the response. If an exactly duplicated request arrives at the same node to run against the same shard (otherwise it is not an exact match, of course), then in all but one scenario it would be ideal to simply reuse the eventual response from the original request.
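To make "exact duplicate" concrete, here is a minimal sketch of how a deduplication key could be derived. All names here are hypothetical, not Elasticsearch code: the idea is just that two shard-level requests only collide when the target index, shard, and byte-for-byte request body are all identical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Base64;

// Hypothetical key derivation: a request only deduplicates against another
// request when it targets the same shard with an identical body.
final class ShardRequestKey {
    static String of(String index, int shardId, String requestBody) {
        try {
            MessageDigest digest = MessageDigest.getInstance("SHA-256");
            // The separator prevents ambiguous concatenations, e.g.
            // ("a", 11, "b") colliding with ("a1", 1, "b").
            String material = index + '\u0000' + shardId + '\u0000' + requestBody;
            byte[] hash = digest.digest(material.getBytes(StandardCharsets.UTF_8));
            return Base64.getEncoder().encodeToString(hash);
        } catch (NoSuchAlgorithmException e) {
            throw new AssertionError("SHA-256 is guaranteed by the JDK", e);
        }
    }
}
```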
Unfortunately, the exact-match nature of this prevents a slew of requests from benefiting from it (at least until 3.0, when we can start to recognize inner parts better and drop irrelevant filters), but it provides a lot of protection against a very common problem: a user hitting refresh and turning a recoverable situation into a downward spiral that ends in an OutOfMemoryError if you are lucky or, worse, permanent GCs that never recover.
The scenario where this is less ideal is a request that expects the most up-to-date information (e.g., details from segments that may have been created since the original request started running). Since search is not real time, and the performance of the system is at stake because of a presumably slow, repeated request, this seems like an edge case to me. Perhaps a flag to request the existing behavior (just create another shard-level request) would be enough to let those cases "demand" a fresh execution.
It should be as simple as adding a new response listener to the original request, which entirely sidesteps spinning up another thread to do the exact same work, but "simple" ends there: I do not believe we provide any way to easily find and attach to other requests. As a result, this is related to task management in #6914.
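As a sketch of that listener-attachment idea (all names hypothetical; this is not an existing Elasticsearch API), a per-node map keyed by the request fingerprint can let the first request execute while exact duplicates merely register a listener on it and share its response. The opt-out flag from the previous paragraph would amount to the caller skipping `register` and executing directly.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Consumer;

// Hypothetical in-flight deduplicator: the first request for a key runs the
// search; exact duplicates attach a listener and receive the same response.
final class InFlightDeduplicator<K, R> {
    private static final class Waiters<R> {
        final List<Consumer<R>> listeners = new ArrayList<>();
        R response;   // set once the original request completes
        boolean done;
    }

    private final Map<K, Waiters<R>> inFlight = new ConcurrentHashMap<>();

    /** Returns true if the caller is the original and must run the search. */
    boolean register(K key, Consumer<R> listener) {
        Waiters<R> fresh = new Waiters<>();
        // putIfAbsent is atomic: only the first request for this key wins.
        Waiters<R> existing = inFlight.putIfAbsent(key, fresh);
        if (existing == null) {
            synchronized (fresh) {
                fresh.listeners.add(listener);
            }
            return true;
        }
        synchronized (existing) {
            if (existing.done) {
                // The original finished while we were registering;
                // deliver its response immediately.
                listener.accept(existing.response);
            } else {
                existing.listeners.add(listener);
            }
        }
        return false;
    }

    /** Called by the original request once its response is ready. */
    void complete(K key, R response) {
        Waiters<R> waiters = inFlight.remove(key);
        if (waiters == null) {
            return;
        }
        List<Consumer<R>> toNotify;
        synchronized (waiters) {
            waiters.done = true;
            waiters.response = response;
            toNotify = new ArrayList<>(waiters.listeners);
        }
        // Notify outside the lock so slow listeners cannot block registration.
        toNotify.forEach(l -> l.accept(response));
    }
}
```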
I think this is at least partly solved by the request cache now? If the query is cacheable, then subsequent requests for the exact same search will either wait on the single cached future for the request or just retrieve the cached result if the query has finished executing.
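For illustration, the "single cached future" behavior described above might look like the following sketch (hypothetical names; not the actual request cache implementation), using `computeIfAbsent` so that only one caller ever runs a given cacheable query:

```java
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical future-based cache: duplicates of an in-flight query wait on
// the same future; duplicates of a finished query read the cached result.
final class FutureRequestCache<K, V> {
    private final Map<K, CompletableFuture<V>> cache = new ConcurrentHashMap<>();

    CompletableFuture<V> get(K key, Supplier<V> compute) {
        // computeIfAbsent installs exactly one future per key, so the
        // expensive query runs at most once however many duplicates arrive.
        return cache.computeIfAbsent(key, k -> CompletableFuture.supplyAsync(compute));
    }

    // A real cache would also invalidate on refresh so that newly created
    // segments become visible to later requests.
    void invalidate(K key) {
        cache.remove(key);
    }
}
```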