Nest doesn't handle exceptions properly when http content length exceeds #945
This behaviour is related to an Elasticsearch core issue: without a fix for that in the Elasticsearch core it's hard for us to differentiate between a request that is too large and a genuine transport layer exception that should be retried on a different node. Closing this since we should see the correct behaviour as soon as the linked Elasticsearch core issue is fixed. Thanks a ton for opening this issue and alerting us to this behaviour @satishmallik 👍 |
Closed this a tad too early; there is one situation where we should early exit: when not using connection pooling and setting max retries, we should not retry on connection exceptions:
|
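For illustration, a minimal sketch of the two configurations discussed above, using a later NEST/Elasticsearch.Net API (names may differ in the client version this issue was filed against):

```csharp
using System;
using Elasticsearch.Net;
using Nest;

// Single node, no failover pool: a connection exception has no other node
// to be retried on, so retrying only repeats the same failure.
var singleNode = new ConnectionSettings(new Uri("http://localhost:9200"))
    .MaximumRetries(3);

// With a pool of several nodes, retrying on a different node makes sense.
var pool = new StaticConnectionPool(new[]
{
    new Uri("http://node1:9200"),
    new Uri("http://node2:9200"),
});
var pooled = new ConnectionSettings(pool).MaximumRetries(3);

var client = new ElasticClient(pooled);
```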
This is now implemented per 7e504eb |
I want to add one point here. Nest knows the http.max_content_length. Post serialization, if the total size is > max_content_length, Nest itself can throw an OutOfMemory exception or can wrap it in some other defined exception. There are two benefits to it:
|
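As a rough sketch of the pre-flight check proposed above (a hypothetical helper; the limit has to be supplied by the caller, since, as noted in the next comment, the client does not know the server's active http.max_content_length):

```csharp
using System;
using System.Text;

public static class ContentLengthGuard
{
    // Hypothetical pre-flight check: the caller serializes the request body
    // (e.g. the NDJSON for a bulk call) and supplies the limit, because the
    // client cannot read the server's active http.max_content_length.
    public static void EnsureWithinLimit(string serializedBody, long maxContentLength)
    {
        long size = Encoding.UTF8.GetByteCount(serializedBody);
        if (size > maxContentLength)
            throw new InvalidOperationException(
                $"Request body is {size} bytes, which exceeds http.max_content_length ({maxContentLength} bytes).");
    }
}

// Usage, assuming the server still runs the default 100mb limit:
// ContentLengthGuard.EnsureWithinLimit(bulkJson, 100L * 1024 * 1024);
```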
@satishmallik thanks for pursuing this (very much appreciated!) As it stands now we do not know the server's active http.max_content_length. Once the linked Elasticsearch issue is implemented, though, the status code that'll be returned (413) can be used to break the request down into smaller chunks! |
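Once the server does return 413 for oversize requests, splitting could look roughly like this (a hedged sketch; `IndexInChunks` is a hypothetical helper and the property names follow a later NEST API):

```csharp
using System.Collections.Generic;
using System.Linq;
using Nest;

public static class ChunkedIndexer
{
    // 413 = Request Entity Too Large: split the batch in half and retry each half.
    public static void IndexInChunks<T>(IElasticClient client, IReadOnlyList<T> docs) where T : class
    {
        var response = client.Bulk(b => b.IndexMany(docs));

        if (response.ApiCall.HttpStatusCode == 413 && docs.Count > 1)
        {
            var half = docs.Count / 2;
            IndexInChunks(client, docs.Take(half).ToList());
            IndexInChunks(client, docs.Skip(half).ToList());
        }
    }
}
```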
Is this solved yet? I need to know when this happens so I can handle it by breaking the request into smaller chunks. Is there any flag or option that I can set to just throw an exception? |
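For reference, later versions of the client expose ThrowExceptions() on the connection settings, which makes failed calls throw instead of returning an invalid response; whether the oversize case surfaces with a useful status still depends on the server-side fix discussed above:

```csharp
using System;
using Nest;

// ThrowExceptions() makes failed calls throw instead of returning a response
// with IsValid == false; it does not by itself distinguish the oversize case.
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .ThrowExceptions();

var client = new ElasticClient(settings);
```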
In my testing I am finding the following.
When the HTTP content length is exceeded on Elasticsearch, there are two issues.
From ES I can see the following logs:
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 104857600 bytes.
at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:169)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)
But when catching the NEST exception, it comes back with a KeepAliveFailure status and the error is:
"The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."