
Nest doesn't handle exceptions properly when http content length is exceeded #945

Closed
satishmallik opened this issue Sep 14, 2014 · 6 comments

Comments

@satishmallik

In my testing I am finding the following.

When the http content length is exceeded on Elasticsearch, there are 2 issues:

  1. Nest is not throwing the right exception to identify the http content length problem.
  2. If MaxRetry is set, Nest keeps on sending requests to Elasticsearch even though we know this is a non-retryable exception.

From ES I can see the following logs:
org.elasticsearch.common.netty.handler.codec.frame.TooLongFrameException: HTTP content length exceeded 104857600 bytes.
at org.elasticsearch.common.netty.handler.codec.http.HttpChunkAggregator.messageReceived(HttpChunkAggregator.java:169)
at org.elasticsearch.common.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:564)
at org.elasticsearch.common.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:791)

But when catching the Nest exception, it comes back with a KeepAliveFailure status and the error is
"The underlying connection was closed: A connection that was expected to be kept alive was closed by the server."

@Mpdreamz
Member

This behaviour is related to:

elastic/elasticsearch#2902

Without a fix for that in the Elasticsearch core, it's hard for us to differentiate between a request that is too large and a genuine transport layer exception that should be retried on a different node.

Closing this since we should see the correct behaviour as soon as the linked elasticsearch core issue is fixed.

Thanks a ton for opening this issue and alerting us to this behaviour @satishmallik 👍

Mpdreamz added a commit that referenced this issue Sep 15, 2014
…red so we can start nodes in desired states they should become relevant again
@Mpdreamz Mpdreamz reopened this Sep 15, 2014
@Mpdreamz
Member

Closed this a tad too early; there is one situation where we should exit early:

When not using connection pooling and setting max retries, we should not retry on connection exceptions:

// Single node, no connection pooling: with MaximumRetries set, a connection
// exception should fail fast rather than being retried against the same node.
var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
    .MaximumRetries(2);
var client = new ElasticClient(settings);

@Mpdreamz
Member

This is now implemented per 7e504eb

@satishmallik
Author

I want to add one point here. Nest knows the http.max_content_length. Post-serialization, if the total size is > max_content_length, Nest itself can throw an OutOfMemoryException or wrap it in some other defined exception. There are 2 benefits to it:

  1. It avoids network traffic which we know is bound to fail.
  2. Applications can use this status to break the batch down into smaller chunks, as sketched below.
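
For illustration, a minimal sketch of the kind of pre-flight check being suggested here, done in application code rather than inside NEST; the 100mb limit is an assumption matching the default http.max_content_length, and the EnsureUnderMaxContentLength helper is hypothetical:

// Sketch only: guard a serialized bulk payload against the assumed default
// http.max_content_length of 100mb before sending it over the wire.
static void EnsureUnderMaxContentLength(string bulkJson)
{
    const long maxContentLengthBytes = 100L * 1024 * 1024;
    long payloadBytes = System.Text.Encoding.UTF8.GetByteCount(bulkJson);

    if (payloadBytes > maxContentLengthBytes)
    {
        // Fail fast instead of sending a request the server is bound to reject,
        // so the caller can split the batch into smaller chunks.
        throw new System.InvalidOperationException(
            "Bulk payload exceeds http.max_content_length; split it into smaller batches.");
    }
}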

@Mpdreamz
Member

@satishmallik thanks for pursuing this (very much appreciated!)

As it stands now we do not know the active max_content_length; NEST does not do any magic like that, i.e. actively query cluster/node settings or mappings before doing indexing or searches, and we probably never will.

Once

elastic/elasticsearch#2902

is implemented though, the status code that'll be returned (413) can be used to break the request down into smaller chunks!
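
For illustration, splitting on that status code could be driven from application code along these lines; the BulkSplitter class and the sendBulk delegate are hypothetical placeholders for however the bulk request is issued and its HTTP status inspected:

using System;
using System.Collections.Generic;
using System.Linq;

public static class BulkSplitter
{
    // Recursively halve the batch whenever the server answers
    // 413 Request Entity Too Large.
    public static void IndexWithSplitting<T>(IList<T> batch, Func<IList<T>, int> sendBulk)
    {
        if (batch.Count == 0) return;

        int statusCode = sendBulk(batch);
        if (statusCode == 413 && batch.Count > 1)
        {
            int half = batch.Count / 2;
            IndexWithSplitting(batch.Take(half).ToList(), sendBulk);
            IndexWithSplitting(batch.Skip(half).ToList(), sendBulk);
        }
    }
}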

@sbochoa

sbochoa commented Apr 14, 2016

Is this solved yet? I need to know when this happens so I can handle it by breaking the request into smaller chunks. Is there any flag or option that I can set to just throw an exception?
