
using KinesisAsyncClient: An exceptionCaught() event was fired. java.io.IOException: The channel was closed before the protocol could be determined. #2914


Closed
filipglojnari opened this issue Dec 13, 2021 · 7 comments
Labels: bug (This issue is a bug.), closed-for-staleness, response-requested (Waiting on additional info and feedback. Will move to "closing-soon" in 10 days.)

Comments

@filipglojnari

Describe the bug

The exception java.io.IOException: The channel was closed before the protocol could be determined. is thrown when using the AWS SDK KinesisAsyncClient. We start multiple threads that all publish to the Kinesis stream via KinesisAsyncClient.putRecord(). After the application has been running for some time (~6h), we see the warning logs, followed by the exception that causes them.

WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

java.io.IOException: The channel was closed before the protocol could be determined.
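
The report does not include the publishing code; the sketch below only illustrates the usage pattern described above (one shared KinesisAsyncClient, putRecord() called from many application threads). The region, stream name, and partition-key scheme are placeholders, not values from the report.

    import java.util.concurrent.CompletableFuture;

    import software.amazon.awssdk.core.SdkBytes;
    import software.amazon.awssdk.regions.Region;
    import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
    import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;
    import software.amazon.awssdk.services.kinesis.model.PutRecordResponse;

    public class KinesisPublisher {

        // One shared async client for the whole application; it uses the
        // SDK's default Netty HTTP client under the hood.
        private final KinesisAsyncClient kinesis = KinesisAsyncClient.builder()
                .region(Region.EU_WEST_1)              // placeholder region
                .build();

        // Called concurrently from many application threads.
        public CompletableFuture<PutRecordResponse> publish(String payload) {
            PutRecordRequest request = PutRecordRequest.builder()
                    .streamName("my-stream")           // placeholder stream name
                    .partitionKey(Long.toString(System.nanoTime()))
                    .data(SdkBytes.fromUtf8String(payload))
                    .build();
            return kinesis.putRecord(request);
        }
    }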

Expected behavior

This should not be thrown to the user.

Current behavior

The warning and the exception appear after publishing to the Kinesis stream from a multi-threaded environment for longer than ~6h, with nothing in our code indicating what caused the exception. Full message and stack trace:

2021-11-02 09:46:59,487 4017786 [aws-java-sdk-NettyEventLoop-1-1] WARN i.n.channel.DefaultChannelPipeline - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.

  java.io.IOException: The channel was closed before the protocol could be determined.
          at software.amazon.awssdk.http.nio.netty.internal.http2.Http2SettingsFrameHandler.channelUnregistered(Http2SettingsFrameHandler.java:58)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:198)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:184)
          at io.netty.channel.AbstractChannelHandlerContext.fireChannelUnregistered(AbstractChannelHandlerContext.java:177)
          at io.netty.channel.DefaultChannelPipeline$HeadContext.channelUnregistered(DefaultChannelPipeline.java:1388)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:198)
          at io.netty.channel.AbstractChannelHandlerContext.invokeChannelUnregistered(AbstractChannelHandlerContext.java:184)
          at io.netty.channel.DefaultChannelPipeline.fireChannelUnregistered(DefaultChannelPipeline.java:821)
          at io.netty.channel.AbstractChannel$AbstractUnsafe$8.run(AbstractChannel.java:827)
          at io.netty.util.concurrent.AbstractEventExecutor.safeExecute(AbstractEventExecutor.java:164)
          at io.netty.util.concurrent.SingleThreadEventExecutor.runAllTasks(SingleThreadEventExecutor.java:472)
          at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:500)
          at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989)
          at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
          at java.base/java.lang.Thread.run(Thread.java:834)

Steps to Reproduce

Publish to the Kinesis stream via KinesisAsyncClient.putRecord() from a multi-threaded environment for some time (~6h); the warning will eventually be written out.
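
A minimal long-running repro harness along those lines could look like the sketch below; the thread count, stream name, and run duration are assumptions, not values from the report.

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.TimeUnit;

    import software.amazon.awssdk.core.SdkBytes;
    import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;
    import software.amazon.awssdk.services.kinesis.model.PutRecordRequest;

    public class KinesisLongRunRepro {

        public static void main(String[] args) throws InterruptedException {
            KinesisAsyncClient kinesis = KinesisAsyncClient.create();
            ExecutorService pool = Executors.newFixedThreadPool(8);   // arbitrary thread count

            // Each thread publishes records in a loop; the warning was
            // reportedly observed only after ~6 hours of continuous publishing.
            for (int i = 0; i < 8; i++) {
                final int threadId = i;
                pool.submit(() -> {
                    long seq = 0;
                    while (!Thread.currentThread().isInterrupted()) {
                        PutRecordRequest request = PutRecordRequest.builder()
                                .streamName("my-stream")              // placeholder stream name
                                .partitionKey(threadId + "-" + seq++)
                                .data(SdkBytes.fromUtf8String("payload"))
                                .build();
                        try {
                            // join() keeps the loop simple; real code would handle
                            // the CompletableFuture asynchronously.
                            kinesis.putRecord(request).join();
                        } catch (RuntimeException e) {
                            e.printStackTrace();                      // keep publishing on failure
                        }
                    }
                });
            }

            Thread.sleep(TimeUnit.HOURS.toMillis(8));                 // run long enough to reproduce
            pool.shutdownNow();
            kinesis.close();
        }
    }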

Possible Solution

Possible connected issue:
#2713
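
Not something confirmed in this thread, but since the stack trace originates in Http2SettingsFrameHandler (the HTTP/2 settings handshake of the Netty client), one possible mitigation to try while investigating is pinning the client to HTTP/1.1, as in the sketch below. Whether this actually avoids the warning is an assumption, not a verified fix.

    import software.amazon.awssdk.http.Protocol;
    import software.amazon.awssdk.http.nio.netty.NettyNioAsyncHttpClient;
    import software.amazon.awssdk.services.kinesis.KinesisAsyncClient;

    public class KinesisHttp1Client {

        public static KinesisAsyncClient create() {
            // Force HTTP/1.1 on the Netty async client so the HTTP/2 settings
            // handshake (where the reported IOException is raised) is skipped.
            // This gives up HTTP/2 multiplexing, so treat it as a diagnostic
            // or mitigation step rather than a fix.
            return KinesisAsyncClient.builder()
                    .httpClientBuilder(NettyNioAsyncHttpClient.builder()
                            .protocol(Protocol.HTTP1_1))
                    .build();
        }
    }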

Context

No response

AWS Java SDK version used

2.17.90

JDK version used

OpenJDK Runtime Environment AdoptOpenJDK (build 11.0.8+10)

Operating System and version

Ubuntu 18.04.5 LTS

@filipglojnari filipglojnari added bug This issue is a bug. needs-triage This issue or PR still needs to be triaged. labels Dec 13, 2021
@noahdembe

Have you resolved this issue? In a production environment I had the same thing happen (it has only happened once in a few months of running), but it's the same error and it stopped all Kinesis consumers - I'm guessing the whole thread stopped. I've been trying to find the root cause but have been unable to so far.

@debora-ito debora-ito self-assigned this Mar 23, 2022
@debora-ito
Member

@filipglojnari @noahdembe I'm sorry for taking this long to respond.

We made changes to the Netty client connection pool management recently; could you use a newer version of the SDK and confirm whether you still see the errors? The latest version is 2.17.154, but some significant changes started in version 2.17.101.

If you still see the errors we'll investigate.

@noahdembe which version of the SDK are you using?

@debora-ito debora-ito added response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 10 days. and removed needs-triage This issue or PR still needs to be triaged. labels Mar 23, 2022
@mndfcked

I got the same error with 2.17.156

@github-actions github-actions bot removed the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 10 days. label Mar 25, 2022
@debora-ito
Member

@mndfcked is your use case the same as described by @filipglojnari?

After the application has been running for some time (~6h), we see the warning logs, followed by the exception that causes them.

@mndfcked

@mndfcked is your use case the same as described by @filipglojnari?

After the application has been running for some time (~6h), we see the warning logs, followed by the exception that causes them.

Hi, sorry for the delay. Unfortunately not. We started getting the message right away when our application started. One thing to note here: we were using Kotlin in combination with coroutines.

@debora-ito
Member

@mndfcked Getting the error right at application startup is not good. Can you share repro code?

@debora-ito debora-ito added the response-requested Waiting on additional info and feedback. Will move to "closing-soon" in 10 days. label Sep 21, 2022
@github-actions

It looks like this issue has not been active for more than five days. In the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please add a comment to prevent automatic closure, or if the issue is already closed please feel free to reopen it.

@github-actions github-actions bot added closing-soon This issue will close in 4 days unless further comments are made. closed-for-staleness and removed closing-soon This issue will close in 4 days unless further comments are made. labels Sep 26, 2022
aws-sdk-java-automation added a commit that referenced this issue Mar 4, 2024
…6260b6e50

Pull request: release <- staging/75c4b575-53bc-4b5e-87ae-e2d6260b6e50