Trace information is missing when Exception is thrown from KafkaListener methods #1704
Comments
Also waiting for a fix. It was unexpected that the ErrorHandler does not get the original traceId context from Sleuth.
My requirement is: all logs related to a particular batch of processed Kafka records should share the trace context information. However, I'd like to start the trace earlier: just before the Kafka Consumer poll. Right now there is no mechanism to intercept that point. Currently there are a few scenarios that lose the trace context: …
It would be great if somehow all of these scenarios were smart enough to keep the original trace context.
@kdowbecki I wonder if the … I don't see an easy way to handle the … Similarly for …
@garyrussell Thanks for suggesting …
That is exactly why I suggested adding new methods to the interceptor. One called before the poll and the other after all processing is complete (including error handlers).
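A rough sketch of that proposal (the names `beforePoll`/`afterPoll` and the exact signatures are assumptions for illustration only; the thread below renames them before anything is merged):

```java
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;

// Hypothetical sketch of the proposal above; beforePoll/afterPoll are illustrative
// names, not the final API that was merged.
public interface BatchInterceptor<K, V> {

	// Existing behaviour: invoked with the records returned by a poll, before they
	// are handed to the listener (signature approximated from the 2.6.x line).
	ConsumerRecords<K, V> intercept(ConsumerRecords<K, V> records, Consumer<K, V> consumer);

	// Proposed: called before each Consumer.poll(), so a tracing interceptor can set
	// up thread-bound state that covers the whole poll cycle.
	default void beforePoll(Consumer<K, V> consumer) {
	}

	// Proposed: called after all processing for the poll is complete, including any
	// error handler, so the thread-bound state can be cleaned up only then.
	default void afterPoll(Consumer<K, V> consumer) {
	}
}
```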
@garyrussell That would work for me. Are the new methods planned for an upcoming release (e.g. tracked in some other open issue)? If not, is it fine if I raise a PR?
No, there is no other issue. The next 2.8 milestone is due Sept 20. Since this would not be a breaking change (assuming the new methods are default), we can back-port it to 2.7.x, the next of which is due the same day. Thanks for the offer of a PR. We need to enhance RecordInterceptor too.
@garyrussell Does kdowbecki@90c9138 look like what you were thinking about? I'd appreciate early feedback before I dive deeper into it; this is my first time contributing to Spring Kafka. I'm not fully getting the concept of sub-batches in …
Looks good to me, with a few suggestions.
I also think we should beef up the Javadocs, e.g. for … and for the "after" method: "Use this method to clean up any thread-bound resources set up by the interceptor". I think it would be best to proceed to a PR so we can comment directly there (it's much easier for us that way). Thanks
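To make that Javadoc guidance concrete, a hypothetical implementation on top of the sketched `beforePoll`/`afterPoll` hooks might set up and tear down a thread-bound resource (MDC here, purely for illustration) like this:

```java
import java.util.UUID;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.slf4j.MDC;

// Illustration only: uses the hypothetical beforePoll/afterPoll hooks from the sketch
// above, with MDC standing in for the thread-bound resource a tracer would manage.
public class MdcBatchInterceptor<K, V> implements BatchInterceptor<K, V> {

	@Override
	public void beforePoll(Consumer<K, V> consumer) {
		// Set up thread-bound state before the poll so every log statement emitted
		// while the returned records are processed (listener, retries, error handler)
		// carries the same identifier.
		MDC.put("pollId", UUID.randomUUID().toString());
	}

	@Override
	public void afterPoll(Consumer<K, V> consumer) {
		// "Use this method to clean up any thread-bound resources set up by the
		// interceptor" -- it only runs after all processing, including error handlers.
		MDC.remove("pollId");
	}

	@Override
	public ConsumerRecords<K, V> intercept(ConsumerRecords<K, V> records, Consumer<K, V> consumer) {
		return records;
	}
}
```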
Thanks for the suggestions, I applied them and opened a PR: #1912. Let's move the discussion there.
* GH-1704: Broader Batch/RecordInterceptor
* GH-1704: More pessimistic finally
* Precise logger message
* Renaming methods and improving Javadoc. Still need to work on race condition in the tests because beforePoll() and afterRecordsProcessed() is called while the test is setting up data.
* Fixing tests
* Fixing tests
* Moving beforePoll() and adding InOrder tests
* Reverting integration tests
* Extracting BeforeAfterPollProcessor
* Renaming afterPoll to clearThreadState
* Fixing compiler warnings in tests
@garyrussell One interesting aspect of this issue is …
Right; the …
Resolves spring-projects#1704
- Extract `ThreadStateProcessor` and `ConsumerAwareThreadStateProcessor`
- Avoid all the if/else tests around calling the interceptor common methods
- Add `afterRecord` to the record interceptor so sleuth can defer cleaning up until after the error handler.
Polishing PR: #1946
Resolves #1704
- Extract `ThreadStateProcessor`
- Avoid all the if/else tests around calling the interceptor common methods
- Add `afterRecord` to the record interceptor so sleuth can defer cleaning up until after the error handler.
* Remove ThreadStateProcessor (TSP), rename CATSP to TSP.
  - confusing for interceptor implementors
  - no components ever call `setupThreadState()`; ARP and CEH build state during normal processing.
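Based on the commit messages above, the usage pattern this enables for a tracing interceptor is roughly the following. This is a sketch only: `afterRecord` is the hook named in the commits, but the class does not implement the real Spring Kafka interface here, and the signatures are assumptions rather than a verified copy of the released API.

```java
import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.common.header.Header;
import org.slf4j.MDC;

// Sketch only: in a real project this would implement Spring Kafka's RecordInterceptor
// (exact signatures vary by version); afterRecord is the hook that runs after the
// error handler, per the commits above.
public class TracingRecordInterceptor<K, V> {

	public ConsumerRecord<K, V> intercept(ConsumerRecord<K, V> record, Consumer<K, V> consumer) {
		// Open the per-record scope here (a real tracer would continue the trace from
		// the record headers) and leave it open while the listener runs.
		MDC.put("traceId", extractTraceId(record));
		return record;
	}

	public void afterRecord(ConsumerRecord<K, V> record, Consumer<K, V> consumer) {
		// Runs after the listener AND after any error handler, so error-handler logs
		// still see the trace context; clean up the thread-bound state only now.
		MDC.remove("traceId");
	}

	private String extractTraceId(ConsumerRecord<K, V> record) {
		// Hypothetical helper for the sketch; Sleuth/Brave would do the real extraction.
		Header header = record.headers().lastHeader("traceId");
		return header != null ? new String(header.value()) : "unknown";
	}
}
```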
Affects Version(s): All currently supported versions (including 2.6.5)

🐞 Bug report
The issue was originally reported for Spring Cloud Sleuth, but it seems it is not a Sleuth issue (and I can't transfer issues across orgs), so I'm opening this one to track it in the right place.
The original issue is this: spring-cloud/spring-cloud-sleuth#1659 opened by @m-grzesiak
It contains the details to understand what is going on, a sample project that reproduces the issue and some investigation details. The issue is very similar to spring-cloud/spring-cloud-sleuth#1660
Description
When an exception is thrown from a method annotated with `@KafkaListener`, all tracing information is lost by the time it reaches the error handler, so error logs do not contain tracing information.

Sample: https://github.com/jonatan-ivanov/sleuth-gh-1659

The sample project uses `spring-kafka` 2.6.4, but the issue must be present in the latest `spring-kafka` (2.6.5) too.

Investigation details: spring-cloud/spring-cloud-sleuth#1659 (also see spring-cloud/spring-cloud-sleuth#1660)
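For reference, the failing scenario boils down to something like the sketch below; the class, topic, and group names are made up for illustration, see the linked sample project for the actual reproduction.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;

// Minimal sketch of the failing scenario; names are illustrative.
@Component
public class FailingListener {

	@KafkaListener(topics = "demo-topic", groupId = "demo-group")
	public void onMessage(String message) {
		// Log statements here would still carry traceId/spanId, because Sleuth sets up
		// the span around the listener invocation.
		throw new IllegalStateException("boom: " + message);
		// By the time the exception reaches the container's error handler (e.g. the
		// SeekToCurrentErrorHandler), the span has been closed, so the error handler's
		// log statements no longer contain traceId/spanId.
	}
}
```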
Possible fix (breaking change): should be similar to spring-projects/spring-amqp#1287