create_time instrumentation change in 1.1.0 leading to consumer crash #836


Closed
jpaskhay opened this issue Jun 10, 2020 · 3 comments · Fixed by #841

Comments

@jpaskhay
If this is a bug report, please fill out the following:

  • Version of Ruby: 2.5.8
  • Version of Kafka: 2.2.2
  • Version of ruby-kafka: 1.1.0

Confirmed the bug is in the current master branch.

Steps to reproduce

We are running a Fluentd image with the latest fluent-plugin-kafka (0.13.0) and ruby-kafka (1.1.0) gems, using the kafka_group plugin to consume messages from Kafka. This should happen with any consumer client that is not running on Rails.

Expected outcome

Messages would be consumed properly.

Actual outcome

Fails to consume with the below stacktrace:

2020-06-10 19:53:52 +0000 [error]: #0 unexpected error during consuming events from kafka. Re-fetch events. error="undefined method `try' for #<Kafka::FetchedMessage:0x00005607411170a8>"
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/consumer.rb:319:in `block (2 levels) in each_batch'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/consumer.rb:310:in `each'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/consumer.rb:310:in `block in each_batch'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/consumer.rb:414:in `block in consumer_loop'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/instrumenter.rb:23:in `instrument'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/instrumenter.rb:35:in `instrument'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/consumer.rb:412:in `consumer_loop'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/ruby-kafka-1.1.0/lib/kafka/consumer.rb:307:in `each_batch'
  2020-06-10 19:53:52 +0000 [error]: #0 /usr/lib/ruby/gems/2.5.0/gems/fluent-plugin-kafka-0.13.0/lib/fluent/plugin/in_kafka_group.rb:230:in `run'
2020-06-10 19:53:52 +0000 [warn]: #0 Stopping Consumer
2020-06-10 19:53:52 +0000 [warn]: #0 Could not connect to broker. retry_time:1. Next retry will be in 30 seconds

It appears the .try method is a Rails (ActiveSupport) extension rather than pure Ruby. It is used in the line below, which was added as part of #811:
last_create_time: batch.messages.last.try(:create_time),

I believe you should be able to use batch.messages.last.create_time directly, unless I'm missing a reason the original change needed to read it in an optional manner.
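The failure mode can be reproduced in plain Ruby. This is a minimal sketch; FetchedMessage below is a hypothetical stand-in for Kafka::FetchedMessage, not the real class. Without ActiveSupport loaded, Object has no try method, so calling it raises the NoMethodError seen in the stacktrace:

```ruby
# Hypothetical stand-in for Kafka::FetchedMessage.
FetchedMessage = Struct.new(:create_time)

msg = FetchedMessage.new(Time.at(0))

# The plain call works in core Ruby when the receiver is non-nil.
msg.create_time  # => 1970-01-01 00:00:00 +0000

# `try` is an ActiveSupport extension, not core Ruby, so outside
# Rails it is simply undefined on the object:
msg.respond_to?(:try)  # => false
```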

@dasch
Contributor

dasch commented Jun 17, 2020

Ah!

Can you create a PR that changes that line to batch.messages.any? && batch.messages.last.create_time?

@Darhazer
Contributor

That would yield false when there are no messages, rather than nil.
Maybe batch.messages.last && batch.messages.last.create_time (perhaps extracting it to a variable).
If it weren't for Ruby 2.2 support, it could simply be batch.messages.last&.create_time
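The difference between the two suggestions can be shown with an empty message list (a sketch; messages here stands in for batch.messages, and Msg is a hypothetical message type):

```ruby
# Hypothetical stand-in for a fetched message.
Msg = Struct.new(:create_time)

messages = []  # an empty batch

# `any? && ...` short-circuits and returns the first falsy operand,
# so an empty batch produces `false` rather than `nil`:
messages.any? && messages.last.create_time  # => false

# The nil guard returns `nil` for an empty batch, as intended:
last = messages.last
last && last.create_time                    # => nil

# On Ruby >= 2.3, the safe-navigation operator is equivalent:
messages.last&.create_time                  # => nil
```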

@dasch
Contributor

dasch commented Jun 17, 2020

You're right. I think batch.messages.last && batch.messages.last.create_time is fine 👍
