When using Kafka::Client#deliver_message, the producer is not able to recover when the broker that is the leader of the partition goes offline, even after the Kafka cluster assigns a new leader for the partition. This is true even if the method is called with retries enabled: all retries are exhausted and an error is finally raised.
I noticed that the Kafka::Producer interface does not have this issue: when the leader broker goes offline, subsequent retries fetch the new topology from the cluster, and the producer targets the newly assigned leader.
Upon inspection, both Kafka::Client#deliver_message and Kafka::Producer#deliver_messages_with_retries will mark the cluster as stale whenever they are not able to connect to the broker:
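(Paraphrased sketch of the shared error handling, not the exact gem source; operation, attempt and retries stand in for the gem's local variables.)

```ruby
begin
  # Attempt to produce to the partition leader currently recorded in the
  # cached cluster metadata.
  operation.execute
rescue Kafka::Error
  # On failure, the cached metadata is invalidated so it can be re-fetched.
  @cluster.mark_as_stale!

  raise if attempt >= retries + 1

  attempt += 1
  retry
end
```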
After the cluster is marked as stale, and upon subsequent retries, Kafka::Producer#deliver_messages_with_retries will trigger a metadata refresh on the cluster, as expected:
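(Again a paraphrased sketch; attempt, max_retries, operation, retry_backoff and @buffer stand in for the gem's internals.)

```ruby
attempt = 0

loop do
  attempt += 1

  # Because a failed attempt marks the cluster as stale, this call fetches
  # fresh metadata and picks up the newly elected partition leader.
  @cluster.refresh_metadata_if_necessary!

  operation.execute

  break if @buffer.empty?          # everything was delivered
  break if attempt > max_retries   # give up; the gem raises DeliveryFailed here

  sleep retry_backoff
end
```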
However, Kafka::Client#deliver_message does not trigger the metadata refresh. This means that its retries will always target the leader broker found in the current in-memory metadata cache, which does not account for the situation where that broker is no longer available.
I have a working fix that consists of merely adding @cluster.refresh_metadata_if_necessary! just before this line. I will submit it in a PR after I run more extensive tests against our live cluster.
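Sketched out, the change would sit in deliver_message's retry handling roughly like this; the placement below is my best guess at the spot, and the PR will have the exact location:

```ruby
begin
  operation.execute
rescue Kafka::Error
  @cluster.mark_as_stale!

  raise if attempt >= retries + 1

  attempt += 1

  # Proposed fix: re-fetch the now-stale metadata before retrying, so the next
  # attempt targets the newly elected leader instead of the dead cached broker.
  @cluster.refresh_metadata_if_necessary!

  retry
end
```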
Steps to reproduce
Not so trivial. You should have a cluster with at least 3 brokers running, so that it's able to run an election when one of the brokers is killed.
Leave this code running, and kill the broker that is the leader of partition 0:
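(Something along these lines; broker addresses, client id and topic name are placeholders, adjust them for your setup.)

```ruby
require "kafka"

# Seed the client with all three brokers so it can rediscover the cluster
# after one of them is killed.
kafka = Kafka.new(["kafka1:9092", "kafka2:9092", "kafka3:9092"], client_id: "deliver-message-repro")

i = 0
loop do
  i += 1

  # Produce to partition 0 with retries enabled, then kill that partition's
  # leader while this loop is running.
  kafka.deliver_message("message #{i}", topic: "test-topic", partition: 0, retries: 5)

  puts "delivered message #{i}"
  sleep 1
end
```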
Expected outcome
deliver_message retries, fetching the new topology from the cluster, and resumes producing messages to the newly elected leader.
Actual outcome
deliver_message retries, always trying to produce messages to the broker that was killed, eventually raising an error.