[Other] Allow for competing consumers to the same stream? How do we implement this? #13794
-
RabbitMQ version used: 4.0.3
How is RabbitMQ deployed? Community Docker image

Steps to reproduce the behavior in question:
We are currently using RabbitMQ streams for event messaging. Messages are published onto the stream with various headers (such as topic type and action type). We have multiple consumers that consume from the stream and use stream filtering to process only the messages they care about; each consumer cares about only a subset of the stream's messages. Thinking about the future, we may need to scale up some consumers. Vertical scaling is straightforward, but how would we go about horizontally scaling consumers in such a way that they do not process the same messages repeatedly? I assume that if we horizontally scale consumers, we will lose the ordering guarantee that streams provide, but for certain consumers we know this is a tradeoff we are able to make. The requirements/assumptions are:
A few ideas that I have:
Not sure if I'm describing the problem well, but I'm wondering if anyone has better ideas on horizontally scaling consumers?
-
Consumers on a stream are competing (with repeatable, non-destructive reads) by definition, unless Single Active Consumer is enabled. You already know about stream filtering (plus a blog post on the topic). There's nothing wrong with your idea of a "dispatching consumer" that would republish some messages, but I'm not sure I understand why a combination of regular (non-SAC) consumers would not be sufficient if you use a stream protocol client.
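One detail worth remembering with stream filtering: the server-side filter is Bloom-filter based, so it can deliver false positives, and every consumer still needs a client-side check on the filter value. A minimal sketch of that post-filter in Python (the `topic` header name and message shape are illustrative, not from a specific client library):

```python
# Sketch of the client-side post-filter every filtering stream consumer
# needs: server-side stream filtering is probabilistic (Bloom filter),
# so the consumer re-checks each delivered message's filter value.

def wants(message: dict, topics: set) -> bool:
    """Return True if this consumer instance should process the message.

    `message` stands in for a delivered stream message; real clients
    expose application properties/headers on the message object.
    """
    return message.get("topic") in topics

# Each consumer instance declares the subset of topics it handles.
consumer_topics = {"order.created", "order.cancelled"}

deliveries = [
    {"topic": "order.created", "body": "..."},
    {"topic": "invoice.paid", "body": "..."},  # possible Bloom false positive
]

processed = [m for m in deliveries if wants(m, consumer_topics)]
# Only the "order.created" message survives the client-side check.
```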
-
RabbitMQ super streams may be an option, although you have to pre-partition your workload, so you won't be able to scale consumption dynamically; you can, however, retain ordering based on the key you use for partition selection.
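The key-to-partition mapping is what preserves per-key ordering: the publishing client hashes the routing key to pick a partition, so all messages with the same key land on the same partition stream. A rough illustration, using `crc32` as a stand-in for the clients' hash-based routing strategy (partition count and key names are hypothetical):

```python
import zlib

# Partition count is fixed when the super stream is created; it does
# not grow dynamically, which is why consumption can't scale freely.
PARTITIONS = 3

def partition_for(routing_key: str, partitions: int = PARTITIONS) -> int:
    """Pick a partition index from the routing key.

    Stand-in for the stream clients' hash-based routing (real clients
    use their own hash); the invariant that matters is that the same
    key always maps to the same partition, preserving per-key order.
    """
    return zlib.crc32(routing_key.encode()) % partitions

# All events for one customer go to one partition, so their relative
# order holds even with a separate consumer per partition.
assert partition_for("customer-42") == partition_for("customer-42")
```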
If you want destructive consumption then streams, by definition, are not what you are looking for.
Although you can have publishers publish to a stream, then have a "dispatching app" relay some messages to N queues, each with C competing consumers.
I don't see anything wrong with this idea except that a single stream will outpace quite a few quorum queues, and such a setup will require a reasonable number of cores on each node (4 seems to be a realistic minimum to me given the parallelism expectations).
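The dispatching app described above boils down to one routing decision per stream message: pick which of the N quorum queues to republish to. A sketch of that decision (broker calls omitted; queue names, headers, and the stable hash are illustrative assumptions):

```python
from collections import defaultdict

N_QUEUES = 4  # number of quorum queues fed by the dispatcher (illustrative)

def stable_hash(key: str) -> int:
    # Deterministic stand-in hash; Python's built-in hash() is
    # randomized per process, which would break stable routing.
    return sum(key.encode())

def target_queue(message: dict) -> str:
    """Route a stream message to one of N quorum queues.

    Keyed routing keeps per-key ordering within each queue; switch to
    round-robin when ordering doesn't matter for the message type.
    """
    key = message.get("partition_key", "")
    return f"work-queue-{stable_hash(key) % N_QUEUES}"

# Simulate dispatching a batch read from the stream; in the real app
# each append would be a publish to the chosen quorum queue instead.
queues = defaultdict(list)
for msg in [{"partition_key": "a", "body": 1},
            {"partition_key": "a", "body": 2}]:
    queues[target_queue(msg)].append(msg)

# Both messages for key "a" land in the same queue, in publish order,
# where C competing consumers drain them destructively.
```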