Message rates drop to 0/s across all queues randomly #1836
-
As a quick answer: the problem may not be related to RabbitMQ as such. It could be an issue with your client applications or with the network between the clients and RabbitMQ. RabbitMQ monitoring might help you, but application monitoring would be crucial; after all, it is your applications that consume the messages, and they don't have to do so at a consistent rate.
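As a starting point for that kind of monitoring, here is a minimal sketch that polls the RabbitMQ management HTTP API and flags queues that have messages ready but no publish or ack activity. The hostname, port, and credentials below are placeholders, not values from this cluster; adjust them to your environment, and note that the per-queue rate fields are only present when management statistics are enabled.

```python
# Hypothetical sketch: poll the management API and flag queues where messages
# are backed up but nothing is flowing. This helps separate a broker-side stall
# from consumers that have disconnected or stopped acknowledging.
import time
import requests

MGMT_URL = "http://my-rabbit.example.com:15672"   # assumed management endpoint
AUTH = ("guest", "guest")                         # assumed credentials

def snapshot():
    # /api/queues returns one JSON object per queue, including the consumer
    # count and message_stats with publish/deliver/ack rates.
    resp = requests.get(f"{MGMT_URL}/api/queues", auth=AUTH, timeout=10)
    resp.raise_for_status()
    for q in resp.json():
        stats = q.get("message_stats", {})
        publish_rate = stats.get("publish_details", {}).get("rate", 0.0)
        ack_rate = stats.get("ack_details", {}).get("rate", 0.0)
        if publish_rate == 0.0 and ack_rate == 0.0 and q.get("messages", 0) > 0:
            # Messages are queued but nothing is moving: check whether
            # consumers are still attached before blaming the broker.
            print(f"{q['name']}: {q.get('messages')} msgs ready, "
                  f"{q.get('consumers', 0)} consumers, publish/ack rate 0/s")

if __name__ == "__main__":
    while True:
        snapshot()
        time.sleep(30)   # poll interval; tune for ~3,000 queues
```

If the consumer count stays non-zero while the rates sit at 0/s, look at the consumer applications and their logs; if the consumer count drops to zero during the stalls, look at connection churn and the network between the clients and the cluster.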
-
I have a RabbitMQ cluster running on OpenShift in a data center, with approximately 3,000 queues. The cluster was deployed with the RabbitMQ Cluster Operator using mostly default configuration and has been allocated sufficient resources.
Messages are being sent to the queues, but they are not being consumed at a consistent rate. At times there are around 500K messages in a queue, with the consumption rate fluctuating below 100 messages/s. However, at random intervals and with no apparent pattern, message throughput drops to 0/s, including incoming messages, deliveries, and acknowledgments.
The cluster has persistence enabled, which creates an LDEV on the SAN storage pool for each of the three node volumes. Disk IOPS for each volume reaches around 4,000, with some fluctuation. The limit for each LDEV is 33K IOPS, assuming a 50:50 read/write ratio.
Despite these issues, the RabbitMQ cluster pods do not report any warnings or errors related to this behavior. I would appreciate any insights on potential causes and troubleshooting steps.
Environment Details:
RabbitMQ Version: 3.13.7
Erlang Version: 26.2.5.9
Nodes in Cluster: 3