
rabbitmq hangs from a few seconds to several minutes #427


Closed
cuiweixie opened this issue Nov 18, 2015 · 1 comment
Labels
mailing list material This belongs to the mailing list (rabbitmq-users on Google Groups)

Comments

@cuiweixie

In my testing there is a RabbitMQ cluster (with mirrored queues) consisting of server node A and server node B, with two vhosts, test5a and test5b. Each vhost has one direct exchange and one queue. The producer publishes messages to the cluster with publisher confirms enabled, and the consumer receives messages and sends manual acks. During the testing I ran into two problems:

1 One-queue test: when the publish rate exceeds 2000 messages per second per queue, RabbitMQ hangs periodically, from a few seconds to several minutes. While hung it neither accepts messages from the producer nor delivers messages to the consumer, but afterwards it resumes receiving messages normally.
2 Two-queue test: when the backlog of unconsumed messages in either queue described above exceeds 100K, RabbitMQ hangs periodically, from a few seconds to several minutes, just as in problem 1.

Under normal conditions the throughput is about 2500 msg/s in the one-queue test and about 4000 msg/s in the two-queue test.

Questions:
1 Can anybody help me solve the problems above?
2 When the publish rate exceeds the consume rate, how does traffic flow control work?
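RabbitMQ throttles fast publishers with credit-based flow control between its internal processes: a sender starts with a fixed amount of credit, spends one unit per message, blocks when credit runs out, and is granted more only as the downstream process drains messages. The toy model below illustrates that mechanism; the class names and the `{initial, more}` numbers are illustrative assumptions, not RabbitMQ's actual implementation.

```python
from collections import deque

# Toy model of credit-based flow control between an upstream process
# (e.g. a channel) and a downstream process (e.g. a queue).  The numbers
# mirror the shape of an {InitialCredit, MoreCreditAfter} setting but
# are illustrative only.
INITIAL_CREDIT = 200      # messages the sender may emit before blocking
MORE_CREDIT_AFTER = 50    # credit granted back per batch the receiver drains

class Sender:
    def __init__(self):
        self.credit = INITIAL_CREDIT
        self.blocked = False

    def try_send(self, queue):
        if self.credit == 0:
            self.blocked = True   # to clients this looks like a "hang"
            return False
        self.credit -= 1
        queue.append("msg")
        return True

    def grant(self, amount):
        self.credit += amount
        self.blocked = False

class Receiver:
    def __init__(self):
        self.processed_since_grant = 0

    def drain_one(self, queue, sender):
        if not queue:
            return
        queue.popleft()
        self.processed_since_grant += 1
        if self.processed_since_grant >= MORE_CREDIT_AFTER:
            sender.grant(MORE_CREDIT_AFTER)  # return credit upstream
            self.processed_since_grant = 0

queue = deque()
sender, receiver = Sender(), Receiver()

# Fast publisher, slow consumer: the sender exhausts its credit and blocks.
for _ in range(INITIAL_CREDIT + 10):
    sender.try_send(queue)
print("blocked:", sender.blocked, "queued:", len(queue))

# Once the receiver drains a batch, credit flows back and publishing resumes.
for _ in range(MORE_CREDIT_AFTER):
    receiver.drain_one(queue, sender)
print("blocked:", sender.blocked, "credit:", sender.credit)
```

This is why publishing stalls rather than fails outright: the broker withholds credit until consumers (or disk paging) catch up, then publishing resumes.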

The server information:
Linux NodeA 2.6.32-279.el6.x86_64 #1 SMP Fri Jun 22 12:19:21 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

The status of node A:
Status of node NodeA ...
[{pid,2204319},
{running_applications,[{rabbit,"RabbitMQ","3.5.2"},
{os_mon,"CPO CXC 138 46","2.2.14"},
{mnesia,"MNESIA CXC 138 12","4.11"},
{xmerl,"XML parser","1.3.5"},
{sasl,"SASL CXC 138 11","2.3.4"},
{stdlib,"ERTS CXC 138 10","1.19.4"},
{kernel,"ERTS CXC 138 10","2.16.4"}]},
{os,{unix,linux}},
{erlang_version,"Erlang R16B03 (erts-5.10.4) [source] [64-bit] [smp:24:24] [async-threads:30] [hipe] [kernel-poll:true]\n"},
{memory,[{total,100405648},
{connection_readers,28080},
{connection_writers,8080},
{connection_channels,56760},
{connection_other,61688},
{queue_procs,87288},
{queue_slave_procs,199984},
{plugins,0},
{other_proc,14621080},
{mnesia,153240},
{mgmt_db,0},
{msg_index,63912},
{other_ets,1003672},
{binary,2261112},
{code,16659268},
{atom,602729},
{other_system,64598755}]},
{alarms,[]},
{listeners,[{clustering,25672,"::"},{amqp,5672,"::"}]},
{vm_memory_high_watermark,0.4},
{vm_memory_limit,26998248243},
{disk_free_limit,50000000},
{disk_free,190861848576},
{file_descriptors,[{total_limit,635260},
{total_used,5},
{sockets_limit,571732},
{sockets_used,3}]},
{processes,[{limit,1048576},{used,191}]},
{run_queue,0},
{uptime,23387}]

@michaelklishin
Collaborator

Please ask questions on the mailing list. This is most likely inefficient disk paging that leads to flow control; see #227 and related issues.
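If disk paging is indeed triggering flow control, the point at which RabbitMQ starts paging messages to disk can be tuned relative to the memory high watermark. A sketch of a classic-format rabbitmq.config for the 3.5.x line, with illustrative values only (a higher paging ratio delays paging; whether that helps depends on the workload):

```erlang
[
 {rabbit, [
   %% alarm fires when the node uses 40% of system RAM (the default)
   {vm_memory_high_watermark, 0.4},
   %% start paging messages to disk at 75% of the watermark
   %% instead of the default 50%
   {vm_memory_high_watermark_paging_ratio, 0.75}
 ]}
].
```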

@michaelklishin michaelklishin added the mailing list material This belongs to the mailing list (rabbitmq-users on Google Groups) label Nov 18, 2015