Requests end up interleaved on Ruby 2.1 when Timeout module is used #501
Given this only manifests on Ruby 2.1, I'm actually leaning towards it being a bug in Ruby itself (specifically in the 2.1+ implementation of Timeout).
Oh, strange that it only affects 2.1; you're probably right, then. In its general form, this bug looks like something every client out there should have a regression test for: it's very easy to introduce or re-introduce.
It might very well be the …
One thing to note is that while a timeout is the most common trigger, killing a thread can cause this too, regardless of version (or, more accurately, on every version I've tested: 2.0, 2.1, and JRuby 1.7). Keeping track of whether you have a request outstanding when writing data on anything that's not thread-safe handles all of the corruption problems, though (as in mperham/connection_pool#67 and mperham/connection_pool#70).
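For illustration, here's a minimal sketch of that bookkeeping (the class and method names are hypothetical, not connection_pool's actual code): the connection refuses to be reused while a reply is still owed.

```ruby
# Sketch of "outstanding request" bookkeeping: fail loudly instead of
# silently handing a stale reply to the next caller.
class GuardedConnection
  class DirtyConnectionError < StandardError; end

  def initialize(sock)
    @sock = sock
    @pending = false # true once a request is written but its reply unread
  end

  def call(command)
    # Refuse to write on a connection that still owes us a reply.
    raise DirtyConnectionError if @pending

    @pending = true
    @sock.write(command)
    reply = @sock.gets # Thread#kill or Timeout.timeout can interrupt here
    @pending = false
    reply
  end
end
```

Failing loudly on a dirty connection turns silent data corruption into an immediate, debuggable error.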
This is because the timeout is very high (0.5s); setting it to 0.001s manifests the issue after one iteration.
Definitely manifests on 2.2 as well, but interestingly not on 2.0/rbx/JRuby.
I've run into this one as well, and I've seen similar bugs happen with the Ruby MogileFS client. I suspect what's happening is that you've got data in the socket's buffer from the previous request (the one that the timeout kills before it actually reads), and the subsequent read just gets whatever's in the buffer (i.e. the response to the previous request). On the next request this happens all over again: the response to the previous request is in the socket's buffer and is read before the new one has had a chance to arrive, and so on. The solution we came up with was to just reset the connection whenever we hit a timeout; that way we know for sure that whatever result comes up next is fresh. Unless the intent is to manage request timeouts in the lib, I'm not sure there's anything we can do to protect users from setting this off with a timeout in their own code.
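In plain-socket terms, that reset-on-timeout approach looks something like this sketch (the helper name and the 0.5s timeout are illustrative, not redis-rb's API):

```ruby
require "socket"
require "timeout"

# Issue one request and read its reply, resetting the connection if the
# read times out, so a stale reply can never be attributed to a later request.
def request_with_reset(sock, payload)
  sock.write(payload)
  Timeout.timeout(0.5) { sock.gets }
rescue Timeout::Error
  # The reply may still arrive and sit in the buffer; closing the socket
  # guarantees the next request starts from a clean slate.
  sock.close
  raise
end
```

Dropping the socket is wasteful, but cheap next to debugging interleaved responses.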
The hiredis backend already supports native timeouts. I mostly opened this for clarity around what was happening. It seems the answer is (as I suspected) that Timeout is inherently unsafe and shouldn't be used.
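For comparison, a sketch of leaning on client-level timeouts instead of Timeout.timeout, using redis-rb's documented `driver:` and `timeout:` options (the values shown are illustrative):

```ruby
require "redis"

# Let the client enforce the timeout itself rather than wrapping calls
# in Timeout.timeout.
redis = Redis.new(driver: :hiredis, timeout: 0.5)

begin
  redis.get("foo")
rescue Redis::TimeoutError
  # The client raised this itself, so it knows its own connection state
  # and can discard the socket safely, unlike an asynchronous Timeout.
end
```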
I implemented a possible fix in #502. Please take a look and provide feedback there. Thanks!
Fixed in 3.2.1.
If the thread is abruptly terminated (`Thread#kill`), or an exception is raised asynchronously (`Thread#raise` or `Timeout.timeout`), it's possible to leave the connection with pending reads. If the connection is then re-used, the next call can potentially read responses to the interrupted commands. Ref: redis/redis-rb#501
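A sketch of the kind of guard that commit message implies (the `write`, `read`, and `disconnect` helpers are assumptions for illustration, not necessarily what #502 actually does):

```ruby
# Count replies still owed to the socket, and drop the connection rather
# than reuse it half-read. Assumes @pending_reads starts at 0 in the
# constructor of the enclosing connection class.
def call(command)
  @pending_reads += 1
  write(command)
  reply = read # Thread#kill / Thread#raise / Timeout.timeout can fire here
  @pending_reads -= 1
  reply
ensure
  # If we were interrupted between write and read, the reply is still
  # buffered or in flight; reconnecting is the only safe option.
  disconnect if @pending_reads > 0
end
```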
The following snippet reproduces a condition whereby the response to one command is never consumed, and the next command ends up consuming the response from the previous command:
https://gist.github.com/tarcieri/303c5dcf803b0e5d021d
Here's another reproduction of the same issue, I believe:
https://gist.github.com/stevehodgkiss/10204388
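In outline, both reproductions boil down to a loop like this sketch (the key name, iteration count, and consistency check are illustrative, not the gists verbatim):

```ruby
require "redis"
require "timeout"

redis = Redis.new

100_000.times do |i|
  begin
    # If the timeout fires after SET is written but before its reply is
    # read, the "OK" stays queued on the socket...
    Timeout.timeout(0.5) { redis.set("foo", i.to_s) }
  rescue Timeout::Error
  end

  # ...and this GET can then consume that leftover "OK" (or, one step
  # later, the previous GET's value) instead of its own reply.
  value = redis.get("foo")
  raise "stale/interleaved response: #{value.inspect}" unless value == i.to_s
end
```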
The issue manifests for me after 1000+ iterations of the loop. It only manifests on Ruby 2.1, and not on previous versions.
It may very well be a bug in Ruby's Timeout implementation, but I thought I'd start here.