connect failures cause queries to black-hole #718
Comments
Is there any update on this issue?
Using the query queue isn't really recommended if you need consistent error handling. I recommend not issuing queries until after the connection is successful, and then using async flow control to send queries only after the previous ones succeed.
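To make the "async flow control" suggestion concrete, here is a minimal `series` helper in the callback style of the era. The helper itself is self-contained; the commented usage sketch assumes a hypothetical pg `client` and is not part of pg's API:

```javascript
// A minimal "series" helper: runs function(callback) tasks one at a time,
// stopping at the first error. A sketch of the flow control suggested
// above, not something shipped with pg.
function series(tasks, done) {
  var i = 0;
  function next(err, result) {
    if (err) return done(err);                 // short-circuit on failure
    if (i === tasks.length) return done(null, result);
    tasks[i++](next);                          // run the next task
  }
  next(null);
}

// Usage sketch (assumes a pg Client named `client`):
// client.connect(function (err) {
//   if (err) return console.error('connect failed', err);
//   series([
//     function (cb) { client.query('SELECT 1', cb); },
//     function (cb) { client.query('SELECT 2', cb); }
//   ], function (err) {
//     client.end();
//   });
// });
```

Because nothing is queued before the connection succeeds, a connect failure surfaces in exactly one place instead of black-holing queued callbacks.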
I'm not looking for sophisticated error handling. I'm just looking to maintain the invariant that if I kick off an asynchronous operation with a callback, then the callback will eventually be invoked. If the queue doesn't provide that, shouldn't it be an assertion failure to issue more than one query at once, at least by default? I totally understand if fixing this isn't a priority. But the behavior seems extremely non-idiomatic, and I'm not sure how someone could use the queueing interface in a correct program.
Yah, I totally agree with you. It is non-idiomatic. This lib actually predates most node idioms, interestingly, which is why there is no test framework other than executing files and calling. As for throwing when one query is issued before another completes: that's a pretty big backwards-incompatible change with not much upside, so I haven't done it. I feel your pain though, trust me; maintaining bad design decisions isn't always the most fun, and the occasional confusion they cause is painful to me. All that said, I'm open to a PR that makes queued queries properly fire their error callbacks - it never hurts to improve what we have. 😄
This seems very similar to a situation I'm experiencing. How does one opt out of the query queueing mechanism? I'm concerned that over time my processes will run out of memory from callbacks that are never satisfied.
@brianc What do you think about pulsing the query queue regularly and, if the underlying stream has ended, failing all the remaining queued queries with a "connection ended" error? It's unfortunate to be able to enqueue a query when the underlying stream has already been destroyed.
For those interested in a workaround: just periodically check whether the stream was destroyed. I might have been able to do this with events and an initial check upon query, but this is a good start toward preventing our applications from stalling.
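A sketch of that periodic sweep is below. The internal fields it reaches into (`client.connection.stream`, `client.queryQueue`) mirror pg internals of this era but are assumptions here, not a public API, and the error message is invented:

```javascript
// Fail every queued query once the client's underlying socket has been
// destroyed. Returns true if the sweep fired (so a caller can stop
// polling). The internal field names are assumptions, not a public API.
function failQueueIfDead(client) {
  var stream = client.connection && client.connection.stream;
  if (!stream || !stream.destroyed) return false;
  var pending;
  while ((pending = client.queryQueue.shift())) {
    if (pending.callback) {
      pending.callback(new Error('Connection was destroyed before query ran'));
    }
  }
  return true;
}

// e.g. poll every few seconds and stop once the queue has been drained:
// var timer = setInterval(function () {
//   if (failQueueIfDead(client)) clearInterval(timer);
// }, 5000);
```

An event-driven variant (sweeping on the stream's `'close'` event plus one check at query time) would avoid the polling interval, as the comment above suggests.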
This may be a dup of #632, but in my case I'm not using a connection pool at all. I've found that if I issue a connect(), then queue up a few queries, and the connect() fails, I never get the callbacks for my queries. Similarly, if I issue a connect(), queue up a few queries, and the connect() succeeds but I immediately call client.end(), then I get a callback for only one of the two queries. I understand why this happens, but it doesn't seem like the desired behavior: I'd expect that when connect() fails, any queued queries would fail with an error indicating that the client is not connected.
Here's a pretty simple test program that shows all three cases:
If I run it with a valid connection string, I get the expected results from the two queries:
If I run it with the wrong port number in the connection string, I get the connect() error message and no query callbacks:
If I run it with the valid connection string again but give the --close-on-connect flag, the program calls client.end() immediately upon connect(), and I get exactly one callback:
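The original test program isn't preserved here, and it needs a live Postgres to run. The failing case can still be condensed into a self-contained sketch: the toy client below queues queries issued before connect and, like pg of this era, simply never drains the queue when the connection fails. Everything in it is a stub to illustrate the mechanism, not pg's actual implementation:

```javascript
// Toy client reproducing the black-hole: queries issued before connect
// are queued, and a failed connect abandons the queue, so those
// callbacks never fire. A stub for illustration only.
function ToyClient() {
  this.queryQueue = [];
  this.connected = false;
}

ToyClient.prototype.connect = function (shouldFail, cb) {
  var self = this;
  setImmediate(function () {
    if (shouldFail) return cb(new Error('connect ECONNREFUSED'));
    self.connected = true;
    cb(null);
    // On success the queue is drained normally.
    self.queryQueue.forEach(function (q) { q.callback(null, 'row'); });
    self.queryQueue = [];
  });
};

ToyClient.prototype.query = function (text, callback) {
  if (this.connected) {
    return setImmediate(function () { callback(null, 'row'); });
  }
  // Queued until connect succeeds -- or forever, if it never does.
  this.queryQueue.push({ text: text, callback: callback });
};
```

The fix the issue asks for would be one extra step in the failure branch: before calling `cb(err)`, walk `queryQueue` and invoke each callback with a "client is not connected" error.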