Rethinking the bounded mpsc channel #800
And here's what we think a new futures-based channel might look like, more or less:

```rust
pub fn unbounded<T>() -> (Sender<T>, Receiver<T>);
pub fn bounded<T>(cap: usize) -> (Sender<T>, Receiver<T>);

struct Receiver<T> { ... }

unsafe impl<T: Send> Send for Receiver<T> {}

impl<T> Clone for Receiver<T> { ... }

impl<T> Receiver<T> {
    pub fn try_recv(&self) -> Result<T, TryRecvError>;
    pub fn recv(&self) -> RecvFuture<T, RecvError>;
    pub fn poll_ready(&mut self) -> Poll<(), RecvError>;
    pub fn is_empty(&self) -> bool;
    pub fn len(&self) -> usize;
    pub fn capacity(&self) -> Option<usize>;
    pub fn close(&self) -> bool;
    pub fn is_closed(&self) -> bool;
}

impl<T> Stream for Receiver<T> {
    type Item = T;
    type Error = ();
    // ...
}

struct Sender<T> { ... }

unsafe impl<T: Send> Send for Sender<T> {}

impl<T> Clone for Sender<T> { ... }

impl<T> Sender<T> {
    pub fn try_send(&self, msg: T) -> Result<(), TrySendError<T>>;
    pub fn send(&self, msg: T) -> SendFuture<(), SendError<T>>;
    pub fn poll_ready(&mut self) -> Poll<(), SendError<()>>;
    pub fn is_empty(&self) -> bool;
    pub fn len(&self) -> usize;
    pub fn capacity(&self) -> Option<usize>;
    pub fn close(&self) -> bool;
    pub fn is_closed(&self) -> bool;
}
```

cc @cramertj
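For comparison, the strictly bounded behavior this API sketch is after already exists in synchronous form in the standard library: `std::sync::mpsc::sync_channel` enforces its capacity with no per-sender slack. Here is a minimal std-only illustration of the backpressure contract (this is a stand-in, not the proposed async API, whose `send` would return a future rather than `Full`):

```rust
use std::sync::mpsc::{sync_channel, TrySendError};

fn main() {
    // Stand-in for the proposed `bounded(cap)`: std's synchronous bounded
    // channel, which has no per-clone 'slot', so the capacity is a hard bound.
    let (tx, rx) = sync_channel::<u32>(2);

    tx.try_send(1).unwrap();
    tx.try_send(2).unwrap();

    // The channel is full: a third try_send reports Full instead of quietly
    // buffering, which is exactly the backpressure signal callers need.
    assert!(matches!(tx.try_send(3), Err(TrySendError::Full(3))));

    // Receiving one item frees exactly one unit of capacity.
    assert_eq!(rx.recv().unwrap(), 1);
    tx.try_send(3).unwrap();

    println!("backpressure applied at capacity 2");
}
```

In the async design above, the same situation would instead park the sending task until capacity frees up, which is where the wakeup questions discussed below come in.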
@stjepang @danburkert Interesting! Can you explain more about why your preferred design is incompatible with the current `Sink` trait?
Sure, with a bit of background: for a futures-aware channel to function correctly, a blocked `Sender` task must be notified when capacity becomes available (there is a reciprocal arrangement necessary for the `Receiver` with an mpmc queue, but I'm going to focus on just the `Sender` for now).

There's a delicate balance here, though: you don't want to wake up every blocked sender when there is only capacity to send a single item. Otherwise you risk a 'thundering herd' issue where many senders compete for that single unit of capacity, which is really just wasted wakeups and effort.

The simplest solution to the thundering herd is to have the `Receiver` notify a single blocked task for every item it receives from the channel. This is problematic, though, because notified tasks are not guaranteed to actually send an item to the channel. The idea @carllerche had to work around this is to have [...]. In my opinion the [...]
I glossed over it, but if the notified task doesn't push onto the channel, it's termed a 'lost wakeup', which can lead to deadlock, since other parked tasks will never get notified.
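As a synchronous analogue of the "notify a single blocked task per received item" strategy, here is a minimal sketch (illustrative only, not the proposed implementation) built on `std::sync::Condvar`. The receiver calls `notify_one` rather than `notify_all`, avoiding the thundering herd. Note that a condvar waiter re-checks the capacity in a loop, which is precisely the safety net the async setting lacks when a notified task is simply dropped:

```rust
use std::collections::VecDeque;
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

struct Bounded<T> {
    q: Mutex<VecDeque<T>>,
    cap: usize,
    not_full: Condvar,
}

impl<T> Bounded<T> {
    fn new(cap: usize) -> Self {
        Bounded { q: Mutex::new(VecDeque::new()), cap, not_full: Condvar::new() }
    }

    fn send(&self, item: T) {
        let mut q = self.q.lock().unwrap();
        // Re-check capacity in a loop: a woken sender is not *guaranteed*
        // the free slot (another sender may have claimed it), so a wakeup
        // is only a hint. This re-check prevents over-filling the channel.
        while q.len() == self.cap {
            q = self.not_full.wait(q).unwrap();
        }
        q.push_back(item);
    }

    /// Non-blocking pop; returns None when the channel is empty.
    fn recv(&self) -> Option<T> {
        let mut q = self.q.lock().unwrap();
        let item = q.pop_front();
        if item.is_some() {
            // Wake exactly ONE blocked sender per item received, instead of
            // notify_all(): this avoids the thundering herd where every
            // blocked sender races for a single unit of capacity.
            self.not_full.notify_one();
        }
        item
    }
}

fn main() {
    let ch = Arc::new(Bounded::new(2));
    let tx = ch.clone();
    let producer = thread::spawn(move || {
        for i in 0..100 {
            tx.send(i); // blocks whenever the 2-slot buffer is full
        }
    });
    let mut received = Vec::new();
    while received.len() < 100 {
        if let Some(v) = ch.recv() {
            received.push(v);
        }
    }
    producer.join().unwrap();
    assert_eq!(received, (0..100).collect::<Vec<_>>());
    println!("received all 100 items");
}
```

In the futures world the woken "thread" is a task that may be cancelled before it ever re-checks, so the wakeup must be explicitly handed off instead, as described next.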
Couldn't you add this impl to the [...]?
That's an interesting idea; it might work.
(FWIW I've used the same trick for allowing multiplexed reads on sockets and channels in the past: keep a list of readers to awaken, keyed by the ID of the response they're waiting for. When the socket becomes readable, awaken one of them to perform the read, and then have it notify the one the message is actually for. If the notified read handle is dropped, it awakens a different reader.)
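One way to picture this hand-off trick is a guard type whose `Drop` forwards the wakeup whenever the woken party never acted on it. The sketch below is a toy: the names `WakeQueue` and `WakeToken` are invented for illustration, and a counter stands in for actually waking another task:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

// Shared wakeup state; `forwarded` counts wakeups passed on to other waiters
// (a real implementation would pop a Waker/Task from a list and wake it).
struct WakeQueue {
    forwarded: AtomicUsize,
}

// Guard handed to the single task that was woken.
struct WakeToken {
    queue: Arc<WakeQueue>,
    used: bool,
}

impl WakeToken {
    fn new(queue: Arc<WakeQueue>) -> Self {
        WakeToken { queue, used: false }
    }

    // The woken task actually sent an item (or performed its read):
    // consume the token so Drop does not re-notify anyone.
    fn consume(mut self) {
        self.used = true;
    }
}

impl Drop for WakeToken {
    fn drop(&mut self) {
        if !self.used {
            // The woken task was cancelled/dropped before acting: forward
            // the wakeup so another blocked waiter gets the capacity,
            // instead of the notification being lost (risking deadlock).
            self.queue.forwarded.fetch_add(1, Ordering::SeqCst);
        }
    }
}

fn main() {
    let q = Arc::new(WakeQueue { forwarded: AtomicUsize::new(0) });

    // Case 1: the woken sender does its work; no forwarding needed.
    WakeToken::new(q.clone()).consume();
    assert_eq!(q.forwarded.load(Ordering::SeqCst), 0);

    // Case 2: the woken sender is dropped without acting;
    // the wakeup is handed off rather than lost.
    drop(WakeToken::new(q.clone()));
    assert_eq!(q.forwarded.load(Ordering::SeqCst), 1);

    println!("ok");
}
```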
The strategy works with [...]
I guess that I should specify that the newly proposed `Sink` API could support this strategy.
Here's a proof-of-concept implementation which leverages some of the crossbeam-channel internals: https://github.com/danburkert/crossbeam-channel/blob/futures-channel/src/futures/mpsc.rs. Using the strategy @cramertj suggested, it's able to keep the [...]. If anyone wants to pick it up and run with it, please go ahead.
@danburkert, I have a small program that I can try this on; will definitely give you feedback.
Just read through the code, [...]
Right, the proof of concept is an mpsc channel. I think it could be extended to be an mpmc channel, but it would be a bit more complex. I've never had a need for an mpmc channel with futures, so I decided to skip that for the first cut.
I pushed a commit to https://github.com/danburkert/crossbeam-channel/tree/futures-channel which greatly expands the implementation comments, so that the internal channel mechanisms should be easier to understand. Hopefully this makes it easier for others to review and extend the implementation.
EDIT: Moved the comment here since it is not really on topic.

Since we're rethinking our channel design, I'd like to take a step back and [...]. Here goes a lengthy motivational intro, but bear with me. Or skip it. :)

Motivation: [...]
@stjepang is your comment meant to be in the [...]? I do have some thoughts on the specific points, though: [...]
@danburkert Yes - in hindsight, I thought the comment would be more relevant to [...]
Thanks @stjepang! Those are definitely interesting questions as applied to synchronous channels. Another difference between a sync and async [...]
Hi, perhaps this may eventually need to be an RFC, but I want to get a conversation started with a wider audience about a problematic aspect of the current bounded mpsc channel implementation. Quoting from internal implementation comments:

> So, every time a `Sender` is cloned, a new 'slot' is allocated internally, and the first item pushed to that `Sender` is guaranteed to succeed without blocking.

The most common and easiest way to push a value into the channel is via `Sink::send(..)`, which takes `self` by value.

Bounded mpsc channels are commonly used to send requests to capacity-limited services (tasks). It's critical in such a scenario that the mpsc channel be bounded so that it can apply back-pressure to callers of the service; otherwise 'buffer bloat' can occur, where requests are arbitrarily delayed while sitting in internal queues.

Unfortunately, the 'slot' semantics of the current bounded mpsc channel, along with the API of `Sink::send`, mean it's very difficult to actually get the bounded sender to apply back-pressure. In circumstances where the caller needs to potentially retry the call if the response is a failure, and thus keep the `Sender` around as part of the response future, it becomes impossible.

I've been looking into how the current channel implementation might be changed to remove the 'slot' mechanism. There's a promising strategy that @carllerche, @stjepang, and I have discussed that would require dropping the `Sink` impl from `Sender` and having `Sender::send` return a custom future type, but there's no working prototype yet. We've also been looking at how the internals of `crossbeam-channel` might be leveraged to help, which relates to crossbeam-rs/crossbeam-channel#22. Using `crossbeam-channel` internals may also open the door to making the channel mpmc.

I'm interested to see if others have run into the backpressure/slot issue, and whether there is support behind changing the default mpsc bounded channel that ships with `futures`, if a suitable solution can be found.
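To make the slot problem concrete, here is a deliberately simplified, single-threaded toy model of the semantics quoted above (the names and structure are invented for illustration; the actual futures-rs implementation differs in detail). Each `Sender` clone gets one guaranteed push past the nominal capacity, so with `cap = 1` and three clones, four items are buffered before any backpressure is felt:

```rust
use std::cell::RefCell;
use std::rc::Rc;

// Shared channel state: a buffer with a nominal capacity.
struct Shared {
    queue: Vec<u32>,
    cap: usize,
}

// Toy sender modeling the per-clone 'slot'.
struct SlotSender {
    shared: Rc<RefCell<Shared>>,
    slot_used: bool,
}

impl Clone for SlotSender {
    fn clone(&self) -> Self {
        // Every clone starts with a fresh guaranteed slot.
        SlotSender { shared: self.shared.clone(), slot_used: false }
    }
}

impl SlotSender {
    fn try_send(&mut self, msg: u32) -> Result<(), u32> {
        let mut s = self.shared.borrow_mut();
        if s.queue.len() < s.cap {
            s.queue.push(msg);
            Ok(())
        } else if !self.slot_used {
            // The per-clone slot lets this push exceed the nominal capacity:
            // this is exactly how the bound erodes as clones multiply.
            self.slot_used = true;
            s.queue.push(msg);
            Ok(())
        } else {
            Err(msg) // backpressure, finally
        }
    }
}

fn main() {
    let shared = Rc::new(RefCell::new(Shared { queue: Vec::new(), cap: 1 }));
    let mut a = SlotSender { shared: shared.clone(), slot_used: false };
    let mut b = a.clone();
    let mut c = a.clone();

    a.try_send(1).unwrap(); // fills the nominal capacity of 1
    b.try_send(2).unwrap(); // succeeds via b's slot
    c.try_send(3).unwrap(); // succeeds via c's slot
    a.try_send(4).unwrap(); // succeeds via a's slot
    assert!(a.try_send(5).is_err()); // only now is backpressure applied

    assert_eq!(shared.borrow().queue.len(), 4);
    println!("buffered {} items despite cap = 1", shared.borrow().queue.len());
}
```

With many short-lived clones (one per request, as `Sink::send` taking `self` by value encourages), the effective buffer grows without bound, which is the buffer-bloat failure mode described above.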