convert to use tokio 0.1 #1462
Conversation
Force-pushed from ed10c2a to 52796df
Woo! Nice! (And sorry about 0.12.x, I had been rebasing master onto it for a while, assuming most people weren't actually depending on its history 😭)
No worries, I was just surprised by how quickly it had moved since I started a couple of days ago :) One thing to note here is that I've tried to keep the API mostly the same (biggest exception being …)
This is fantastic! 🎉 So much goodness!
Looks like there's a hanging test, client::tests::retryable_request
...
    .bind(&addr, new_service)
    .unwrap();

println!("Listening on http://{} with 1 thread.", server.local_addr().unwrap());
All the lines printing about using 1 thread can probably be adjusted now :D
Heh, will do!
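For context, a hedged sketch of roughly where the example ends up: this is the hello-world shape of the released hyper 0.12 API (Server::bind, service_fn_ok, hyper::rt::run), not the exact code in this PR, which keeps the older API. The point is that the server now runs on tokio's multi-threaded runtime, so the "with 1 thread" wording in the println goes away.

    extern crate hyper;

    use hyper::rt::{self, Future};
    use hyper::service::service_fn_ok;
    use hyper::{Body, Response, Server};

    fn main() {
        let addr = ([127, 0, 0, 1], 3000).into();

        // One service instance per connection, each just returning "Hello, World!".
        let new_service = || service_fn_ok(|_req| Response::new(Body::from("Hello, World!")));

        let server = Server::bind(&addr)
            .serve(new_service)
            .map_err(|e| eprintln!("server error: {}", e));

        // No thread count to report: the default runtime uses a worker pool.
        println!("Listening on http://{}", addr);
        rt::run(server);
    }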
let mut core = tokio_core::reactor::Core::new().unwrap();
let handle = core.handle();
let client = Client::new(&handle);
tokio::run(lazy(move || {
I'm curious what the lazy usage is for. Is that because calling all this code outside of tokio::run will panic?
Yeah, most constructors including Client::default now need a thread-local handle. When we change the API I think we should probably defer most or all I/O (like port binding) until the future is polled, but for now that's the compromise I made.
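A minimal sketch of that pattern, assuming the hyper 0.12-era client API where Client::new() takes no handle argument: the client is built inside the closure handed to future::lazy, so construction happens on the tokio runtime where the thread-local executor handle exists; building it before tokio::run would panic.

    extern crate futures;
    extern crate hyper;
    extern crate tokio;

    use futures::future::lazy;
    use futures::Future;
    use hyper::Client;

    fn main() {
        tokio::run(lazy(|| {
            // Constructed inside the runtime, so the executor handle is available.
            let client = Client::new();

            client
                .get("http://httpbin.org/ip".parse().unwrap())
                .map(|res| println!("status: {}", res.status()))
                .map_err(|err| eprintln!("request error: {}", err))
        }));
    }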
The hanging test is a weird one; it works on my machine (OS X), so I can't reproduce it at the moment. I'll grab an AWS instance later to try and reproduce on Linux.
Don't bother! I can debug it myself.
Thanks, much appreciated! In other news, my PR to split off …
For the hanging test, it's a combination of mutexes in the mock duplex code, and using the … The test is specifically designed to be a series of events we can control, because it's testing those series of events. I suspect that switching to tokio's …
Hmm, so the issue with … The only way I can think of solving that problem is to make it optional for … Maybe there is another way to fix the tests so they could run on a multi-threaded executor?
Force-pushed from 52796df to 78f13ee
Okay, so I think I fixed the initial problem by using a thread pool of size 1 as an executor. There are some other failures now on Travis that I still can't reproduce locally :/ Any chance you could have another look?
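For reference, a minimal sketch of what a pool-size-1 executor can look like using the tokio-threadpool 0.1 Builder; how it gets wired into hyper's test harness is not shown here and is an assumption on my part.

    extern crate futures;
    extern crate tokio_threadpool;

    use futures::future::lazy;
    use futures::Future;
    use tokio_threadpool::Builder;

    fn main() {
        // A pool with exactly one worker thread: tasks still run off the main
        // thread, but only one task executes at a time.
        let pool = Builder::new().pool_size(1).build();

        pool.spawn(lazy(|| {
            println!("running on the one-thread pool");
            Ok(())
        }));

        // Wait for spawned work to finish before exiting.
        pool.shutdown_on_idle().wait().unwrap();
    }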
Force-pushed from 7e0c6a1 to cd0e494
I'll look into that test failure. Besides that, the benches don't compile XD.
So the failing keepalive client test is because the tasks are on separate threads, but with only 1 CPU, the order isn't preserved. Well, on one hand, it's good to find poorly written tests that break when run multithreaded!

The test itself is ensuring that after we get a response back, a second request to the same host should reuse the existing connection. However, knowing that the connection can be reused requires the Dispatch task to do some housekeeping after submitting the response back to the FutureResponse task. So, by the time the main thread has the response and starts a new request, the Dispatch task on the other thread hasn't had a chance to clean up its state and set itself ready for a new request, so it isn't back in the client's pool.

As a hack, we could just sleep the main thread for a few milliseconds, allowing the dispatch task to clean up... Perhaps in a separate issue, the design of that can be changed, since changing this test definitely means that users may occasionally see requests not reuse a connection they could have.
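A rough illustration of the sleep hack described above; apart from std's thread::sleep, everything here is a hypothetical placeholder, not hyper's actual test code.

    use std::thread;
    use std::time::Duration;

    fn main() {
        // (hypothetical) first request completes and the response reaches this thread:
        // let res1 = wait_for_response(&client, uri.clone());

        // Give the Dispatch task on the other worker thread a moment to finish its
        // housekeeping and return the idle connection to the client's pool.
        thread::sleep(Duration::from_millis(10));

        // (hypothetical) the second request should now reuse the pooled connection:
        // let res2 = wait_for_response(&client, uri);
        // assert!(connection_was_reused(&res1, &res2));
    }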
Thanks for providing more info on that test! I thought I fixed that particular test earlier by introducing exactly the sort of … The latest Travis run showed two other tests failing instead: one with a 'resource temporarily not available' error, and the other one (a dispatch test) timing out. I'll try to fix the benches later!
I've got the benches passing, and fixed up that test, in https://github.com/hyperium/hyper/tree/new-tokio. I've been pushing to that to trigger CI, and looking at some panics. Seems one was due to a race condition in the futures mpsc channel, so I have a workaround there. Waiting for CI again...
Yay! CI is green! I'll see about merging this ... probably tomorrow, it's getting late x_x
Force-pushed from cd0e494 to 6b74077
Co-authored-by: Sean McArthur <[email protected]>
Force-pushed from 6b74077 to 1d845f3
@seanmonstar Nice! I pulled your changes into this branch, just in case you had any other feedback you wanted to see addressed before merging. AppVeyor builds seem to be failing both on this as well as your …
The issue on Windows was that we needed to call … I merged those changes into your commit, and merged it into 0.12.x here: 603c3e4. Thank you sooo much!
All the tests pass! 🎉 (at least locally -- let's see what CI says)
Spent this morning rebasing due to 0.12.x moving so fast!
Fixes #1443