avoid static lifetime requirement for handler functions #310
Comments
We use `tokio::spawn` […]. For your specific example I would suggest reading this […] on why there is the requirement for the `'static` bound. The above linked example on the […] |
I know how to solve the problem and I am fully aware of what a static lifetime is and why the requirement exists.

re: […]

Given the code today, the runtime loop is essentially reduced to:

```rust
while let Some(event) = incoming.next().await {
    // ..
    let task = tokio::spawn(async move { handler.call(body, ctx) });
    let req = match task.await {
        Ok(response) => match response.await {
            _ => (),
        },
        Err(err) if err.is_panic() => {
            // ...
        }
        Err(_) => unreachable!("tokio::task should not be canceled"),
    };
    // reply to client
}
```

If we clean it up a bit and add an example "handler":

```rust
let handler = |body, ctx| {
    // let's call this stage 1
    println!("stage 1");
    async move {
        // let's call this stage 2
        println!("stage 2");
    }
};

while let Some(event) = incoming.next().await {
    // ..
    let task = tokio::spawn(async move { handler.call(body, ctx) });
    let req = match task.await {
        // `stage 1` was printed
        Ok(response) => match response.await {
            _ => { /* `stage 2` was printed */ },
        },
        _ => ()
    };
}
```

So the flow is: the `tokio::spawn` only runs "stage 1" (the synchronous part of the handler that constructs the future), while "stage 2" (the returned future, where the real work happens) is awaited back in the loop. So, to me, even if you do spawn, you are not spawning the expensive part, which is probably in "stage 2". I can't see how this processes multiple events concurrently. |
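To restate the flow as a sketch (names follow the pseudocode above; this is pseudocode-level, not the runtime's actual code):

```rust
// Only "stage 1" crosses the `tokio::spawn` boundary: the spawned task does
// nothing but construct the handler's future.
let task = tokio::spawn(async move { handler.call(body, ctx) }); // prints "stage 1"

// The JoinHandle therefore resolves almost immediately.
let response_future = task.await.expect("constructing the future panicked");

// "stage 2" -- the actual work -- is awaited right here, on the runtime's own
// task, inside the loop, so events are still processed strictly one at a time.
let response = response_future.await; // prints "stage 2"
```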
I have never seen more than one invocation per lambda instance at a time from AWS. It would either start a new instance or throttle the request until a lambda instance becomes available. So even if we can process multiple requests, it never happens. |
If the changes proposed by @rasviitanen don't go ahead, I suggest we add an example of using Arc with a shared resource, as per #310 (comment). A sketch of what that could look like follows. |
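A minimal sketch of what such an example could look like under the current `'static` requirement, assuming the 0.3-style `handler_fn`/`run` API; the `Database` type and its `query` method are made up for illustration:

```rust
use lambda_runtime::{handler_fn, Context, Error};
use serde_json::Value;
use std::sync::Arc;

// Illustrative shared resource; stands in for a database client, HTTP client, etc.
struct Database;

impl Database {
    fn new() -> Self {
        Database
    }

    async fn query(&self, _event: &Value) -> Result<Value, Error> {
        Ok(Value::Null)
    }
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Create the expensive resource once, before building the handler.
    let db = Arc::new(Database::new());

    // Cloning the Arc is just a reference-count bump; it gives each handler
    // future an owned handle, which is what lets it satisfy the `'static` bound.
    let handler = handler_fn(move |event: Value, _ctx: Context| {
        let db = Arc::clone(&db);
        async move { db.query(&event).await }
    });

    lambda_runtime::run(handler).await
}
```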
@rasviitanen Sorry, I think I misread your initial request and may have come off a bit condescending. That was not my intent. Looking at the code section more carefully, I think you're correct that we're not running these events concurrently now, and that our use of `tokio::spawn` […]

Considering that `next` is just the runtime calling that endpoint, I think there is a chance to handle multiple requests concurrently, but it would mean that multiple invocations would have to be queued at once. If the answer is that we don't care about executing the invocations concurrently, then I think @rasviitanen is correct that we could remove the `spawn`.

Based on this snippet from the runtime doc […] I think we do want to be executing concurrently here, so this loop may need to be reworked (see the sketch after this comment). So overall I think @rasviitanen is correct that we don't need the `spawn`. |
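For reference, a rough sketch (my own, not from the thread) of what reworking the loop for concurrency could mean: spawn the whole invocation per event, not just the construction of the future. `incoming`, `handler`, `body`, and `ctx` follow the earlier pseudocode, and `post_response` is a hypothetical helper for replying to the Lambda API:

```rust
while let Some(event) = incoming.next().await {
    // ..
    let handler = handler.clone(); // assumes the handler is cheap to clone
    tokio::spawn(async move {
        // Both "stage 1" and "stage 2" now run on the spawned task, so the
        // loop can go straight back to polling `incoming` for the next event.
        let response = handler.call(body, ctx).await;
        // Reply to the Lambda API from the spawned task as well.
        post_response(response).await;
    });
}
```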
@bahildebrand no problem! I think you are right that the correct solution here isn't to remove the spawn, but rather to fix the loop, given that the interpretation is correct and they don't mean 'after the whole invocation'. I'm not 100% sure about it; I am leaning towards not running the loop concurrently. Thanks for looking into this btw! |
To continue the tradition of bouncing back and forth: @rimutaka brought up a good point in a separate thread. Handling these invocations in parallel would break the model for resource management in Lambda. I think you were correct in that we can probably remove the `spawn`.

After playing around with refactoring this loop a bit I think you're on the right track. I should have some more time later this week to dig into this further, and I can get back to you then. |
@bahildebrand Cool, thank you! I have been thinking about the behaviour when creating the task panics, i.e. […] |
I think the main benefit of the `spawn` is isolating a panic in the handler from the runtime. The error messages logged by the runtime are sent to stdout and end up in CloudWatch. They would need to be sent back to the AWS API endpoint as a response for the caller of the lambda to see them. |
In response to @davidbarsky's comment (big change, small gains) in PR #309, I started looking into the possible performance gains of zero-copy deserialization. David Tolnay, the creator of Serde, kindly shared some code we could use for benchmarking. See his full reply on Reddit.

David's sample benchmark: […]
Lambda's max payload size is 6MB, which can be all in a single base64 blob or spread across thousands of JSON properties, so any gains will be different for different payloads. @rasviitanen Rasmus, do you want to try it with your type of payload and share the numbers?

David's code:

```rust
// [dependencies]
// bincode = "1.0"
// serde = { version = "1.0", features = ["derive"] }

#![feature(test)]

extern crate test;

use serde::Deserialize;

#[derive(Deserialize)]
pub struct Copied {
    pub id: Vec<u8>,
}

#[derive(Deserialize)]
pub struct ZeroCopy<'a> {
    pub id: &'a [u8],
}

fn input() -> Vec<u8> {
    let mut input = Vec::with_capacity(1024);
    input.extend_from_slice(&[248, 3]);
    input.extend_from_slice(&[0; 1022]);
    input
}

#[bench]
fn copied(b: &mut test::Bencher) {
    let input = input();
    b.iter(|| bincode::deserialize::<Copied>(&input).unwrap());
}

#[bench]
fn zerocopy(b: &mut test::Bencher) {
    let input = input();
    b.iter(|| bincode::deserialize::<ZeroCopy>(&input).unwrap());
}
```
|
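Since Lambda payloads are JSON rather than bincode, a closer-to-home variation (my sketch, not David's) is to borrow a string field straight from the input buffer with `serde_json`; the `Payload` structs and the sample JSON are made up for illustration:

```rust
// [dependencies]
// serde = { version = "1.0", features = ["derive"] }
// serde_json = "1.0"
use serde::Deserialize;

#[derive(Deserialize)]
struct CopiedPayload {
    // Owned: the string contents are copied out of the input buffer.
    id: String,
}

#[derive(Deserialize)]
struct ZeroCopyPayload<'a> {
    // Borrowed: points into the input buffer, so no copy, but the struct
    // cannot outlive the buffer it was parsed from.
    id: &'a str,
}

fn main() -> Result<(), serde_json::Error> {
    let input = br#"{"id":"SGVsbG8sIExhbWJkYSE"}"#;

    let copied: CopiedPayload = serde_json::from_slice(input)?;
    let zero_copy: ZeroCopyPayload = serde_json::from_slice(input)?;

    assert_eq!(copied.id, zero_copy.id);
    Ok(())
}
```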
@rasviitanen sorry for not responding in a while, this fell off my radar. Were you able to run @rimutaka's code and get any results off of your branch? Additionally, looking at it a bit more, I think @rimutaka is correct in that the spawn call is here for isolating the handler from the runtime in case of a panic. I think that would need to be taken into account if we wanted to remove the `spawn`. |
@bahildebrand Sorry, I forgot about this too. I'll see if I can find some time to test it; we have some different payloads I can test.

re: […]

This is what I have been suggesting from the start. A possible solution is to use `catch_unwind` (see the sketch below). |
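A minimal sketch of that idea, assuming the `futures` crate's `FutureExt::catch_unwind` combined with `std::panic::AssertUnwindSafe`; `handler`, `body`, and `ctx` follow the earlier pseudocode, and this only illustrates the approach rather than the runtime's actual code:

```rust
use futures::FutureExt;
use std::panic::AssertUnwindSafe;

// Inside the runtime loop, instead of `tokio::spawn`:
let result = AssertUnwindSafe(async move { handler.call(body, ctx).await })
    .catch_unwind()
    .await;

match result {
    // The handler ("stage 1" and "stage 2") completed without panicking.
    Ok(response) => { /* serialize `response` and reply to the Lambda API */ }
    // The handler panicked; instead of unwinding into the runtime, the panic
    // payload is captured here and can be reported as an error response.
    Err(panic) => { /* turn `panic` into an error response for the Lambda API */ }
}
```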
@bahildebrand It looks like the issue for the serde stuff was closed, so I guess I can skip the benchmarks? |
Yeah, I think you're fine. Feel free to open a PR with these changes so we can look at it more in depth. |
@bahildebrand sure! Here is the PR: #319 |
Thanks for the info @rimutaka. Question about the second point: what do you have in mind when you say "more elegant way"? Use of a mutex is not that terrible 😀 If performance is a consideration, IMHO a mutex will do just fine. As the function is single-threaded, there should be no mutex contention, so lock/unlock should be cheap since no syscall will be made (see the sketch below). |
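A small sketch of the pattern under discussion, assuming state is shared across invocations behind `Arc<Mutex<_>>` and the 0.3-style `handler_fn`/`run` API; the counter is illustrative:

```rust
use lambda_runtime::{handler_fn, Context, Error};
use serde_json::{json, Value};
use std::sync::{Arc, Mutex};

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Created once per Lambda instance and shared across its invocations.
    let counter = Arc::new(Mutex::new(0u64));

    let handler = handler_fn(move |_event: Value, _ctx: Context| {
        let counter = Arc::clone(&counter);
        async move {
            // Invocations are handled one at a time per instance, so this lock
            // is uncontended: acquiring it is an atomic operation on the fast
            // path, with no syscall.
            let mut n = counter.lock().unwrap();
            *n += 1;
            Ok::<Value, Error>(json!({ "invocation": *n }))
        }
    });

    lambda_runtime::run(handler).await
}
```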
Hello there, first of all, thanks for allowing resources to be shared between lambda invocations. That's awesome! Is there any plan to remove that restriction in the […] |
PR merged, hope that helps :) |
We are just starting out with Rust and had some confusion on this issue using […] |
I would like to remove the requirement of having `'static` bounds on the handler generics if possible. This would make it easier to pass in shared resources that have been set up/created before creating the handler and avoid a bunch of cloning.

It seems like a `tokio::spawn` is used to produce a future here: aws-lambda-rust-runtime/lambda-runtime/src/lib.rs, lines 160 to 162 in d3ff435.
By taking a guess, I would assume that the reason is to catch a panic when generating the future (the `JoinHandle` is immediately awaited and nothing else seems to be going on). I might very well be wrong here, so if the reason for using a `tokio::spawn` has to do with something else, please correct me.
Since `tokio::spawn` requires both the future and the output to be `'static`, all the handler generics need to be bounded by `'static`. This makes it quite cumbersome to use shared resources. Consider:
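An illustrative sketch of the kind of setup meant here, assuming the 0.3-style `handler_fn`/`run` API; the `Client` type and its `send` method are made up, and the point of the sketch is the pattern that the `'static` bounds reject:

```rust
use lambda_runtime::{handler_fn, Context, Error};
use serde_json::Value;

// Made-up stand-in for a database or HTTP client created during setup.
struct Client;

impl Client {
    fn new() -> Self {
        Client
    }

    async fn send(&self, _event: &Value) -> Result<Value, Error> {
        Ok(Value::Null)
    }
}

#[tokio::main]
async fn main() -> Result<(), Error> {
    // Initial setup before creating the handler.
    let client = Client::new();
    let client_ref = &client;

    // The returned future borrows `client` through `client_ref`,
    // so it is not `'static`.
    let handler = handler_fn(move |event: Value, _ctx: Context| async move {
        client_ref.send(&event).await
    });

    // Does not compile: the handler generics are bounded by `'static`,
    // but the handler borrows the local `client`.
    lambda_runtime::run(handler).await
}
```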
This would not compile, as `client_ref` is not `'static`. To me it would make sense if you were able to do some initial setup (e.g. database clients/initializing env-configs etc.) before calling `run`.
I was thinking it would be possible to remove the static requirements by removing `tokio::spawn`? Perhaps it can be replaced with a `catch_unwind`?

I have been experimenting a bit here (I added an example as well): rasviitanen@9f9b674
Would this work or would it result in other problems I have not considered?
PS great work on the 0.3 release! :)