Fetch the entire payload within the pipeline #2073
Comments
One question that arises is whether we should advertise a way for customers to opt into this same behavior of not buffering the entire payload, e.g., a field on `Context`. That said, I should clarify that we shouldn't try to deserialize in the pipeline. Deserialization is meant to be late, but it does not need to be async. This lets customers still grab the buffered response and do whatever they want with it - save it, deserialize their own types, whatever - without the pipeline having deserialized anything for them. I'll update the OP.
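As a minimal self-contained sketch of that "late but not async" distinction - the `Secret` model and JSON payload below are made up for illustration, not the real Key Vault types - deserialization becomes an ordinary synchronous call once the pipeline has already buffered the bytes:

```rust
use serde::Deserialize;

#[derive(Deserialize)]
struct Secret {
    value: String,
}

fn main() -> Result<(), serde_json::Error> {
    // Pretend the pipeline already buffered this payload during the
    // client method's `.await`; no further I/O is pending.
    let buffered: Vec<u8> = br#"{"value":"s3cr3t"}"#.to_vec();

    // "Late" deserialization: it happens well after the request finished,
    // but it's a plain synchronous call over bytes already in memory.
    let secret: Secret = serde_json::from_slice(&buffered)?;
    assert_eq!(secret.value, "s3cr3t");
    Ok(())
}
```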
We discussed this today and decided on a pattern that would allow most client methods to work like so:

```rust
let secret = client.get_secret("name", "version", None).await?.into_body()?;
```

The entire response is buffered into memory by default. Deserialization happens on that buffer and is not, therefore, async. This also should provide an opportunity to attach the raw response to the `ErrorKind::HttpResponse`, on which we could provide deserialization helpers but would not deserialize by default.

To support downloading large payloads - or for any case where a customer might otherwise want to stream the response - all `Response<T>` would support something like:

```rust
let response = client.download_blob("blob", None).await?; // get at least the headers
let mut stream = response.into_stream();
while let Some(buf) = stream.next().await? {
    // e.g., write buf to file
}
```

While we'll still have helpers to deserialize into custom model types attached to `Response<T>` (see #1831 (comment)), this would still allow customers to do something like this if, say, a blob were a structured model, or for any model response:

```rust
let content: Vec<u8> = stream.try_collect().await?;
let m: Model = serde_json::from_slice(&content)?;
```

This does mean that `into_body()` et al. are implemented only for something like `Response<T> where T: Deserialize`, so pure streaming methods need to return a type that would never implement `Deserialize` but can stream, like our own `ResponseBody` or something. Or maybe we return a `ResponseBody` in lieu of `Response<T>`.

Note: if the HTTP status code is not an acceptable success code (see #1733), we should always buffer the entire error response in the first `await` call so it's available on `ErrorKind::HttpResponse` (see #2495).
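As a rough sketch of how that bound could look - these are illustrative stand-ins, not the actual `azure_core` definitions - `into_body()` only exists where `T` is deserializable, so a `Response<ResponseBody>` (or a bare `ResponseBody`) never gains one:

```rust
use serde::de::DeserializeOwned;
use std::marker::PhantomData;

/// Stand-in for a buffered (or streamable) body; deliberately does NOT
/// implement `Deserialize`, so streaming-only returns never get `into_body`.
pub struct ResponseBody(Vec<u8>);

pub struct Response<T> {
    body: ResponseBody,
    _model: PhantomData<T>,
}

impl<T: DeserializeOwned> Response<T> {
    /// Compiles only for deserializable models; a pure streaming method
    /// returning `Response<ResponseBody>` has no such method to call.
    pub fn into_body(self) -> serde_json::Result<T> {
        serde_json::from_slice(&self.body.0)
    }
}
```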
To clarify, the `into_stream` method does not change anything in the pipeline itself (because, in fact, it's called too late for it to do so), so if you called it on `get_secret`'s response, you'd end up with a stream that yields all of its bytes synchronously. Separately from adding that `into_stream` method, we'll be changing the pipeline to eagerly read the entire body unless a special flag is provided in the `Context`.

Aside from that clarification (which we did cover in the meeting, just wanted to get it here in writing), this sounds good to me!
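Rough pseudocode for that pipeline change, to make the default concrete - every name here is hypothetical, and the real flag (whatever it ends up being called) would presumably live in the `Context`'s typed value bag rather than be a plain field:

```rust
// Illustrative stand-ins only; not azure_core's actual types or API.
struct Context {
    skip_eager_buffering: bool, // hypothetical opt-out flag
}

struct RawResponse {
    chunks: Vec<Vec<u8>>, // stand-in for the transport's byte stream
}

enum ResponseBody {
    Buffered(Vec<u8>),       // default: whole payload already in memory
    Streaming(Vec<Vec<u8>>), // opt-in: chunks still to be pulled by the caller
}

async fn finish_request(ctx: &Context, raw: RawResponse) -> ResponseBody {
    if ctx.skip_eager_buffering {
        // Caller opted into streaming: hand the body back unread;
        // `into_stream()` pulls it chunk by chunk later.
        ResponseBody::Streaming(raw.chunks)
    } else {
        // Default: the pipeline's own await drains the transport, so
        // `into_body()`/`into_stream()` never perform more I/O afterward.
        ResponseBody::Buffered(raw.chunks.into_iter().flatten().collect())
    }
}
```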
Thank you, @analogrelay! Good catch on that. Yes, and we already have at least partial support with our […]