utilities.IOReaderFactory is inefficient #3714
Comments
Hi, thanks for your issue report. I'm sorry to hear that you had a bad experience. I think there is something we can do here. We introduced the IOReaderFactory call in grpc-gateway/protoc-gen-grpc-gateway/internal/gengateway/template.go (lines 336 to 354 in 6dff994). It is only needed when $AllowPatchFeature is true, and in other cases we can use req.Body directly. Would you be willing to make such a contribution?
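For context, a minimal sketch of the two decoding shapes being discussed, assuming the grpc-gateway v2 runtime and utilities packages. `decodeBuffered` mirrors the shape the template currently emits (buffer via `utilities.IOReaderFactory`, then decode), while `decodeStreaming` is the suggested alternative when $AllowPatchFeature is false. The function names and the generic `protoReq` parameter are illustrative, not the actual generated code.

```go
package decodeexample

import (
	"io"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// decodeBuffered mirrors the shape the generated handlers use today:
// IOReaderFactory reads the whole body into memory before decoding.
func decodeBuffered(marshaler runtime.Marshaler, req *http.Request, protoReq any) error {
	newReader, berr := utilities.IOReaderFactory(req.Body)
	if berr != nil {
		return status.Errorf(codes.InvalidArgument, "%v", berr)
	}
	if err := marshaler.NewDecoder(newReader()).Decode(protoReq); err != nil && err != io.EOF {
		return status.Errorf(codes.InvalidArgument, "%v", err)
	}
	return nil
}

// decodeStreaming is the suggested shape when $AllowPatchFeature is false:
// decode straight from req.Body, so the body is never buffered in full.
func decodeStreaming(marshaler runtime.Marshaler, req *http.Request, protoReq any) error {
	if err := marshaler.NewDecoder(req.Body).Decode(protoReq); err != nil && err != io.EOF {
		return status.Errorf(codes.InvalidArgument, "%v", err)
	}
	return nil
}
```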
Changes request RPCs to use req.Body instead of reading into an in-memory byte slice via IOReaderFactory. The IOReaderFactory logic is still available for the PATCH field_mask feature. Fixes grpc-ecosystem#3714
I went ahead and made the change: #3727
* fix: use req.Body instead of IOReaderFactory when possible. Changes request RPCs to use req.Body instead of reading into an in-memory byte slice via IOReaderFactory. The IOReaderFactory logic is still available for the PATCH field_mask feature. Fixes #3714
* chore: regenerate example files

Co-authored-by: Eddy Leung <[email protected]>
🐛 Bug Report
utilities.IOReaderFactory relies on io.ReadAll, which is documented to be inefficient for large inputs. For our use case, a 200MB JSON body resulted in more than 5GB of transient memory allocations.
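For reference, the factory behaves roughly like the sketch below (a paraphrase, not a verbatim copy of the utilities package): the entire body is read into a byte slice up front, and every reader it hands out afterwards re-reads that slice.

```go
package utilities

import (
	"bytes"
	"io"
)

// IOReaderFactory, roughly as implemented: it buffers the entire input with
// io.ReadAll and returns a factory of readers over that buffer. io.ReadAll
// grows its backing slice repeatedly because the final size is unknown, which
// is where much of the transient allocation for a large body comes from.
func IOReaderFactory(r io.Reader) (func() io.Reader, error) {
	b, err := io.ReadAll(r) // the whole request body ends up in memory here
	if err != nil {
		return nil, err
	}
	return func() io.Reader {
		return bytes.NewReader(b)
	}, nil
}
```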
It looks like in most cases req.Body could be passed directly into the marshaler. The exception is when field_masks are used, because in that case the body is parsed twice.
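A minimal sketch of why the PATCH field_mask path needs a re-readable body: the same bytes are decoded once into the request message and read a second time to derive the field mask. The function name and wiring are illustrative; the real generated code differs in detail.

```go
package patchexample

import (
	"io"
	"net/http"

	"github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
	"github.com/grpc-ecosystem/grpc-gateway/v2/utilities"
	"google.golang.org/protobuf/proto"
	"google.golang.org/protobuf/types/known/fieldmaskpb"
)

// decodePatch shows the double read: Decode consumes one reader, and
// FieldMaskFromRequestBody consumes a second reader over the same bytes,
// which is why the body is buffered via IOReaderFactory in the PATCH case.
func decodePatch(marshaler runtime.Marshaler, req *http.Request, body proto.Message) (*fieldmaskpb.FieldMask, error) {
	newReader, err := utilities.IOReaderFactory(req.Body)
	if err != nil {
		return nil, err
	}
	if err := marshaler.NewDecoder(newReader()).Decode(body); err != nil && err != io.EOF {
		return nil, err
	}
	// Second pass over the same body to compute which fields were present.
	return runtime.FieldMaskFromRequestBody(newReader(), body)
}
```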
To Reproduce
Create a proto definition with repeated fields, send a relatively large input payload, and monitor memory utilization.
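One hypothetical way to drive this: build a roughly 200MB JSON payload with a repeated string field and post it to a gateway endpoint (the URL and field name below are placeholders for your own service), then watch the gateway process's heap, e.g. via net/http/pprof or container memory metrics, while the request is handled.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"strings"
)

func main() {
	// Build roughly 200MB of JSON: {"items": ["xxx...", ...]} for a repeated string field.
	// The field name "items" and the endpoint below are hypothetical.
	item := strings.Repeat("x", 1024)
	items := make([]string, 200*1024) // ~200k items * 1KiB per item ≈ 200MB
	for i := range items {
		items[i] = item
	}
	body, err := json.Marshal(map[string][]string{"items": items})
	if err != nil {
		panic(err)
	}

	// Send it through the gateway and observe the gateway's memory while this runs.
	resp, err := http.Post("http://localhost:8080/v1/example/echo", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```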
Expected behavior
Memory utilization does not grow to several times the input size.
Actual behavior
Our 200MB input turned into more than 5GB of memory allocations that the GC had to reclaim. This transient explosion in memory usage causes our service to run out of memory.