Add discussion on client re-renders, batched responses #692
Conversation
Co-Authored-By: Benjie Gillam <[email protected]>
Thanks for this comprehensive discussion! I added some comments regarding each option as inline comments.
### Client re-renders

With incremental delivery, where multiple responses are delivered in one request, client code could re-render its UI multiple times in a short period of time. This could degrade performance of the application, negating the performance gains from using `@defer` or `@stream`. There are a few approaches that could be taken to mitigate this. Each of these approaches is orthogonal to the others, i.e. the working group could decide that more than one of them should be included in the spec or labeled as best practices.
Maybe worth mentioning that, in addition to the client code, the server also throttles payload delivery to conform to the response format, which can likewise degrade performance.
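To make the problem concrete, here is a minimal TypeScript sketch of a naive consumer that re-renders once per incremental payload. The payload shape and the `render` callback are illustrative assumptions, not any particular client's API:

```ts
// Illustrative payload shape; not taken from the spec text.
interface IncrementalPayload {
  data?: unknown;
  path?: Array<string | number>;
  hasNext: boolean;
}

async function consumeNaively(
  payloads: AsyncIterable<IncrementalPayload>,
  render: (received: IncrementalPayload[]) => void,
): Promise<void> {
  const received: IncrementalPayload[] = [];
  for await (const payload of payloads) {
    received.push(payload);
    // Re-renders on every payload; many small `@stream` patches arriving
    // close together mean many re-renders in quick succession.
    render(received);
  }
}
```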
rfcs/DeferStream.md
Outdated
It could then process each of these before triggering a re-render.

2. __Client side debounce.__ GraphQL clients can debounce the processing of responses before triggering a re-render. For a query that contains `@defer` or `@stream`, the client will wait a predetermined amount of time starting from when a response is received. If any additional responses are received in that time, it can process the results in one batch. This has the downside of adding latency; if no additional responses are received in the timeout period, the processing of the initial response is delayed by the length of the debounce timeout.
Might be worth calling out the actual complexity of finding the predetermined amount of time for debouncing. In practice, we could use logging to analyze historical performance data to come up with a "magic number". However, as the query continues to evolve (add/remove fields, add/remove @defer and @stream) and the server implementation changes over time, that number may no longer be optimal.
Developers either need to revisit this number from time to time, or write a process/job that runs periodically and automatically finds a new one. Both options carry pretty significant complexity.
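For illustration, here is a minimal TypeScript sketch of client-side debouncing under the assumptions above; `debounceMs` is the tunable "magic number" being discussed, and the payload shape and `render` callback are illustrative, not from the spec:

```ts
// Illustrative payload shape; not taken from the spec text.
interface IncrementalPayload {
  data?: unknown;
  path?: Array<string | number>;
  hasNext: boolean;
}

async function consumeDebounced(
  payloads: AsyncIterable<IncrementalPayload>,
  render: (batch: IncrementalPayload[]) => void,
  debounceMs: number,
): Promise<void> {
  let buffer: IncrementalPayload[] = [];
  let timer: ReturnType<typeof setTimeout> | undefined;

  const flush = () => {
    if (buffer.length > 0) {
      render(buffer); // one re-render per batch, not per payload
      buffer = [];
    }
    timer = undefined;
  };

  for await (const payload of payloads) {
    buffer.push(payload);
    // Restart the quiet-period timer on every payload; responses that
    // arrive within `debounceMs` of each other are processed together.
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(flush, debounceMs);
  }

  // Stream ended: flush anything still buffered without further delay.
  if (timer !== undefined) clearTimeout(timer);
  flush();
}
```

Note that with this design the very first payload is also held for `debounceMs`, which is exactly the added latency the RFC text calls out.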
rfcs/DeferStream.md
Outdated
3. __Server sends batched responses.__ This approach changes the spec to allow GraphQL to return either the current GraphQL Response map, or a list of GraphQL Response maps. This gives the server the flexibility to determine when it is beneficial to group incremental responses together. If several responses are ready at the same time, the server can deliver them together. The server may also have knowledge of how long resolvers will take to resolve and could choose to debounce.
My intuition is that this would be the direction we want to pursue, precisely for the flexibility you mention here. When experimenting at Facebook, we tried client-side debouncing first and, due to the complexity of figuring out a reasonable interval, we ended up with a naive debounce that always renders twice: once for the initial payload and once after all incremental payloads have arrived.
That turned out not to be granular enough, so we implemented server-side batching. One complication is that the GraphQL executor doesn't actually have a good sense of how expensive a resolver is, so it's hard for the executor to make a batching decision on the fly while executing a query. It's more practical to outsource the decision of how to batch to the underlying @stream and @defer resolvers. In the end, we only implemented batching for @stream, since global batching at the query level means @deferred and @streamed resolvers need to communicate with each other in some sense.
I wonder if we can offer practical suggestions for servers implementing server-side batching. If not, should we instead propose something like @stream-level batching?
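As a rough illustration of the proposed format change, here is a minimal TypeScript sketch in which each chunk written to the transport is either a single response map or a list of maps; `ResponseMap` and `writeToTransport` are illustrative names, not spec terms:

```ts
// Illustrative response shape; not taken from the spec text.
interface ResponseMap {
  data?: unknown;
  path?: Array<string | number>;
  hasNext?: boolean;
}

function flushReady(
  ready: ResponseMap[],
  writeToTransport: (chunk: ResponseMap | ResponseMap[]) => void,
): void {
  if (ready.length === 0) return;
  if (ready.length === 1) {
    // Current spec shape: a single response map.
    writeToTransport(ready[0]);
  } else {
    // Proposed shape: several simultaneously-ready incremental
    // responses delivered together as one list.
    writeToTransport(ready);
  }
}
```

A server might call something like `flushReady` whenever several incremental results happen to be ready at once, which is one way simultaneously-ready responses could end up grouped.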
@Keweiqu thanks for the comments, I added some more information about debouncing as you suggested.
@robrichard Can you please rebase so I can merge it?
Adding some potential approaches to solving the issue of clients re-rendering in quick succession due to `@defer` or `@stream` payloads. This was discussed previously in this comment: #679 (comment).