Add discussion on client re-renders, batched responses #692


Merged: 5 commits into graphql:main on Apr 6, 2021

Conversation

@robrichard (Contributor) commented Feb 25, 2020

Adding some potential approaches to solving the issue of clients re-rendering in quick succession due to @defer or @stream payloads. This was discussed previously in this comment: #679 (comment).

Co-Authored-By: Benjie Gillam <[email protected]>
@Keweiqu (Contributor) left a comment

Thanks for this comprehensive discussion! I added some comments regarding each option as inline comments.

### Client re-renders

With incremental delivery, where multiple responses are delivered in one request, client code could re-render its UI many times in a short period. This could degrade the application's performance, negating the gains from using `@defer` or `@stream`. There are a few approaches that could mitigate this. Each of these approaches is orthogonal to the others, i.e. the working group could decide that more than one of them should be included in the spec or labeled as best practices.

Maybe worth mentioning that, in addition to the client code, the server code that throttles payload delivery to conform to the response format can also degrade performance.

It could then process each of these before triggering a re-render.

2. __Client side debounce.__ GraphQL clients can debounce the processing of responses before triggering a re-render. For a query that contains `@defer` or `@stream`, the client will wait a predetermined amount of time starting from when a response is received. If any additional responses are received in that time, it can process the results in one batch. This has the downside of adding latency; if no additional responses are received in the timeout period, the processing of the initial response is delayed by the length of the debounce timeout.
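A client-side debounce along these lines could be sketched as follows. This is an illustrative sketch only: the payload shape, the `render` callback, and the 50 ms default window are assumptions, not part of any particular GraphQL client.

```typescript
// Sketch of a debounced payload processor. The payload shape and render
// callback are illustrative assumptions, not any specific client's API.
type IncrementalPayload = {
  data?: unknown;
  path?: (string | number)[];
  hasNext?: boolean;
};

class DebouncedPayloadProcessor {
  private pending: IncrementalPayload[] = [];
  private timer: ReturnType<typeof setTimeout> | null = null;

  constructor(
    private render: (batch: IncrementalPayload[]) => void,
    private waitMs: number = 50, // the "magic number" discussed above
  ) {}

  receive(payload: IncrementalPayload): void {
    this.pending.push(payload);
    // Restart the debounce window on every incoming payload.
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.waitMs);
  }

  flush(): void {
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = null;
    const batch = this.pending;
    this.pending = [];
    // Process the whole batch in one go, triggering a single re-render.
    if (batch.length > 0) this.render(batch);
  }
}
```

If the transport signals completion (for example, a `hasNext: false` payload in the incremental-delivery proposals), the client could call `flush()` immediately instead of waiting out the timer, avoiding the added latency on the final payload.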

Might be worth calling out the actual complexity of finding the predetermined amount of time for debouncing. In practice, we could use logging to analyze historical performance data and come up with a "magic number". However, as the query continues to evolve (adding/removing fields, adding/removing @defer and @stream) and the server implementation changes over time, that number may no longer be optimal.

Developers either need to revisit this number from time to time, or write a process/job that runs periodically and automatically finds a new number. Both of these carry pretty significant complexity.


3. __Server sends batched responses.__ This approach changes the spec to allow a GraphQL server to return either a single GraphQL response map, as today, or a list of GraphQL response maps. This gives the server the flexibility to determine when it is beneficial to group incremental responses together. If several responses are ready at the same time, the server can deliver them together. The server may also have knowledge of how long resolvers will take to resolve and could choose to debounce.
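To illustrate the grouping decision, a server could batch payloads whose completion times fall close together and send each group as one list of response maps. A minimal synchronous sketch of such grouping follows; the `ResponseMap` shape, the `windowMs` parameter, and the function name are assumptions for illustration, not spec text.

```typescript
// Sketch only: group payloads whose "ready" timestamps fall within
// `windowMs` of the first payload in the batch. The server would then
// send each group over the wire as one list of response maps.
type ResponseMap = {
  data?: unknown;
  path?: (string | number)[];
  hasNext?: boolean;
};

function groupIntoBatches(
  ready: { at: number; payload: ResponseMap }[],
  windowMs: number,
): ResponseMap[][] {
  const batches: ResponseMap[][] = [];
  let current: ResponseMap[] = [];
  let batchStart = 0;
  for (const { at, payload } of ready) {
    // Start a new batch when this payload is outside the current window.
    if (current.length > 0 && at - batchStart > windowMs) {
      batches.push(current);
      current = [];
    }
    if (current.length === 0) batchStart = at;
    current.push(payload);
  }
  if (current.length > 0) batches.push(current);
  return batches;
}
```

In a real server the grouping would happen on a live stream rather than a pre-collected array, but the windowing decision is the same: payloads ready together travel together, and a straggler opens a new batch.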

My intuition is that this is the direction we want to pursue, precisely for the flexibility you mention here. When experimenting at Facebook, we tried client-side debouncing first and, due to the complexity of figuring out a reasonable interval, we ended up with a naive debounce that always renders twice: once for the initial payload and once after all incremental payloads have arrived.
That turned out not to be granular enough, so we implemented server-side batching. One complication is that the GraphQL executor doesn't actually have a good sense of how expensive a resolver is, so it's hard for the executor to make a batching decision on the fly while executing a query. It's more practical to outsource the decision of how to batch to the underlying @stream and @defer resolvers. In the end, we only implemented batching for @stream, since global batching at the query level would mean @deferred and @streamed resolvers need to communicate with each other in some sense.

I wonder if we can offer practical suggestions for servers to implement server side batching. If not, should we instead propose something like @stream level batching?

@robrichard (Author)
@Keweiqu thanks for the comments, I added some more information about debouncing as you suggested.

@IvanGoncharov (Member)
@robrichard Can you please rebase so I can merge it?

Base automatically changed from master to main February 3, 2021 04:50
@leebyron leebyron added the 📣 RFC document PR creates or changes document inside "rfc" folder label Apr 6, 2021
@leebyron leebyron merged commit 4fd39e0 into graphql:main Apr 6, 2021