Description
@stream and @defer are collaborative efforts between the client and server to improve the application’s time-to-interactive (TTI) metric. This issue discusses some important reasons to let the server choose whether to deliver data asynchronously.
Eagerly fulfill @defer and @stream
When a piece of less important data happens to be expensive, @defer and @stream enable the application to improve TTI without making the expensive data slower. However, importance and cost are not always aligned.
If the deferred data is cheap and fast to retrieve, and the important (non-deferred) data is the bottleneck, the server may decide to include the deferred data in the initial payload, since it is already available. In this situation, unnecessarily delaying delivery can degrade the user experience by triggering multiple renders.
(Caveat: although eagerly fulfilling @defer and @stream may be considered “free” on the server side, there is a client-side cost to deserializing the larger initial JSON payload. Potential optimizations exist, such as brace matching that seeks past the deferred data and skips the entire subtree.)
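To illustrate eager fulfillment, consider a query like the following (the type and field names are hypothetical, for illustration only):

```graphql
# Hypothetical schema: a profile whose bio is marked as deferrable.
query Profile {
  user {
    name
    ... on User @defer(label: "bio") {
      bio
    }
  }
}
```

If `bio` happens to already be resolved by the time the initial payload is assembled, an eagerly fulfilling server can include it directly in the initial `data` and indicate that no incremental payloads will follow, rather than sending a separate follow-up payload that triggers a second render.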
Fast Experimentation and Iteration
Oftentimes, it is hard to tell the true cost of a piece of data upfront:
- Shared memoization may make certain fields appear cheaper.
- Per-field attribution is hard to achieve, as most GraphQL resolvers are asynchronous in practice.
Therefore, when using @defer and @stream it’s useful to set up an experiment and compare the results. If a client has a long release cycle, toggling @defer and @stream on and off on the server side is much faster.
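A server-side experiment gate could look like the following sketch (the `getVariant` helper and the variant names are hypothetical; any server-side config or experimentation system would do):

```typescript
// Hypothetical per-user experiment assignment; illustrative only.
const variants: Record<string, string> = { alice: "defer_on", bob: "defer_off" };

function getVariant(userId: string): string {
  return variants[userId] ?? "defer_off";
}

// The server honors @defer/@stream only for users in the "defer_on" arm.
// For the control arm it eagerly fulfills everything in the initial payload,
// letting us compare TTI between the two groups without a client release.
function shouldHonorDefer(userId: string): boolean {
  return getVariant(userId) === "defer_on";
}

console.log(shouldHonorDefer("alice")); // true
console.log(shouldHonorDefer("bob"));   // false
```

Because the assignment lives entirely on the server, flipping the experiment on or off takes effect immediately, with no client release in between.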
Additional benefits of server side configuration:
- Peak time versus off-peak time: this optimization is also described in #691, “@stream and @defer: Asynchrony, laziness, or both?”
- Personalized adaptive behavior: further optimizations may be achieved through personalized configuration with the help of ML prediction. For example, @defer and @stream in the context of prefetch come at a certain cost when the user never visits the prefetched surface. With personalized ML prediction, the server can decide whether to respect the @defer and @stream directives based on the likelihood of the person visiting the surface.
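One possible shape for such an adaptive policy is sketched below; the `predictVisitProbability` callback, the threshold value, and the decision direction are all assumptions for illustration, not part of any spec:

```typescript
// Hypothetical adaptive policy for a prefetch: if the ML model predicts the
// user is very likely to visit the surface, eagerly fulfill everything so the
// surface is fully ready on arrival; otherwise keep deferring the expensive
// parts to avoid wasted work on a surface that may never be viewed.
function shouldHonorDeferForPrefetch(
  predictVisitProbability: () => number,
  threshold: number = 0.8
): boolean {
  return predictVisitProbability() < threshold;
}

console.log(shouldHonorDeferForPrefetch(() => 0.95)); // false: fulfill eagerly
console.log(shouldHonorDeferForPrefetch(() => 0.2));  // true: keep deferring
```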
Proposed standardization:
For the reasons listed above, it is essential to allow server-side control over whether to respect @defer and @stream. Therefore, we’d like to propose that the spec specify the following requirements:
- Clients should be able to properly handle eagerly fulfilled @defer and @stream payloads.
- An optional boolean argument “if” to turn @defer and @stream on and off. While not strictly required to enable server-side control, this argument expresses control as a configuration mapped to a variable, which turns out to be a convenient abstraction for experimentation and adaptive configuration.
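With the proposed “if” argument, a single query document can serve both behaviors, with the decision carried in a variable that server-supplied or experiment-supplied configuration can populate (the type and field names are hypothetical):

```graphql
query Profile($shouldDefer: Boolean!) {
  user {
    name
    ... on User @defer(if: $shouldDefer) {
      bio
    }
  }
}
```

With `$shouldDefer: false`, the fragment is delivered as part of the initial payload; with `$shouldDefer: true`, it is delivered incrementally. No client code change is needed to switch between the two.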