to stream or not to stream, that is the question #14
Comments
Which objects specifically?
Sounds right to me, but isn't that why we want to use Piscina? To cap the number of simultaneous requests per event loop? With Piscina in place, good TTFBs become achievable again, I think.
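For what it's worth, a rough sketch of that capping idea with Piscina (file names, thread counts, and the `App` component below are placeholders, not this project's code):

```ts
// main.ts -- a minimal sketch: offload the blocking render to a fixed-size
// worker pool so only `maxThreads` renders run at once; extra requests queue
// up to `maxQueue` and are rejected after that instead of piling onto one loop.
import { Piscina } from 'piscina';
import { resolve } from 'path';
import http from 'http';

const pool = new Piscina({
  filename: resolve(__dirname, 'render-worker.js'), // hypothetical worker file
  maxThreads: 4,   // cap simultaneous renders
  maxQueue: 100,   // shed load instead of queueing forever
});

http.createServer(async (req, res) => {
  try {
    const html = await pool.run({ url: req.url }); // worker returns an HTML string
    res.writeHead(200, { 'content-type': 'text/html' }).end(html);
  } catch {
    res.writeHead(503).end('overloaded');
  }
}).listen(3000);

// render-worker.ts -- hypothetical worker: does the CPU-heavy render off the
// main event loop and returns the resulting HTML string.
//
// import * as React from 'react';
// import { renderToString } from 'react-dom/server';
// import App from './App'; // placeholder component
//
// export default ({ url }: { url: string }) =>
//   '<!doctype html>' + renderToString(React.createElement(App, { url }));
```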
Some time has passed and I do not recall exactly which objects; likely React elements. More data is generated than can be collected in time, and it ends up being promoted to old space.
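One rough way to observe that promotion (a sketch; the render call is a placeholder) is to sample old-space usage from Node's `v8` module around a render:

```ts
// Sample old-space usage before and after a render to see how much survives
// young-generation collection. getHeapSpaceStatistics() is part of Node's v8 module.
import { getHeapSpaceStatistics } from 'v8';

function oldSpaceUsedMB(): number {
  const old = getHeapSpaceStatistics().find((s) => s.space_name === 'old_space');
  return old ? old.space_used_size / 1024 / 1024 : 0;
}

const before = oldSpaceUsedMB();
// ... run one synchronous renderToString() or one streamed render here ...
const after = oldSpaceUsedMB();
console.log(`old space grew by ${(after - before).toFixed(1)} MB`);
```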
@mcollina do you still feel the same way about this? I have seen a few more React SSR frameworks go by that support streaming for the same reasons I mentioned above: a much better TTFB and user experience, especially with things like Suspense and Server Components coming down the pipe, which let you send more data to the client after the initial render has been flushed.
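For reference, the streaming approach I mean looks roughly like this with React 18's `renderToPipeableStream` (the `App` component and port are placeholders):

```ts
// A minimal React 18 streaming SSR sketch. The shell is flushed as soon as it
// is ready; Suspense boundaries stream in later, which is where the TTFB win comes from.
import http from 'http';
import * as React from 'react';
import { renderToPipeableStream } from 'react-dom/server';
import App from './App'; // hypothetical component

http.createServer((req, res) => {
  const { pipe } = renderToPipeableStream(React.createElement(App), {
    onShellReady() {
      // First byte goes out here, before slow Suspense boundaries resolve.
      res.writeHead(200, { 'content-type': 'text/html' });
      pipe(res);
    },
    onError(err) {
      console.error(err);
    },
  });
}).listen(3000);
```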
This consideration still holds:
The game changer is Server Components: they will significantly improve TTFB. At least we can trade some scalability for improved TTFB. If we add a circuit breaker on top (like under-pressure), this will be a really good combo.
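A sketch of that circuit-breaker combination with Fastify and under-pressure (the thresholds and route are illustrative, not a recommendation from this thread):

```ts
// When the event loop delay or heap usage crosses the limits, under-pressure
// starts answering 503 with a Retry-After header instead of accepting more renders.
import Fastify from 'fastify';
import underPressure from '@fastify/under-pressure';

const app = Fastify();

app.register(underPressure, {
  maxEventLoopDelay: 200,              // ms; trip the breaker when renders block longer
  maxHeapUsedBytes: 512 * 1024 * 1024, // illustrative heap ceiling
  message: 'Service under pressure, try again shortly',
  retryAfter: 1,                       // seconds, sent in the Retry-After header
});

app.get('/', async () => {
  // ... kick off the (streamed or worker-pooled) render here ...
  return 'rendered html';
});

app.listen({ port: 3000 });
```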
Continuing from: piscinajs/piscina#108 (comment)
The above is not correct. The problem is that those objects live longer, therefore they have a higher chance of getting moved to old space. This happens a lot. In most cases of a synchronous renderToString(), everything gets collected before receiving any other data.

We are not in agreement. From my experience, most React SSR lowers the end-user experience under high pressure, as the CPUs get extremely busy and the event loop overloads. A renderToString() with a 100ms event loop block would easily become a 150-200ms event loop block using streams, e.g. going from 10 to 5 req/s. Simply put, it increases the chances of receiving more requests than the server can handle. I'm basically worried about the 99.9th latency percentile.
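That tail-latency concern can be made concrete by watching event loop delay with Node's perf_hooks; a rough sketch (interval and resolution are arbitrary):

```ts
// Record event-loop delay as a histogram so the p99.9 effect of sync vs
// streamed renders can be compared under load. Values are in nanoseconds.
import { monitorEventLoopDelay } from 'perf_hooks';

const h = monitorEventLoopDelay({ resolution: 10 }); // sample every 10 ms
h.enable();

setInterval(() => {
  const toMs = (ns: number) => (ns / 1e6).toFixed(1);
  console.log(
    `event loop delay p50=${toMs(h.percentile(50))}ms ` +
    `p99=${toMs(h.percentile(99))}ms p99.9=${toMs(h.percentile(99.9))}ms`
  );
  h.reset();
}, 5000);
```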