As developers, we are often faced with decisions that will affect the entire architecture of our applications. One of the core decisions web developers must make is where to implement logic and rendering in their application. This can be difficult, since there are a number of different ways to build a website.
Our understanding of this space is informed by our work in Chrome talking to large sites over the past few years. Broadly speaking, we would encourage developers to consider server-side rendering or static rendering over a full rehydration approach.
In order to better understand the architectures we're choosing from when we make this decision, we need to have a solid understanding of each approach and consistent terminology to use when speaking about them. The differences between these approaches help illustrate the trade-offs of rendering on the web through the lens of performance.
- Server-side rendering (SSR): rendering a client-side or universal app to HTML on the server.
- Prerendering: running a client-side application at build time to capture its initial state as static HTML.
- Time to First Byte (TTFB): seen as the time between clicking a link and the first bit of content coming in.
- First Contentful Paint (FCP): the time when requested content (article body, etc) becomes visible.
- Interaction to Next Paint (INP): seen as a representative metric that assesses whether a page is responding consistently fast to user inputs.
- Total Blocking Time (TBT): A proxy metric for INP, which calculates the amount of time the main thread was blocked during page load.
Server-side rendering #
Server-side rendering generates the full HTML for a page on the server in response to navigation. This avoids additional round-trips for data fetching and templating on the client, since it's handled before the browser gets a response.
Whether server-side rendering is enough for your application largely depends on what type of experience you are building. There is a long-standing debate over the correct applications of server-side rendering versus client-side rendering, but it's important to remember that you can opt to use server-side rendering for some pages and not others. Some sites have adopted hybrid rendering techniques with success. Netflix server-renders its relatively static landing pages, while prefetching the JS for interaction-heavy pages, giving these heavier client-rendered pages a better chance of loading quickly.
Many modern frameworks, libraries and architectures make it possible to render the same application on both the client and the server. These techniques can be used for server-side rendering. However, it's important to note that architectures where rendering happens both on the server and on the client are their own class of solution with very different performance characteristics and tradeoffs. React users can use server DOM APIs or solutions built atop them like Next.js for server-side rendering. Vue users can look at Vue's server-side rendering guide or Nuxt. Angular has Universal. Most popular solutions employ some form of hydration though, so be aware of the approach in use before selecting a tool.
Static rendering #
Static rendering happens at build-time. This approach offers a fast FCP, and also a lower TBT and INP—assuming the amount of client-side JS is limited. Unlike server-side rendering, it also manages to achieve a consistently fast TTFB, since the HTML for a page doesn't have to be dynamically generated on the server. Generally, static rendering means producing a separate HTML file for each URL ahead of time. With HTML responses generated in advance, static renders can be deployed to multiple CDNs to take advantage of edge caching.
Solutions for static rendering come in all shapes and sizes. Tools like Gatsby are designed to make developers feel like their application is being rendered dynamically rather than generated as a build step. Static site generation tools such as 11ty, Jekyll, and Metalsmith embrace their static nature, providing a more template-driven approach.
One of the downsides to static rendering is that individual HTML files must be generated for every possible URL. This can be challenging or even infeasible when you can't predict what those URLs will be ahead of time, or for sites with a large number of unique pages.
Server-side rendering versus static rendering #
renderToString() can be slow as it's synchronous and single-threaded. Newer React server DOM APIs support streaming, which can get the initial part of an HTML response to the browser sooner while the rest of it is still being generated on the server.
Getting server-side rendering "right" can involve finding or building a solution for component caching, managing memory consumption, applying memoization techniques, and other concerns. You're generally processing/rebuilding the same application multiple times—once on the client and once on the server. Just because server-side rendering can make something show up sooner doesn't suddenly mean you have less work to do—if you have a lot of work on the client after a server-generated HTML response arrives on the client, this can still lead to higher TBT and INP for your website.
Server-side rendering produces HTML on-demand for each URL, but can be slower than just serving static rendered content. If you can put in the additional leg-work, server-side rendering plus HTML caching can significantly reduce server render time. The upside to server-side rendering is the ability to pull more "live" data and respond to a more complete set of requests than is possible with static rendering. Pages requiring personalization are a concrete example of the type of request that would not work well with static rendering.
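As a rough sketch of the caching idea (names are illustrative, and a production cache would also need invalidation and TTLs), memoizing the rendered HTML per URL lets repeat requests skip the render step entirely:

```javascript
// Sketch: memoize server renders per URL so repeat requests skip the
// expensive render. Personalized pages can't share a cache key like
// this, which is one reason they suit on-demand rendering instead.
const htmlCache = new Map();
let renderCount = 0;

function renderUrl(url) {
  renderCount += 1; // track how often we actually render
  return `<!doctype html><html><body><h1>${url}</h1></body></html>`;
}

function renderWithCache(url) {
  if (!htmlCache.has(url)) {
    htmlCache.set(url, renderUrl(url));
  }
  return htmlCache.get(url);
}
```

The first request for a URL pays the render cost; later requests for the same URL are served from the cache, approaching static-rendering response times for cacheable pages.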
Client-side rendering #
<link rel=preload>, which gets the parser working for you sooner. Patterns like PRPL are also worth evaluating in order to ensure initial and subsequent navigations feel instant.
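One way to apply this is to have the server emit preload tags for each route's critical assets. The manifest below is a hypothetical build output mapping routes to assets:

```javascript
// Emit <link rel="preload"> tags for a route's critical assets so the
// browser can start fetching them before the parser reaches the
// scripts that use them. The manifest is a hypothetical build output.
const manifest = {
  '/': [
    { href: '/app.js', as: 'script' },
    { href: '/styles.css', as: 'style' },
  ],
};

function preloadTags(route) {
  return (manifest[route] || [])
    .map(({ href, as }) => `<link rel="preload" href="${href}" as="${as}">`)
    .join('\n');
}
```

The generated tags would be injected into the document `<head>` for each route.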
For folks building single page applications, identifying core parts of the user interface shared by most pages means you can apply the application shell caching technique. Combined with service workers, this can dramatically improve perceived performance on repeat visits, as the application shell HTML and its dependencies can be loaded from CacheStorage very quickly.
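The heart of the technique is a cache-first lookup for the shell. In the sketch below the cache and fetch function are injected so the strategy is plain logic; in a real service worker, the cache would come from `caches.open(...)` and `fetchFn` would be the global `fetch`:

```javascript
// Cache-first lookup for the application shell: serve from
// CacheStorage when possible, fall back to the network and store
// the result for next time.
async function cacheFirst(url, cache, fetchFn) {
  const cached = await cache.match(url);
  if (cached) return cached;           // shell served from cache
  const response = await fetchFn(url); // fall back to the network
  await cache.put(url, response);
  return response;
}

// In a service worker this would be wired up roughly as:
// self.addEventListener('fetch', (event) => {
//   event.respondWith(cacheFirst(event.request.url, shellCache, fetch));
// });
```

On repeat visits the shell never touches the network, which is where the perceived-performance win comes from.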
Combining server-side rendering and client-side rendering via rehydration #
The primary downside of server-side rendering with rehydration is that it can have a significant negative impact on TBT and INP, even if it improves FCP. Server-side rendered pages can deceptively appear to be loaded and interactive, but can't actually respond to input until the client-side scripts for components are executed and event handlers have been attached. This can take seconds or even minutes on mobile.
Perhaps you've experienced this yourself—for a period of time after it looks like a page has loaded, clicking or tapping does nothing. This quickly becomes frustrating, as the user is left to wonder why nothing is happening when they try to interact with the page.
A rehydration problem: one app for the price of two #
As you can see, the server is returning a description of the application's UI in response to a navigation request, but it's also returning the source data used to compose that UI, and a complete copy of the UI's implementation which then boots up on the client. Only after bundle.js has finished loading and executing does this UI become interactive.
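The duplication can be made concrete with a small, hypothetical sketch (the names here are illustrative): the response carries the rendered markup, the serialized data that produced it, and a script tag for the bundle that will render it all over again on the client.

```javascript
// "One app for the price of two": the server response carries the
// rendered UI, the data behind it, and the code to render it again.
function renderItem(item) {
  return `<li>${item}</li>`;
}

function serverResponse(items) {
  return [
    `<ul id="list">${items.map(renderItem).join('')}</ul>`,
    // the same data, serialized so the client can rebuild its state
    `<script type="application/json" id="state">${JSON.stringify(items)}</script>`,
    // the same rendering code, shipped again in the client bundle
    '<script src="/bundle.js"></script>',
  ].join('\n');
}

// On the client, the bundle re-reads the serialized state and re-runs
// the same rendering logic against the already-rendered DOM before
// any event handler can respond.
function readState(html) {
  const match = html.match(/<script type="application\/json" id="state">(.*?)<\/script>/);
  return match ? JSON.parse(match[1]) : null;
}
```

Everything in that response is paid for twice: once as markup, once as data plus code.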
Performance metrics collected from real websites using server-side rendering and rehydration indicate its use should be discouraged. Ultimately, the reason comes down to the user experience: it's extremely easy to end up leaving users in an "uncanny valley", where interactivity feels absent even though the page appears to be ready.
There's hope for server-side rendering with rehydration, though. In the short term, only using server-side rendering for highly cacheable content can reduce TTFB, producing similar results to prerendering. Rehydrating incrementally, progressively, or partially may be the key to making this technique more viable in the future.
Streaming server-side rendering and progressive rehydration #
Server-side rendering has had a number of developments over the last few years.
Streaming server-side rendering allows you to send HTML in chunks that the browser can progressively render as it's received. This can result in a fast FCP, as markup arrives to users sooner. In React, the asynchronous, stream-based renderToPipeableStream()—in contrast to the synchronous renderToString()—also handles backpressure well.
Progressive rehydration can also help avoid one of the most common server-side rendering rehydration pitfalls, where a server-rendered DOM tree gets destroyed and then immediately rebuilt—most often because the initial synchronous client-side render required data that wasn't quite ready, perhaps awaiting resolution of a Promise.
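One hypothetical way to guard against that destroy-and-rebuild pitfall is to hydrate each piece of the page only once the data it was server-rendered with is available on the client, rather than running one synchronous render over the whole tree:

```javascript
// Sketch: hydrate each component only after its data has resolved,
// so the server-rendered DOM isn't torn down by a client render that
// ran before the data was ready. Names are illustrative.
async function hydrateWhenReady(components) {
  const hydrated = [];
  for (const { name, data, hydrate } of components) {
    const resolved = await data; // wait instead of rendering empty state
    hydrate(resolved);           // attach listeners for just this piece
    hydrated.push(name);
  }
  return hydrated;
}
```

A real implementation would also prioritize visible or interaction-critical components first, rather than hydrating strictly in order.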
Partial rehydration #
The partial hydration approach comes with its own issues and compromises. It poses some interesting challenges for caching, and client-side navigation means we can't assume server-rendered HTML for inert parts of the application will be available without a full page load.
Trisomorphic rendering #
If service workers are an option for you, "trisomorphic" rendering may also be of interest. It's a technique where you can use streaming server-side rendering for initial/non-JS navigations, and then have your service worker take on rendering of HTML for navigations after it has been installed. This can keep cached components and templates up to date and enables SPA-style navigations for rendering new views in the same session. This approach works best when you can share the same templating and routing code between the server, client page, and service worker.
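The sharing that makes this work can be sketched as one template function consumed by both the server and the service worker (the view names and render paths here are hypothetical), so both produce identical markup for the same view:

```javascript
// One template shared by server, client, and service worker is the
// precondition for trisomorphic rendering: all three render paths
// must agree on the markup for a view.
function renderView(view, data) {
  if (view === 'article') {
    return `<article><h1>${data.title}</h1><p>${data.body}</p></article>`;
  }
  return '<p>Not found</p>';
}

// Server: wraps the view in a full document for initial navigations.
function serverRender(view, data) {
  return `<!doctype html><html><body>${renderView(view, data)}</body></html>`;
}

// Service worker: once installed, it can answer navigations itself
// using the same template plus cached data, instead of the network.
function serviceWorkerRender(view, data) {
  return serverRender(view, data); // identical output, rendered locally
}
```

Because both paths emit the same HTML, the user sees no difference between a server-rendered navigation and one answered locally by the service worker.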
SEO considerations #
Wrapping up #
Thanks to everyone for their reviews and inspiration:
Jeffrey Posnick, Houssein Djirdeh, Shubhie Panicker, Chris Harrelson, and Sebastian Markbåge