How long are you willing to wait for a page to load? Not long, I'd guess.
You're not the only one who hates waiting for a page to load. In fact, page loading performance is one of the most important factors in a good user experience on the web.
As Jakob Nielsen says in this article:
A snappy user experience beats a glamorous one, for the simple reason that people engage more with a site when they can move freely and focus on the content instead of on their endless wait.
Page loading performance is so important that Google now uses it as a ranking factor in search results. In other words, your SEO now depends on your page performance too.
Because of this, many developers are trying to optimize their website metrics using Google's PageSpeed Insights tool to get the best score possible.
In this post I'll discuss everything you need to know to optimize page performance and get a perfect score on PageSpeed. First I'll present the main factors that affect page loading speed, then the metrics used to measure page performance and the tools used to measure them, and finally some techniques to improve page performance.
Before optimizing web pages we need to understand the process of rendering pages to the screen.
My previous article presented an overview of how HTML and CSS files are parsed and combined to render a page on the screen. Let's briefly review this process, focusing on its performance-related aspects.
The process of rendering a web page in the browser starts by downloading the HTML file from a server. As soon as the browser receives the first bytes of the HTML it starts to parse it progressively to construct the DOM tree.
During the DOM parsing process the parser can find a link tag to a CSS file. When this tag is found, the parser immediately dispatches a request to download the CSS file and continues to parse the HTML. The CSS download occurs in parallel with HTML parsing.
Now imagine that the parser has finished constructing the DOM but the browser is still downloading CSS files. In this situation the browser can't render anything until all CSS is downloaded and parsed, otherwise a page with no styling would be displayed to the user. Worse, that unstyled page would then be restyled as soon as the CSS finished downloading and parsing, causing what is called a flash of unstyled content (FOUC).
To avoid causing a FOUC, the browser has to wait for all CSS to be downloaded and parsed before displaying the page. For this reason we say that CSS is a render blocking resource.
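In markup terms, a plain stylesheet link like the following (style.css is just a placeholder name) is enough to make rendering wait:

```html
<head>
  <!-- Render blocking: the browser will not paint anything
       until this file is downloaded and parsed. -->
  <link rel="stylesheet" href="style.css">
</head>
```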
During HTML parsing the browser can also encounter script tags. As with CSS, the browser dispatches a request to download the JavaScript file immediately but, instead of continuing to parse the HTML, it pauses until the JavaScript is downloaded and executed.
The entire DOM construction process has to be paused because JavaScript can manipulate the DOM by adding elements to it or removing elements from it, altering the structure of the DOM parsed from the HTML so far. For this reason JavaScript is considered a parser blocking resource.
Another important property of JavaScript is that it can also read and manipulate element styles. When a script touches styles, the browser has to verify that all known CSS has been downloaded and parsed before letting the script continue. If the CSS is not ready, script execution is paused until CSS parsing is done, ensuring the script sees the most up-to-date style information for the page.
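This interplay can be sketched with a hypothetical page fragment: the inline script below cannot proceed until style.css is ready, because it reads a computed style.

```html
<link rel="stylesheet" href="style.css">
<div class="box"></div>
<script>
  // This read forces the browser to finish downloading and
  // parsing style.css before the script can continue, so the
  // value returned reflects the final applied styles.
  const box = document.querySelector('.box');
  console.log(getComputedStyle(box).width);
</script>
```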
Quick recap: CSS is a render blocking resource, because the browser waits for it before painting. JavaScript is a parser blocking resource, because HTML parsing pauses while scripts download and execute. And JavaScript that reads or manipulates styles must additionally wait for all pending CSS to be parsed.
The main goal of website optimization is to improve the performance perceived by the users. The best way to accomplish this is by displaying the first useful bits of information (above the fold content) for users as fast as possible.
Knowing how CSS and JavaScript resources affect HTML parsing, we can now start to optimize our pages to avoid render blocking and display the above the fold content faster. Let's explore some strategies that can be used to improve page performance.
The most obvious, but often the most difficult to implement, strategy for improving website performance is to remove unused content. This includes unused HTML, unused CSS and unused JavaScript.
The problem with unused content is that the HTML and CSS parsers still have to do the work of parsing it even though it will never be painted. The problem is the same for JavaScript: the interpreter still has to parse script code that will never be executed.
By removing unused resources you decrease the script, CSS and HTML sizes, improving download times and requiring less work from the CSS/HTML parsers and the JavaScript interpreter to render the page.
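As a toy illustration of the idea, the snippet below checks which class selectors from a list actually appear in an HTML string. This is a naive sketch, not a production tool; real projects typically rely on browser coverage reports or purpose-built CSS pruning tools that parse CSS and HTML properly.

```javascript
// Naive unused-CSS detector: returns the class selectors that are
// never referenced in a class="..." attribute of the given HTML.
// Toy sketch only; it ignores ids, element selectors, dynamic classes, etc.
function findUnusedClassSelectors(html, selectors) {
  return selectors.filter((selector) => {
    const className = selector.replace(/^\./, '');
    const regex = new RegExp(`class="[^"]*\\b${className}\\b[^"]*"`);
    return !regex.test(html);
  });
}

const html = '<div class="hero"><p class="hero-text">Hi</p></div>';
const selectors = ['.hero', '.hero-text', '.modal', '.sidebar'];

console.log(findUnusedClassSelectors(html, selectors));
// → ['.modal', '.sidebar']
```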
This strategy consists of splitting your CSS into critical and non-critical parts and embedding the critical CSS directly in the page's HTML.
The critical CSS consists of rules needed to render above the fold content while the non-critical CSS consists of rules used only by parts of the page that are not displayed immediately, like components rendered below the fold, modals and hidden menus.
After identifying all the critical rules, a style tag can be used to inline them directly into the HTML file. This avoids a second network round trip to fetch the critical CSS. As a result, the page rendering process won't be blocked by a CSS download and the above the fold content can be rendered much faster.
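A minimal sketch of what the resulting HTML head might look like (the rule and file names are hypothetical):

```html
<head>
  <!-- Critical rules inlined: no extra request is needed to
       style the above the fold content. -->
  <style>
    .hero { font-size: 2rem; color: #222; }
  </style>
  <!-- Non-critical rules live in a separate file (here called
       noncritical.css) and are loaded later with JavaScript. -->
</head>
```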
Since the non-critical CSS is not required for the initial render, it can safely be loaded after the above the fold content is displayed.
To properly apply this strategy, a small piece of JavaScript is required. The reason is that, as discussed before, any link tag pointing to a CSS resource will cause the browser to pause rendering until all the CSS is downloaded and parsed.
The following javascript snippet can be used to load CSS files after the initial page load:
window.onload = () => {
  const link = document.createElement('link');
  link.setAttribute('rel', 'stylesheet');
  link.setAttribute('href', 'path/to/noncritical.css');
  document.head.appendChild(link);
};
Basically it inserts the non-critical CSS into the DOM, making the browser download, parse and apply the styles defined in the file. Note that this script only runs on the window onload event. This ensures the initial page has been completely rendered before the non-critical CSS is inserted, preventing the new CSS from blocking the initial render.
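A common markup-only variation achieves the same effect without a separate script (shown here as a sketch with a hypothetical file name): stylesheets whose media does not match the current environment are downloaded without blocking rendering, so the link starts as a print stylesheet and is switched on once loaded.

```html
<link rel="stylesheet" href="path/to/noncritical.css"
      media="print" onload="this.media='all'">
```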
The script tag supports two attributes that are useful for loading scripts without blocking the HTML parser: async and defer.
Both attributes tell the browser to download script files in the background while the DOM is constructed. The main difference between the two is the moment the downloaded script is executed.
Scripts loaded with defer are executed only after the HTML parsing is done. For this reason, defer scripts never block parsing.
Async scripts, on the other hand, are executed as soon as they are available. If the browser finishes downloading an async script while the HTML is still being parsed, it will pause parsing to execute the script.
For this reason defer should generally be preferred over async when loading JavaScript.
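In markup, the two attributes look like this (app.js and analytics.js are hypothetical file names):

```html
<!-- Downloads in the background; runs only after HTML parsing is done. -->
<script defer src="app.js"></script>

<!-- Downloads in the background; runs as soon as it arrives,
     pausing the parser if parsing is still in progress. -->
<script async src="analytics.js"></script>
```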
Resource preloading allows the browser to dispatch a request to download a resource that will be needed soon. This way, the resource is already available when the browser needs it.
Resource preloading works with any kind of resource: CSS, JavaScript, images and even JSON fetched from an API.
The following tag is used to preload resources. Note that you should specify the kind of resource you're loading in the as attribute:
<link rel="preload" href="style.css" as="style">
Preloading is particularly useful for telling the browser to download the non-critical CSS before it is actually needed. That speeds up non-critical CSS processing without impacting the rendering of the critical part of the page.
After tackling render blocking with the basic strategies presented above, some additional work can be done to ensure that critical resources are delivered as fast as possible.
One of them is HTTP compression. All current browsers support gzip and brotli compression, and it's easy to find tools that let your server compress files before sending them to the browser.
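To get a feel for the savings, you can compress a file locally and compare sizes. This sketch uses the standard gzip CLI; brotli offers a similar command line tool.

```shell
# Create a sample HTML file with repetitive markup (compresses well).
printf '<div class="item">hello</div>\n%.0s' $(seq 1 100) > sample.html

# Compare raw vs gzip-compressed sizes, in bytes.
wc -c < sample.html
gzip -9 -c sample.html | wc -c
```

Markup and text tend to compress dramatically because they are full of repeated tags and attribute names.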
The idea is simple. The faster a server can deliver the content to the browser, the sooner the browser can start parsing and rendering it.
However this is highly dependent on the server infrastructure and the tools used to build the site. Some ideas for improving server speed include optimizing SQL queries, finding and fixing slow code, and using caching when possible.
For any performance work, it is essential to have a set of standard metrics to verify that the optimizations are having a positive effect. Let's take a look at the three most important web performance metrics today.
The largest contentful paint (LCP) metric represents the time between the initial request for a page and the paint of the largest page element above the fold. The largest element can be an image, a video poster, a background image defined in CSS with url(), or a block of text.
It is very common for the LCP element to be an image. If that's the case, resource preloading can be used to start loading it sooner. It is also important to resize images to dimensions close to their final displayed size to decrease file size; stripping image metadata can help too.
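If the LCP element is an image, a preload hint like the one below (hero.jpg is a hypothetical file name) lets the browser start fetching it before the parser even reaches the img tag:

```html
<link rel="preload" href="hero.jpg" as="image">
```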
This metric is part of the Core Web Vitals and has an impact on SEO ranking. A measured time of 2.5 seconds or less is considered a good value.
The first input delay metric represents the time between the first user interaction with a site and the time the browser responds to the interaction.
The main factor that impacts FID is the time taken by JavaScript to set up all the interaction events for the page. For this reason it's important to execute as little JavaScript as possible at page load, to avoid blocking the setup of input events.
This metric is also part of the Core Web Vitals and impacts your SEO ranking. A measured time of 100 ms or less is considered a good value.
The time to first byte (TTFB) measures the time between the initial request for a page and the first byte of the response received by the browser. It is a good metric for measuring server response time.
This metric is not part of the core web vitals and doesn’t impact SEO ranking directly. But a good TTFB will help to get a good score on all other performance metrics.
PageSpeed is the main tool for measuring website performance. It uses Lighthouse to measure and report many metrics and, based on the results, presents a few insights.
It is important to note that the score given by this tool is not what Google uses as a ranking factor. Google instead uses data collected from real users (called "field data"). The PageSpeed report can display the field data collected by Google, depending on the site's size and relevance.
Another way to get a performance report is through the Lighthouse tab in Chrome Developer Tools. It can be accessed by right-clicking on the page, selecting "Inspect" and navigating to the "Lighthouse" tab.
The main advantage of this tool is that it can be used to measure the performance of sites that are still in development and running on environments not accessible from the internet.
PageSpeed and Lighthouse are good tools to get an overview of a website’s performance. But the only way to know the real performance is by measuring it in the field.
Google provides a JavaScript library to measure LCP, FID, TTFB and a few other metrics directly from the user's perspective. This script can be used to collect all the necessary data and send it to a server for processing and reporting.
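A sketch of what such field measurement can look like with Google's web-vitals library, loaded from a CDN in its documented module form (check the library's current docs for exact versions and function names; the /analytics endpoint here is hypothetical):

```html
<script type="module">
  import {onLCP, onFID, onTTFB} from 'https://unpkg.com/web-vitals@3?module';

  // Send each metric to a (hypothetical) collection endpoint.
  // sendBeacon keeps working even while the page is unloading.
  function sendToAnalytics(metric) {
    navigator.sendBeacon('/analytics', JSON.stringify(metric));
  }

  onLCP(sendToAnalytics);
  onFID(sendToAnalytics);
  onTTFB(sendToAnalytics);
</script>
```

In a bundled app you would install the package with npm and import it normally instead of using a CDN URL.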
It is particularly important to measure the Web Vitals metrics from real users, since this field data is what Google uses as a ranking factor.
Website optimization is a complex task. I hope this guide helps you optimize your website, get a good score on PageSpeed and improve the SEO ranking of your pages.