Even though First Contentful Paint (FCP) is not a Core Web Vital, it is still an important metric to measure and improve. The Lighthouse report includes FCP as one of the metrics to measure web performance.
First Contentful Paint (FCP) measures the time from when the user first navigates to the page to when any part of the page’s content is rendered on the screen. It is followed by later milestones such as Largest Contentful Paint (LCP), which is part of the Core Web Vitals, so work that improves FCP usually improves LCP as well.
To provide a good user experience, sites should strive to have a First Contentful Paint of 1.8 seconds or less. To ensure you’re hitting this target for most of your users, a good threshold to measure is the 75th percentile of page loads, segmented across mobile and desktop devices.
As a quick reminder, First Contentful Paint is the time from when the page starts loading until the user first sees something rendered on screen, the first visual indication that the page is actually loading. In other words, we need to respond quickly.
So how do we do that? How do we make sure that we’re responding quickly? Well, let’s conceptually think about our website again.
All websites inherently have:
There are a few different parts of this process that can contribute to slowing down the first contentful paint.
Your servers need to be quick, the documents you deliver need to be small enough to send efficiently, and the number of network hops between you and the user needs to be as small as possible.
So, how do we make sure your server is quick? The first step is to ensure that your server is sized correctly for the work it’s doing.
The specific changes to your servers will depend on what you’re doing and what technology stack you’re using. But essentially, you need to focus on three things:
There are two major things to consider here: the size of our content and how we compress it.
Content size: How do you deliver as small a payload as possible while still getting the effectiveness you need? This depends on your application, but whether you’re delivering an HTML page, a JavaScript file, or an image, there are sensible upper limits. For an HTML document, anything beyond roughly 80–100 KB in total is too much; an image should top out at around 1 MB. If you’re sending larger files than that, you’re sending too much content to be consumed efficiently.
Compression: Even if you’re sending a 100k HTML document, how you compress that document over the wire can greatly improve speed. Most platforms support Gzip compression, and newer web platforms support more advanced compression such as Brotli. The specific compression method will depend on your technology stack, but compressing your documents can greatly reduce the number of bytes sent over the wire.
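You can see the effect locally before touching a server. The sketch below gzips a synthetic HTML payload; the repetitive markup is a best case for compression, so real pages will shrink less dramatically, but the idea is the same:

```shell
# Compress a synthetic HTML payload locally to see gzip's effect on text.
# Repetitive markup is a best case; real pages typically shrink 60-80%.
payload=$(for i in $(seq 1 500); do printf '<div class="row">item %s</div>' "$i"; done)
orig=$(printf '%s' "$payload" | wc -c)
gz=$(printf '%s' "$payload" | gzip -c | wc -c)
echo "original: $orig bytes, gzipped: $gz bytes"
```

The gzipped size ends up a small fraction of the original, which is exactly the saving the wire sees when the server negotiates Content-Encoding with the browser.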
The following example shows how much compression can save. I used curl with Accept-Encoding request headers to compare transfer sizes for a site’s main JavaScript bundle: the uncompressed transfer is about 756 KB, while the gzip-compressed transfer is about 213 KB.
curl -o /dev/null -s -w "File size: %{size_download} bytes\n" https://jira.trungk18.com/main.js
File size: 756498 bytes
curl -H "Accept-Encoding: gzip" -o /dev/null -s -w "File size: %{size_download} bytes\n" https://jira.trungk18.com/main.js
File size: 212983 bytes
curl -H "Accept-Encoding: br" -o /dev/null -s -w "File size: %{size_download} bytes\n" https://jira.trungk18.com/main.js
File size: 215579 bytes
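From the measured numbers above, a quick awk one-liner shows the gzip saving as a percentage (note that the Brotli transfer here came out about the same size as gzip for this particular asset):

```shell
# Compression savings from the measurements above:
# 756498 bytes uncompressed vs 212983 bytes gzipped.
orig=756498
gzipped=212983
awk -v o="$orig" -v g="$gzipped" \
  'BEGIN { printf "%.0f%% smaller over the wire\n", 100 * (o - g) / o }'
# prints "72% smaller over the wire"
```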
If we take a look at the network, there’s more than one thing happening. Your servers live in a data center (Amazon, Microsoft, DigitalOcean, or wherever you host your content), and requests pass through a series of network hops in that provider’s infrastructure, which you largely don’t control.
On the other side, you have your users, connected to their own ISP or wireless network, with a series of hops to manage that network.
Between them is the infrastructure of the Internet itself, and this is the part we can influence: we choose where to place our servers and, therefore, how far they are from our users.
For example, if your servers are on the East Coast of the United States and your user is on the West Coast, the speed of light in fiber and the network hardware along the route impose a hard floor of tens of milliseconds on every round trip, before your server does any work at all. Reducing this distance can greatly improve performance.
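That floor is easy to estimate from distance alone. The sketch below assumes roughly 4,000 km of fiber between the coasts and light traveling at about two thirds of its vacuum speed in glass; real routes add queuing, routing, and handshake delays on top of this:

```shell
# Back-of-envelope latency from distance alone (assumed numbers, not measured):
# light in fiber covers roughly 200,000 km/s, about two thirds of c in vacuum.
awk 'BEGIN {
  km  = 4000                    # assumed East Coast to West Coast fiber path
  kms = 200000                  # approximate speed of light in fiber, km/s
  one_way = km / kms * 1000     # milliseconds
  printf "one-way: %.0f ms, round trip: %.0f ms\n", one_way, 2 * one_way
}'
# prints "one-way: 20 ms, round trip: 40 ms"
```

Every request-response pair pays at least that round trip, and a page load usually involves many round trips (DNS, TCP, TLS, then the document itself), which is why observed cross-country latencies run several times higher than the raw physics.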
So how do you reduce this distance? The most effective way is to use CDNs.
CDNs place a copy of your content at the edge, close to each user’s network, so users can fetch it without crossing the whole Internet. Most CDNs, such as Cloudflare and Akamai, work this way.
When a user makes a request, the CDN picks it up, calls your origin server only if it doesn’t already have a copy, caches the response, and serves that cached copy to every subsequent user who asks for it.
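That read-through flow can be sketched in a few lines of bash. This is a toy in-memory cache, not a real CDN; the URL key and the “origin” content are stand-ins:

```shell
# Toy read-through cache mirroring the CDN flow: the first request misses and
# "calls the origin"; every later request for the same URL is served from cache.
# Requires bash 4+ for associative arrays.
declare -A cache
origin_calls=0

fetch() {
  local url="$1"
  if [[ -z "${cache[$url]+set}" ]]; then
    origin_calls=$((origin_calls + 1))   # miss: fetch from origin, then cache
    cache[$url]="content-of-$url"
  fi
  printf '%s\n' "${cache[$url]}"
}

fetch /main.js > /dev/null
fetch /main.js > /dev/null
fetch /main.js > /dev/null
echo "origin calls: $origin_calls"
# prints "origin calls: 1"
```

Three requests, one trip to the origin: that is the whole economics of a CDN, and it is why the cacheability of your responses (your Cache-Control headers) matters so much.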
Essentially, we’re doing less work: each request no longer has to cross the entire network. This is largely a matter of putting the right infrastructure in place.
Putting this infrastructure in place is hard to demo, and the details depend on where you host. Compression, for example, is supported out of the box on most platforms: some of my web applications are deployed to Netlify and return Content-Encoding: zstd in the response headers by default, without me doing anything.
To improve First Contentful Paint (FCP), you need to focus on the following: