We Love Speed 2024

Yesterday, I attended the WeLoveSpeed conference here in Nantes, one I’ve been attending for years. It’s one of my favorite conferences because it dives into something I’ve always cared about: web performance. There were two tracks, with talks in both French and English, and I tried to attend as many as I could.

Even though I’m not as hands-on with web performance as I was earlier in my career, it still interests me—everything from optimizing network connections to tweaking JavaScript, CSS, and images. Over time, the field has evolved, especially since Google rolled out Core Web Vitals and made them a ranking factor. That pushed web performance into the mainstream, beyond the tech niche it started as.

What I find wonderful about performance is that it’s one area you can always improve, and no one will ever say "This is too fast!". With other changes, like a new feature or a redesign, there will always be people complaining (subjectively) that they don't like it. Speed seems to be one of the only things everybody agrees on.

In the rest of this post, I’ll share recaps of the talks I saw. As a public speaking coach myself, I value effective storytelling and visuals that drive home the message. If I can explain the content of a talk to someone else after attending it, I know it’s stuck with me.

Some talks didn’t teach me as much, either because I was already familiar with the content or because the tips weren’t relevant to my work. None of the talks were bad, but some definitely stood out more than others.

A-B Testing

The first talk of the day was by the team at Fasterize, and they tackled a big question: Does improving web performance really lead to more revenue? We've all heard the claim that speeding up your site by 100 milliseconds boosts conversion rates by 1%. For a decade, people have repeatedly cited this statistic, but is it really accurate?

Fasterize investigated this with real-world testing on their own customers. They provide their clients with an A-B test feature: they compare two versions of a site—one optimized for speed and one that isn't. It's important that the tests run at the same time to avoid things like seasonal sales skewing the results. This way, they can measure the actual impact on conversion rates, average order value, revenue, or whatever else matters to the client.

An important part of this is understanding correlation vs. causation. It’s like saying ice cream sales and shark attacks both rise in the summer—it doesn’t mean eating ice cream causes shark bites, it just means more people go to the beach in summer.

They also mentioned that it takes a while to get clear results. For instance, if a user begins their shopping on Monday, you initiate the A-B test on Tuesday, and the customer doesn't make any purchases until Thursday, the data becomes muddled between the old and new versions of the site. You need to be patient and wait for those in-progress purchase funnels to end before you can see any actual results.

Another issue is that people frequently switch devices for the same cart. People may begin their shopping journey on their phone in the morning and conclude it on their desktop in the evening. Because they are technically two different devices, there is a risk they won't be attributed to the same A-B test group, skewing the results. Those cases need to be filtered out of the data.

Some trends take longer to appear than others. For example, Core Web Vitals can be gathered in a matter of weeks, but behavioral impacts (e.g., people adding an item to their cart) and business impacts (e.g., increase in revenue) will require months. It is also pretty useless to compare data between two given days; what is of interest is the overall statistical trend.
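
To make the "overall trend, not two given days" point concrete, here is a minimal sketch (my own illustration with made-up numbers, not Fasterize's actual methodology) of a two-proportion z-test, the kind of statistic an A-B test relies on to decide whether a difference in conversion rates is real or just noise:

```js
// Two-proportion z-test: is the conversion difference between the
// control variant (A) and the optimized variant (B) statistically
// meaningful? (All numbers below are made up for illustration.)
function zScore(conversionsA, visitorsA, conversionsB, visitorsB) {
  const rateA = conversionsA / visitorsA;
  const rateB = conversionsB / visitorsB;
  // Pooled rate under the "no real difference" hypothesis
  const pooled = (conversionsA + conversionsB) / (visitorsA + visitorsB);
  const standardError = Math.sqrt(
    pooled * (1 - pooled) * (1 / visitorsA + 1 / visitorsB)
  );
  return (rateB - rateA) / standardError;
}

// Two days of data: 3% vs 4.2% conversion looks like a clear win...
console.log(zScore(30, 1000, 42, 1000)); // ~1.4, below the usual 1.96 bar

// ...but only months of data at the same rates make the trend significant
console.log(zScore(3000, 100000, 4200, 100000)); // ~14.4, clearly significant
```

With a small sample, even a seemingly large gap can be pure noise; the same gap sustained over months is what actually proves the trend.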

They stressed that live testing with real users is the only way to get an accurate outcome. We can use synthetic tests as a guide, but they cannot accurately represent complex human behaviors. There is no way to predict what real users will do; we can only guess. Real user data gives us a true picture of how performance changes affect revenue.

When asked how to prove performance improvements have an impact without their A-B tool, Fasterize admitted that it’s tricky. But after 10 years of working with customers, they are convinced that improving performance has a positive impact (or, worst-case scenario, a neutral one) on revenue.

One key takeaway was that no single Core Web Vital will guarantee a revenue boost; it’s a mix of factors that work together. CLS is not more important than FID or LCP, per se.

It was a good talk, and it crystallized some thoughts I had on A-B tests with clear and concrete explanations.

Leboncoin

In the next talk, the Leboncoin technical team shared how they handle web performance in their company. What really stood out was that they're not only tech experts, but they also know the business inside out, with 18 years of cumulative experience. This gives them a unique perspective on what metrics really matter for their specific needs.

One big focus was on Core Web Vitals (CWV), which are key indicators of web performance, but they made it clear that not all of them are equally important depending on the context. They explained why certain metrics, like Cumulative Layout Shift (CLS), are a bigger deal for Leboncoin than others because of how their website works.

Ads, which appear within search results or in banner-like frames around the main content, are a major source of revenue for Leboncoin, but they can sometimes cause layout shifts that frustrate users. For example, a user might try to click on a category, but an ad suddenly appears, causing them to accidentally click on it instead. That's why they place a high priority on preventing layout shifts.

Maybe for other businesses and other contexts, LCP (Largest Contentful Paint) would be a more important metric to tackle. It all depends on context. The main point is that, even if all CWV are important in theory, you can still roughly rank them in order of importance based on your specific needs.

Back to the layout shift. They’ve tackled this by pre-assigning heights to ad spaces, so even if the content changes, the layout doesn’t shift. This reduces the chance of accidental clicks. But fixing this led to another problem—ad partners saw a drop in conversion rates, thinking there was something wrong with the site. In reality, it was just fewer users clicking ads by accident! This example shows how tricky it can be to balance user experience with business needs and why it’s essential to educate teams internally.
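
As an illustration of that fix, here is a minimal CSS sketch (my own, not Leboncoin's actual code; the class names and dimensions are placeholders) that reserves an ad slot's space before the creative arrives:

```css
/* Reserve the slot's box up front so nothing moves when the ad
   loads late, or doesn't load at all */
.ad-slot {
  /* Height of the tallest creative this slot is allowed to serve */
  min-height: 250px;
  /* Clip oversized creatives instead of letting them push content */
  overflow: hidden;
}

/* For responsive slots with a known creative aspect ratio */
.ad-slot--responsive {
  width: 100%;
  aspect-ratio: 300 / 250;
}
```

The trade-off is some blank space when no ad fills the slot, but that is usually far less harmful than content jumping under the user's finger.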

Their approach goes beyond just tech fixes. They emphasize making these metrics simple to understand for everyone in the company. They’ve created tools and dashboards, visible to all developers, to show the impact of different metrics. They also train developers to interpret these graphs properly. For instance, a spike in 404 errors doesn't necessarily indicate a bug: if 200 responses increased proportionally at the same time, it simply reflects a general traffic increase. Learning how to read data in context is key.

They also shared how they monitor performance using Lighthouse CI, which integrates directly into GitHub. If a Pull Request significantly degrades the CWVs, it cannot be merged until the drop is fixed. While this might seem like overkill for a small drop, it’s much better than discovering an issue after it hits production and causes major problems.
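
For reference, gating a Pull Request this way boils down to a few assertions in a Lighthouse CI configuration file. Here is a minimal sketch (my own example, not Leboncoin's actual setup; the URL and thresholds are placeholders):

```json
{
  "ci": {
    "collect": {
      "url": ["http://localhost:8080/"],
      "numberOfRuns": 3
    },
    "assert": {
      "assertions": {
        "categories:performance": ["error", { "minScore": 0.9 }],
        "cumulative-layout-shift": ["error", { "maxNumericValue": 0.1 }],
        "largest-contentful-paint": ["warn", { "maxNumericValue": 2500 }]
      }
    }
  }
}
```

Assertions at the error level fail the CI check, which is what blocks the merge until the regression is addressed.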

The talk also stressed the importance of shared knowledge and understanding across teams. Educating teams—whether it's developers, marketing, or SEO—is critical. Teams working on ads might have different goals from those optimizing for SEO, but it’s important that they all understand why certain metrics matter and how they affect the business. In that regard, they integrated graphs of CWV directly into the tool the SEO team was already using (Search Console), so they wouldn't have to go to another tool (and likely forget about it) and instead have the data right where they are most comfortable.

At the end of the day, the key takeaway is that while the specifics of which CWV matter most will differ from company to company, the methodology remains the same: figure out what’s important to your business, make the data accessible and understandable, and keep everyone aligned. Tech is only part of the solution—knowing your company and ensuring all teams are working toward the same goals is the hardest part.

Tight mode and the two-step waterfall

Another amazing talk was also probably the funniest of the day, with the speaker progressively dressing up as a clown on stage as the behavior of the browser became more and more erratic. He dove into how browsers load assets like CSS, JavaScript, and images, and how they decide what’s most important to load first.

He introduced the concept of "tight mode," which is when browsers load assets in two phases, tight mode being the first of the two. The lack of a standard makes it poorly documented and handled differently by each browser. The reason tight mode exists is that many web servers (Nginx, but especially Node.js) don’t handle multiplexing correctly in HTTP/2: they don't always send assets back in the right priority order, but all mixed up instead. So, to mitigate this, browsers had to come up with their own tricks.

He explained a variety of browser handling methods, serving as a guide for this undocumented behavior. For example, Firefox sticks to the official specification, assuming the server does everything right (which it rarely does). Meanwhile, Chrome and Safari have their own ways of empirically guessing what’s important and loading those assets first.

The main principle of "tight mode" is that everything important in the head should be fetched before anything from the body. We first download JavaScript from the head (I think with a limit of 2 requests in parallel), and only then proceed to download the body. Chrome takes a more aggressive approach, also loading the first five images from the body while still downloading elements from the head. The reasoning is that one of the primary images might be causing a layout shift and might be among those first five, so it's better to download it preemptively. Safari, on the other hand, doesn't implement this but has a different behavior when it comes to scripts in the head or marked as async (or something similar; I don't exactly remember the specifics).
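
To visualize the two phases, here is a rough annotated sketch (my own illustration; as the talk stressed, the exact heuristics are undocumented and differ between browsers):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- Phase 1 ("tight mode"): resources in the head are fetched
       first, with scripts reportedly limited to ~2 in parallel -->
  <link rel="stylesheet" href="app.css">
  <script src="framework.js"></script>
  <script src="app.js"></script>
</head>
<body>
  <!-- Phase 2: body resources normally wait until the head is done.
       Chrome is the exception: it also fetches the first ~5 body
       images during phase 1, betting one of them is a primary image -->
  <img src="hero.jpg" alt="Hero">
  <img src="thumbnail.jpg" alt="Thumbnail">
  <script src="widget.js"></script>
</body>
</html>
```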

The big takeaway? The same webpage will load differently depending on which browser is viewing it. The waterfall you get when loading a site varies so much between browsers that trying to optimize it for one could end up hurting performance in another. The speaker summed it up in a humorous but kind of depressing way—there’s no perfect solution, and every browser does its own thing. Trying to optimize it yourself will probably do more harm than good, so for now... well, it's the way it is.

Despite the complex and somewhat frustrating topic, the speaker made it really entertaining. It was a fantastic talk, both in terms of the content and the delivery.

Font best practices

I also watched a really clever presentation about fonts, playing on the French word police, which means both "font" and "the police." Throughout the talk, the speaker used humorous images with police officers to explain how fonts work on the web, giving lots of real-world examples.

He provided several valuable tips, such as subsetting fonts, which involves removing extraneous characters (glyphs) and loading only the ones necessary for the selected language. He also talked about choosing fallback fonts that are the same size as the main font to avoid layout shifts when the fonts swap. Another smart idea he shared was to use variable fonts, so you don't have to load separate files for bold or italic versions.

One more practical tip: load your fonts from the same server as your main site. This avoids extra DNS lookups and SSL handshakes, which can slow things down.
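
Putting those tips together, here is a minimal @font-face sketch (my own illustration; the font name, URL, and metric overrides are placeholders you would tune for your actual font):

```css
/* Self-hosted, subset, variable font */
@font-face {
  font-family: "BrandFont";
  src: url("/fonts/brand-latin.woff2") format("woff2");
  unicode-range: U+0000-00FF; /* only fetched when Latin glyphs are used */
  font-weight: 100 900;       /* one variable file covers all weights */
  font-display: swap;         /* show the fallback font immediately */
}

/* Fallback tuned to match the main font's metrics, so the swap
   doesn't cause a layout shift */
@font-face {
  font-family: "BrandFont-fallback";
  src: local("Arial");
  size-adjust: 105%;    /* match the main font's average glyph width */
  ascent-override: 90%; /* match its vertical metrics */
}

body {
  font-family: "BrandFont", "BrandFont-fallback", sans-serif;
}
```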

He packed his talk with helpful tools and advice on optimizing fonts, all with a humorous police theme running through it. It was a super informative and specific talk, perfect for anyone wanting to boost font performance. He kept it short but impactful!

Perception of time

The focus of this presentation (slides) was on how our brains subjectively perceive waiting time, whereas most of the other talks were about objectively calculating it through tech tools. The idea is that whether something feels fast or slow is extremely subjective and depends on a lot of external factors, like our age, our sex, our heart rate, and many other things that could be happening at the same time around us.

The speaker aimed to highlight how these factors can influence our perception of time, using various examples to illustrate the concept. One was about an airport where passengers had to wait 15 minutes for their luggage and complained it was too long. To address this, the airport hired more staff to bring the bags faster, reducing the wait time to 7 minutes. However, passengers still found this too long. So instead of hiring even more staff, the airport made the planes land at the opposite end of the airport. Passengers now had to walk 7 minutes to reach the baggage claim, which made the wait seem shorter: they were busy walking rather than standing in line. Complaints dropped after that.

She also talked about the common placement of mirrors in elevators and queuing areas. Mirrors keep people engaged because they allow them to see themselves, which can make the wait feel shorter.

Another point she made was about the effect of heart rate on time perception. A faster heart rate can make time seem to pass more quickly, while a calmer heart rate can make it seem to pass more slowly.

As people age, their perception of time can speed up because they experience fewer new things. For children, each day is filled with new experiences, making time feel longer. For adults, days often feel repetitive, and there are fewer new experiences to create lasting memories. This can make time seem to pass more quickly as one grows older.

She also mentioned that we remember the end of things better than the beginning or middle. She used a medical procedure as an example, where study participants were asked to rate their level of pain during the procedure. One patient's procedure lasted 10 minutes and ended on a sharp peak of pain; another's lasted 25 minutes, with the pain diminishing towards the end. The patient with the longer procedure remembered the experience more favorably, because it didn't end on a high note of pain.

The speaker related this to web performance, suggesting that even if a site is slow, ensuring that the final steps are quick can leave users with a better impression. This means that even if the initial load is slow, a faster final experience can make the overall perception more positive.

I'm not sure exactly what the main message of the talk was (except that time is relative), but there were so many examples that it was very enjoyable to listen to (also, the storytelling was great).

DevTools deep dive

I also attended a talk on how to better use DevTools. The talk didn't rank among my favorites, primarily because it didn't align with my interests: I don't spend enough time in the DevTools in my daily routine to really make effective use of the advice.

I also found that the talk lacked storytelling and instead felt more like a detailed list of various features. As I couldn't find the narrative thread connecting it all together, it felt like a series of disconnected points. Still, he showed how to override HTTP headers, modify HTML with local copies, and even "fake" whole URLs directly from the DevTools, which I was impressed with and would definitely use.

Lighthouse CI

I also attended a talk called "Web Performance Testing." As the title seemed quite generic, I checked the description to understand the focus. It appeared to be about Continuous Integration (CI), which piqued my interest because this is another of my fields of interest. Leboncoin had already mentioned using Lighthouse CI in their workflow, so I wanted to learn more about it.

Unfortunately, I found her talk less engaging. It primarily covered Lighthouse and Lighthouse CI, including a brief explanation of Git and GitHub. She discussed how Lighthouse works, how to set up Lighthouse CI, and how to configure it with YAML or JSON files. While this information is useful, I felt I could have acquired the same details by reading the documentation in 25 minutes rather than attending the talk.

I was hoping for deeper insights on best practices for using Lighthouse CI. I would have been interested in learning more about how to use Lighthouse CI effectively in real-world scenarios, including best practices, pitfalls, and limitations. For instance, what is the ideal number of runs required to obtain stable data? Is it necessary to warm it up? Do you run two versions (one for non-regression, based on the current values, and one for improvement, with slightly more aggressive values)?

I would have appreciated learning about her personal experiences, challenges, and practical advice for optimizing Lighthouse CI. The talk focused too much on installation and basic setup, lacking the in-depth, actionable insights I was looking for.

Conclusion

In conclusion, I had a wonderful day at WeLoveSpeed. Many of the talks taught me something new, and I got to keep chatting with people in the hallways: people I had met previously, new faces, as well as organizers and sponsors. I had a blast, and see you next year!


Tags : #webperf

Want to add something ? Feel free to get in touch on Twitter : @pixelastic