This guide is closer to book length than article length (29,000+ words). Why so long? Because I wanted to cover everything you need to know about page speed optimization when working with a WordPress site. And, there is a LOT to know. My goal is that this one long guide will be all you will need. So, you may want to bookmark this and read it in chunks. How did I do? Let me know in the comments.
Introduction
You don’t want visitors to your site to become frustrated because it takes too long to load. You also want to please Google and the other search engines; having a fast, responsive, and optimized WordPress site can help in that regard. There are many tools that can help and I will discuss many of them. Special emphasis will be given to Google’s PageSpeed Insights (PSI).
When I switched this site to WordPress I had to learn a lot about the platform. I also had to choose a theme, find useful plugins, and learn how to optimize WordPress for speed and performance. Years ago I attempted to optimize my site with W3TC, but although helpful I never got stellar results. That is because I focused on only one tool without understanding all the factors involved in optimizing a WordPress site. As I learned more about how WordPress works, I decided that scoring a perfect 100 on Google PageSpeed Insights would be an interesting challenge. Well, I did it and now I want to share what I learned with you.
If you are reading this, you have probably heard about and maybe already used Google’s PageSpeed Insights tool. If so, you are probably frustrated by the warnings it gives and are looking for ways to get rid of them and increase your score.
Regardless of your knowledge level, this guide will help you understand what’s involved in optimizing a WordPress site’s speed and performance. I will also show you ways to improve your site enough to ace Google’s demanding test.
In This Article
What is the Goal?
Before starting, I think we should stop and ask ourselves a fundamental question: what exactly are we trying to accomplish? Is the goal simply to please Google and hopefully reap SEO rewards for doing so? Do you have a personal mission to score a perfect PageSpeed Insights result at all costs? Is the goal to offer the very best user experience possible? It is likely a combination of all these things. A great PageSpeed Insights score should do the job, but sometimes we must pit performance against usability and functionality. Knowing where and how to draw the line for these trade-offs will depend on your (and your audience’s) preferences.
Can You Really Score 100 on Google’s PageSpeed Insights Test?
If you read articles on the topic, you will notice that not many people have actually managed a perfect PageSpeed Insights score. Some even claim it is impossible. Well, it is possible and I did it with three sites.
Of course, you should be skeptical of my claims, so here are a few screenshots (from an older version of PSI) by way of proof. My Dado Que website:
My MBA Boost website:
This Lengthy Travel website:
I achieved the perfect scores without any special server setup or technical wizardry. WordPress-optimized hosting companies do exist but I was not using one when I first wrote this guide.
A Content Delivery Network (CDN) can be very helpful and I use a free one on this site but not on the other two sites. Speaking of free, all of the optimization changes I use are completely free.
Finally, lest you think I started with a high score and didn’t have far to go, that is not true. My initial PageSpeed Insights scores ranged from 58-59 for mobile and 54-74 for desktop.
Beware What You Read Online
This is probably not the first article on the topic you have read. While reading experiences and tips from others is a good thing, I advise you to be wary of what you read. Here’s why.
First, each site is unique so it is quite hard to get an apples to apples comparison or prescription. Think a plugin sounds great? You will find people who rave about it and people who claim it killed their site.
Second, many articles fail to give sufficient details about how and what they tested. Take load time as an example. Which test did they use and from what geographical locations? In other words, where are the host server and the testing server located? Did they list an averaged time or a single (best) measure? What plugins or other tools did they use? What day of the week and what time of the day? Which page did they test and what content did they include or exclude? If they used a plugin, what settings did they use? There aren’t right or wrong answers to these questions. But, without knowing them you are not getting a complete picture and it is difficult to compare to the specific situation you are facing or replicate the suggestions offered.
Third, what results are they showing? I recently read a well-written account of someone’s PageSpeed Insights efforts that only showed the desktop result. In the comments someone asked about it and learned that mobile actually scored quite poorly.
Finally, what tools and techniques did they test? There are many to choose from and it is almost impossible to test them all. It is unwise to say a plugin is the best solution if you haven’t compared it to alternatives. Yet, many folks do exactly that.
None of the above means to imply that you shouldn’t read other articles about optimization. They often are instructive and, while not every tool is tested and compared, after enough reading you have a good idea of which tools to try.
Testing Tools
“Certain people fall prey to the idea that pages with HIGH Page Speed scores should be faster than pages with LOW scores. There are probably various causes to this misconception, from the name—to how certain individuals or companies sell products or services.” – Catchpoint Systems
Besides knowing which optimization tools to test, we need to know which tests to use. This guide focuses on Google’s PageSpeed Insights test, which may be the main reason you are reading this. But other useful tests exist with different features and benefits, and I will list my favorites.
Different Types of Tests
Testing your WordPress site involves four main types of tests.
First, speed benchmarking services test the actual speed or loading time of your site. Sometimes you can specify the geographical location and sometimes get averages over various attempts and distances. Because of these differences, results will often vary—sometimes significantly—when using multiple tests.
The second type of test provides a grade score (either numeric or alphabetical). These tests generally measure the quality and/or efficiency of the design and coding of your site. This is important because there is a correlation between that score and the actual speed of your site. Still, having a better grade than another site does not guarantee that your site is actually faster and the discrepancy can often be quite large. So, while grading tools are useful for giving you an idea of what you can do to get lower loading times, by themselves they are rarely sufficient to tell you about actual speed performance.
Of course, some tests cover both speed and performance. Helpful features to look for include: number of requests and page size; archived results (so you can go back and look at older results as a baseline); a waterfall showing how long each individual request on the page took; a display of file sizes, load times and other details about every element of a page; the ability to assess the performance of your website from different locations in the world; resource diagrams; a breakdown of your CSS and JS usage (inline vs. external, size); and a measure of how your site compares with the countless others previously tested.
The third type of test—a load test—attempts to evaluate how your site will respond under heavy use. It typically does this by having one or more computers in one or more locations emulate multiple users trying to use your site over limited periods of time. For these tests, the quality and nature of your hosting provider will often be more important than your specific WordPress site configuration. Load testing services are usually not free. They are important but, unfortunately, beyond the scope of this article. Still, if you are interested in using them, some popular considerations include blitz.io, loader.io, Loadstorm, Siege and ApacheBench (ab).
Finally, there are specialty tests that focus on specific elements that affect performance and there are tools you can use from your computer or browser. I will list useful specialty tests after I list the popular primary tests.
PageSpeed Insights and Other Google Tests
PageSpeed Insights (PSI)
PSI is a test that you cannot ignore and its results are often an eye opener. PSI results also offer a great starting point if you are just beginning to optimize your site. It is also the only test I am aware of that provides separate mobile and desktop performance scores and tips. This is especially useful because functionality and usability are often different for the two and therefore so is—potentially—performance.
In 2018 Google made big changes to PageSpeed Insights. Specifically, it switched to using Lighthouse for its lab data. Lighthouse simulates a page load on a mobile network with a mid-tier device, so its results can differ significantly from those of older versions of PSI. In addition to the performance results that PSI always provided, Lighthouse adds tests for accessibility, progressive web apps, SEO, and best practices.
Google’s Measure tool
PageSpeed Insights only shows Lighthouse’s performance assessment of your site. You can, however, use Google’s Measure tool to see the rest of the Lighthouse assessments.
Chrome User Experience Report (CrUX)
While lab data is useful, PageSpeed Insights also integrates field data from the Chrome User Experience Report (CrUX). CrUX is based on a set of historical stats about how a specific page has performed in the real world. It uses anonymized performance data from real users who visited the page from different devices and network conditions. These CrUX results are usually only available for popular websites, however.
Lighthouse in Chrome
In addition to powering PageSpeed Insights, Lighthouse can be run directly from the Chrome browser. Click the three dots at the top right, choose the “More tools” option and then “Developer tools.” Next, click on the “Audits” option at the top and you’ll see the Lighthouse logo with a “Generate report” button. Alternatively, use the Lighthouse Chrome Extension.
PageSpeed Insights Chrome Extension
The Google PageSpeed Insights Extension for Chrome is great for several reasons. First, you can use it to test a development site, especially one not accessible on the Internet (check out my tips for creating a development site with XAMPP). Second, you don’t have to visit the PSI website, which can be slow and requires 30 seconds between tests. It also makes testing many pages much easier than doing so via the website. Finally, it produces a nice layout of results with detailed breakdowns of all the issues covered.
Google Mobile-Friendly Test
Mobile-Friendly Test analyzes exactly how mobile-friendly your site is, and focuses on elements beyond speed as well. Google’s Search Console also offers a mobile usability report.
Other Popular Page Speed Tests
I think most of us tend to work through a progression on the page speed optimization learning curve. The best way to do that is to get into the result details provided by the alternatives to PageSpeed Insights. Naturally, there is a lot of overlap but each of the tools below provides at least one test or feature that distinguishes it from the rest. If you are a real page speed warrior, maximizing your scores for all of the tests will test your mettle.
GTmetrix
The big selling point for GTmetrix is that it performs two tests: PageSpeed and YSlow. YSlow almost always gives a worse performance score because it tests some things most other tests don’t, especially the use of a CDN. You cannot specify a geographical region, but it will report what region (and browser) it used. I also like how this service presents results, giving each individual result a score out of 100. Results also get color-coded and sorted so you can easily spot the biggest problem areas. Click any recommendation and you will get more details and a link to an article explaining the test.
Pingdom
Pingdom is very popular and lets you test from three different geographic locations. It also seems to generate warnings that other tests don’t. Its one-page display of results is nice; each test result gets a color, letter grade and score out of 100. Finally, it tells you what percentage of sites yours is faster than. One thing I don’t like about it is the lack of detail on any individual warning.
WebPageTest
WebPageTest is another good testing service. One thing that sets it apart is that it tests your site three times instead of just once. You can also choose the geographical location, browser type and even the connection type/speed. Basic results are given a letter grade for six categories, including use of a CDN. Detailed results are also available, but you have to click through to see those on separate pages. FYI, Moz has a useful guide to Web performance using WebPageTest.
Varvy
Varvy offers two testing tools: Page speed optimization and SEO (which also includes a page speed component). I don’t think the page speed test actually provides anything of value that the other tests don’t, but the SEO test (which is the test available on the home page) is good for testing compliance with a lot of Google guidelines (some speed related, some SEO). The site generally provides excellent articles explaining each test.
More Page Speed Test Services
The online testing services I listed above are the most popular and the ones I primarily use. There are others, however, that might be worth considering. Here are the ones I have read good things about but not really used enough to recommend for or against.
DareBoost
DareBoost. In older comments below, Damien from DareBoost recommended his tool. He claims it offers some checks other tools don’t, including a speed index, checking for duplicate scripts, checking performance best practices in your jQuery code, performance timings (not sure what he means by that), SEO, compliance, security, etc. You can do five tests without registration, though registering is necessary for some features. You also have to share via social media to download a PDF of the generated report. Personally, I don’t find the test adds much value in terms of speed issues, but the very extensive list of specific SEO, security, accessibility and quality issues definitely makes it worth adding to your suite of testing tools.
GeekFlare Website Performance Audit
GeekFlare’s Website Performance Audit produces results similar to PageSpeed Insights. They may even be reproducing PSI (Lighthouse) results directly, though I did recognize some tests that seem to be independent. Regardless, they cover more than just speed measurements and the presentation of results is quite good.
PageSpeedGrader
PageSpeedGrader is a pretty simple test that shows you an overall grade out of 100. The core results page shows only the recommendations for improvement, with separate tabs for successful tests, a timeline, and the requests made to load the page.
PageSpeed Insights Monitoring
Most of the page speed testing sites are free but also offer paid plans to regularly monitor your site. I’m a bit cheap for that so I wondered if there is some way to do my own regular testing. Read about the three different automated PageSpeed Insights testing solutions I found and/or created.
Server Response Codes and Headers
When you visit a website, you see the content on the page, but your browser sees extra information. This information is found in HTTP header fields. Mostly these are set by your server, but sometimes you can set additional ones via an .htaccess file or with PHP code, though that gets into some pretty advanced territory.
Generally, I wouldn’t worry much about changing your site’s headers, but it is instructive to know what they are and there are some useful tools for that, including REDbot, HTTP Status Code Checker, and HTTP / HTTPS Header Check, with REDbot being the most useful. It will point out common problems and suggest improvements. Although it is not a HTTP conformance tester, it can find a number of HTTP-related issues. REDbot interacts with the resource at the provided URL to check for a large number of common HTTP problems, including: invalid syntax in headers, ill-formed messages (e.g., bad chunking, incorrect content-length), incorrect GZIP encoding, and missing headers. Additionally, it will tell how well your resource supports a number of HTTP features.
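If you ever do need to add or adjust a header yourself, a few lines of PHP are enough. Here is a minimal sketch, not something PageSpeed Insights requires; the function name and the header shown are placeholders for illustration only.

```php
// A minimal sketch: WordPress fires the 'send_headers' action just before
// output begins, so a plain PHP header() call placed there does the job.
function my_extra_headers() {
	header( 'X-Content-Type-Options: nosniff' ); // illustrative header only
}
add_action( 'send_headers', 'my_extra_headers' );
```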
Other Specialized Tools and Tests
Some other specialized tools can help as you slog your way through your efforts, including:
- Page Weight is a free tool provided by ImgIX to drive sales of its online cloud image optimization service. The results for my limited testing were impressive. All you do is paste a URL into the site and it will analyze the overall weight of the page and what percentage is due to images. If you can effectively compress any of those images, it will indicate that. You can even download a compressed copy of the worst offender.
- The WAVE Web Accessibility Tool will tell you how well people with physical impairments can use your site.
- MX Toolbox tests lots of different things, including DNS and mail server setup, presence on spam blacklists, etc.
- GiftOfSpeed, Check GZIP Compression, GIDZipTest, and REDbot all let you check if you have gzip compression working. REDbot also checks browser caching.
- Geekflare has an awesome collection of free tools you can use to test and troubleshoot things on your website. For example, the TTFB tool is simple, quick, and lets you see how fast (low) your time to first byte is from three locations around the globe.
- last-modified.com tests to see if your server is properly setting the If-Modified-Since header.
- KeyCDN’s Brotli Test page checks if your site has Brotli compression enabled.
- Learn to love Chrome’s Developer Tools. I already mentioned how useful it is for finding potential JavaScript problems. The Inspect Element feature (which is also available in Firefox) is great for working on CSS issues. There is also a Network section that will show you where your site is using time/resources. This can help point to possible ways you can improve. And, if you use the recommended PageSpeed Insights or Lighthouse extensions, you access them from a Developer Tools menu.
- The HTTP/2 and SPDY indicator extension for Chrome shows a lightning icon in the address bar. It’s blue if the page supports HTTP/2, green if the page supports SPDY, and gray if neither is available. If enabled, you can click on the icon to get a variety of detailed information about the connections.
- Varvy’s JavaScript usage tool examines how a page uses JavaScript. Its CSS delivery tool gives you an overview of how your page uses CSS.
- HTTP Status Code Checker and HTTP / HTTPS Header Check check status codes and HTTP response headers that the web server returns when requesting a URL.
Testing Thoughts and Tips
Page Speed Testing = Fuzzy
Regardless of which test(s) you decide to use, you should consider any results as a bit fuzzy. What I mean is that you can run certain tests several times and get slightly different results. This can relate to how busy your server was when the test ran, something specific to the testing server, etc. Don’t worry about it too much. The key is to look for large changes and the appearance or disappearance of specific warnings. Just run each test multiple times if you suspect something strange has happened.
Discrepancy between Speed and Performance
It is possible to have a high performance score and slow site speed and vice versa. This is especially noticeable if you employ a caching solution but don’t take any other steps to improve the performance items generally tested for. Alternatively, you could have a well-optimized site but one with large page size and/or poor server performance. The tools mentioned above that test both speed and performance are useful for seeing this discrepancy if it exists.
Testing Times and Locations
When possible, try both geographically close and distant test locations to see the load time differences for your site. You should also test more than once because multiple tests can show different loading time results. You also might want to try testing at a different time as perhaps the fluctuation is due to server load.
Testing Without Ads
Google AdSense hurts your PageSpeed Insights scores, so you will want a way to test your site with ads disabled. Fortunately, some ad plugins provide an easy way to not display ads on a per-page basis or based on some filtering criteria. I use and highly recommend Ad Inserter, which offers a variety of filter options to accomplish this. With Ad Inserter installed, there are two ideas worth considering:
- Create a page and/or post to use just for testing purposes. For my sites, I created a page called noads. I used a Lorem Ipsum text filler. A test page is also very helpful if you have multiple websites you are trying to optimize as you can sort of compare apples to apples.
- Use the query string filter to block ads on any existing page. So, for example, you can filter with a query string of ‘noads’ (e.g., domain.com/?noads=true). This is a great idea because it allows you to easily test any page on your site. There are two potential downsides, however. First, the presence of query strings is actually a warning on some of the testing sites (especially Pingdom). Second, your caching solution may not support caching URLs with query strings or may have the option disabled. If the latter, enable it, at least while you test.
Testing with Caching
If you are making iterative site changes some page speed test sites may not notice those changes due to caching. For plugin-based caching, you can purge the cache, either just for the page you are testing or for the entire site, depending on your caching plugin. If you have server-based caching, you may have no recourse but to wait until a later time/date to re-test or to test a different page. If you are using Cloudflare, don’t forget to purge its cache and put it into development mode.
PageSpeed Insights Caching
Google caches its PageSpeed Insights results for several minutes. This can become a small gotcha when testing the effectiveness of your optimization changes. Just be sure to wait a few minutes for Google to clear its cache between testing your pre- and post-change site.
Understanding Speed Factors
Before starting with actual optimization work, let’s understand at a high level what factors actually affect site speed. The list of factors that come into play is long and probably still misses some, but here are things to consider:
- Type of server: WordPress optimized, dedicated, VPS, shared, cloud, SSD or disk storage, amount of RAM, etc.
- Type of server OS and web server software (Linux vs. Windows; Nginx vs. Apache vs. LiteSpeed)
- Use of server-based caching and acceleration tools like Memcached, OpCache, nginx caching, and LiteSpeed Cache.
- Use of plugin-based caching like W3 Total Cache, WP Fastest Cache, WP Rocket, and WP Super Cache.
- HTTP vs. HTTPS
- HTTP/2
- Nature of your permalink structure (flat vs deeply nested)
- Amount of dynamic content and/or database (DB) queries required by your site. This is a function of the theme and plugins you use as well as WordPress itself.
- Images: number, size and whether they are optimized or not
- The use of video or other rich media
- Overall size of your pages (HTML, CSS, JS, fonts, images)
- Different types of pages. Your score will vary from page to page for various reasons, including the fact that certain pages use plugins that other pages don’t use, some pages are more image heavy, etc. One big gotcha to beware of involves pages—like e-commerce pages—that you exclude from caching.
- Using (or not) a content delivery network (CDN)
- DNS configuration
- Use of redirects (usually 301, which are legitimate and should not be eliminated without good reason, but can slow down your site)
- Theme issues. The theme you choose will impact your site’s performance. If it is well written (uses few or no external scripts, uses only optimized images, minimizes database queries, etc.) you should have no real problems. WordPress makes it easy to add and switch themes, so if you are wondering how yours performs, test it against some others.
One reason WordPress is so popular and so powerful is that it is modular and flexible, thanks to plugins. Unfortunately, the price of this flexibility is often a slower site.
I use almost 50 different plugins on my site! That may be a lot and perhaps I could get by with fewer, but what impact does having this many plugins actually have? While many will admonish against using too many plugins, they are not bad per se. Where they can cause problems is when they (1) are poorly coded, (2) reference external scripts (CSS, JS), or (3) make extensive DB requests.
Of course, when choosing a plugin it is always a good idea to ask yourself if you really need the functionality it provides and what impact it might have on your site’s performance. If you add a new plugin after optimizing your site, be sure to re-run your performance tests to see what impact, if any, it had.
There are also ways you can modify your theme’s functions file to remove any negative effect a plugin might have on your site’s performance. I will illustrate a few of these later but you may also want to read the useful article, “How WordPress Plugins Affect Your Site’s Load Time.”
My biggest recommendation is that you choose your social sharing plugin very carefully. These can be particularly important to your site speed and performance scores.
Note: If a plugin adds design elements to your page, make sure it is responsive for mobile screens since it makes no sense to have a responsive theme that includes non-responsive content added by a plugin.
Web Hosting and PageSpeed Insights
The quality of your hosting setup—notably the server configuration and the traffic load it faces—can significantly impact the “Reduce server response times (TTFB)” PageSpeed Insights audit.
I have previously achieved a perfect PageSpeed Insights score with a shared server running Apache, but that is not easy to do. Since I first wrote this guide I have moved my sites to new hosting providers. I am still using shared servers, but my new providers have faster operating systems, better server-level caching, and stricter resource limits on accounts (making for a “safer” server neighborhood).
My sites still fail the TTFB test frequently (it’s hit or miss), which is annoying. I am mostly happy with both my current providers, SiteGround and TMDHosting, and recommend either of them. I am thinking of switching one or both of them since their renewal is due soon and I am curious to test out some competitors. If I do, I will update the status of their performance, especially in terms of TTFB.
Social Sharing Plugins Hurt Your PageSpeed Insights Score
Frank Goossen discussed how social sharing can significantly slow down your website in a post back in 2012. I am sure the various plugins he considered have changed since then and many more were never tested, but the point remains true today: social sharing widgets slow page loading and rendering because they usually include third-party tracking for behavioral marketing purposes. Ultimately, you have to decide how important having social sharing buttons is to you.
Assuming you want social sharing, there are options that won’t kill your page speed, but they likely won’t look the best or have all the bells and whistles. For what it’s worth, I have long used Simple Share Buttons Adder. I don’t recommend it since it was acquired by ShareThis and they added a lot of tracking and other scripts. Now it is incredibly bloated (read the countless bad reviews since the acquisition). I dequeue all those annoying new scripts so I am still using an older version of it, but you should probably look elsewhere.
Page Speed Optimization Categories
Now that we understand what kinds of things affect page speed, let’s consider how to address them. Later, I will talk specifics but now I am thinking big picture. One useful framework, known by the acronym PRPL, describes four categories of optimization efforts:
- Push (or preload) the most important resources
- Render the initial above the fold content (the portion of the webpage that is visible without scrolling) as soon as possible
- Pre-cache remaining assets
- Lazy load other resources and non-critical assets
Getting a little more specific, but not in the weeds yet, the following are common types of page speed optimizations:
- Caching (server and client)
- Optimizing and compressing images
- Minifying and compressing code
- Combining (or splitting) JavaScript and CSS files
- Deferring scripts that aren’t needed above the fold
- Inlining code that is needed above the fold
- Geographically distributing content and resources closer to the user (CDN)
- Self-hosting third-party scripts
- Preloading scripts needed above the fold
- Adaptive serving of resources based on the user’s device or network conditions
A CDN is a network of servers around the world which cache a copy of the static parts of your site and serve these to your site’s visitors from the server closest to their location, thus speeding up page loading time. Most CDNs—like the popular StackPath (formerly MaxCDN) and Amazon CloudFront—are paid products, but CloudFlare offers a good free plan. Some caching plugins only play nice with certain CDNs and some might not play nice with any. Be sure to read the documentation for both plugin and CDN.
If your cache plugin and your CDN both offer extra features, like minification, make sure to only use one. As with my advice for caching, I recommend you test a CDN last.
Be aware that some tests—notably WebPagetest and GTmetrix—account for the use of a CDN and some—notably Google’s PageSpeed Insights—don’t. Interestingly, a CDN can potentially make your site marginally slower on a “local” level but make it much faster when tested around the world.
A Methodology for Testing Optimization Plugins
Site optimization is a real challenge and there is no magic formula. The nature of your “solution” will depend on the speed factors identified above, the different tools and services you choose to use, and whether or not you can do some light or heavy lifting (via custom coding) yourself. Your goals will also play an important role. Are you looking for a simple solution (say, via one or two plugins) or are you comfortable mixing and matching tools? What score are you willing to accept?
I have found a great combination of manual tweaks and plugins and I will share those with you. But, here’s the thing. Your site is completely different than mine or anyone else’s. So, your challenges will be different as well. And, frankly, there are a lot of tools to choose from. The fact that I am not using a tool means fairly little—it might be a perfect solution for your site. Because of this, the best strategy is to develop a structured method for trying and comparing different tools. My recommended methodology is to first test available plugins and then make manual changes. Here is a step-by-step breakdown of the process.
Step 0: Backup Your Site and Use a Development Platform
Before starting, make sure your site is exactly the way you want it. That means configuring and deploying every plugin, every theme design tweak, every external service. Any change you make to your site can and will affect your optimization results. Thus, there is no point in testing before you are really ready.
Some of the plugins and changes you are going to investigate have the potential to screw up your site. Having a backup that can restore your site to its original, working state is a must. There are a lot of good backup plugins but my favorite is UpdraftPlus Backup and Restoration.
While having a backup is a must, the safest way to experiment with your site is with a development version. This requires a bit of effort and, if you install one on your personal computer you won’t be able to perform many online tests with it because they need a working URL (though using the Google PageSpeed Insights Extension for Chrome and/or Lighthouse from inside the Chrome Developer tools should work). See my guide for setting up a development version of your WordPress site.
Step 1: Create a Child Theme
The very first thing you should do—if you haven’t already—is create and activate a child theme. This might sound intimidating but it really is quite easy to do. If you plan to make any manual changes, it is absolutely necessary because every theme update will destroy any previous changes. I won’t go into details about this topic, but you can find everything you need to know via the official WordPress tutorial on how to create a child theme.
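To give you a sense of how little is involved, here is a minimal sketch of a child theme’s functions.php based on the approach in the official tutorial (the function name is a placeholder, and your child theme’s style.css still needs the standard header comment with a Template line naming the parent theme). Don’t worry about the enqueue call yet; I explain enqueuing later in this guide.

```php
<?php
// Child theme functions.php: a minimal sketch. Loading the parent theme's
// stylesheet makes the child theme look identical to the parent out of the box.
function my_child_theme_styles() {
	wp_enqueue_style( 'parent-style', get_template_directory_uri() . '/style.css' );
}
add_action( 'wp_enqueue_scripts', 'my_child_theme_styles' );
```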
Step 2: Backup Again
If anything goes wrong with your child theme you can revert to your previous backup. Assuming everything goes smoothly, make another backup so you don’t have to repeat Step 1 if you need to restore a backup again later in the testing process.
Step 3: Decide Which Tests to Use
Choose the tests you want to use of those I identified earlier. I use PageSpeed Insights, Pingdom, WebPagetest and GTmetrix.
Step 4: Create a Spreadsheet to Track Your Work
If you are comfortable with spreadsheets, create one that tracks the tests you have chosen and the tools you choose. In one column list all the tools you try (include the base case you will run in the next step). Since many tools have multiple options, you may want to consider listing different tests with different settings. Just add a new row for each. In the top row, list the scores and the possible warnings you can get from the testing tool.
As you test, use the spreadsheet’s notes feature to make any relevant comments about a plugin. It is easy to forget things in the midst of so much testing, so notes are useful.
To make things easier, feel free to use this public Google spreadsheet I created based on my own testing.
Step 5: Get a Baseline Measure
Before choosing any tools to test or making any changes to your new child theme, test your site so you can track improvement and see what problems you need to address. Add the results to your spreadsheet on the first row.
Step 6: Enable Compression and Leverage Browser Caching
Two warnings you will most likely see in your base case test are “Enable text compression” and “Serve static assets with an efficient cache policy.” Some plugins can fix these errors, but you can also fix them easily by manually updating your .htaccess file. Simply add the following lines to the top:
```apache
################### Activate compression ######
AddOutputFilterByType DEFLATE text/plain
AddOutputFilterByType DEFLATE text/html
AddOutputFilterByType DEFLATE text/xml
AddOutputFilterByType DEFLATE text/css
AddOutputFilterByType DEFLATE application/xml
AddOutputFilterByType DEFLATE application/xhtml+xml
AddOutputFilterByType DEFLATE application/rss+xml
AddOutputFilterByType DEFLATE application/javascript
AddOutputFilterByType DEFLATE application/x-javascript

################# Activate browser caching ######
ExpiresActive On
ExpiresDefault "access plus 1 month 1 days"
ExpiresByType text/html "access plus 1 month 1 days"
ExpiresByType image/gif "access plus 1 month 1 days"
ExpiresByType image/jpeg "access plus 1 month 1 days"
ExpiresByType image/png "access plus 1 month 1 days"
ExpiresByType text/css "access plus 1 month 1 days"
ExpiresByType text/javascript "access plus 1 month 1 week"
ExpiresByType application/x-javascript "access plus 1 month 1 days"
ExpiresByType text/xml "access plus 1 seconds"
```
Fixing just these two warnings will make a fairly significant improvement to your speed and performance results. Thus, the plugins that do this for you will probably seem much better than others. For this reason, I prefer the manual fix and then I consider the new scores the base case. I think this gives me a better feeling for how much improvement the different plugins I try are making.
Step 7: Choose Your Tools
Now it is time to add some optimizations. These can usually be done via plugins and/or by modifying your theme. If you have been doing research you may already have a list of things you want to try. If not, I will list many throughout this guide. In either case you will probably want to consider solutions based on the page speed optimization categories I listed earlier.
Step 8: Organize and Prioritize Your Tests
The order you test your selected tools is not too important, but as I mentioned earlier I recommend testing non-caching options first because when you use a caching plugin you need to clear the cache anytime you make a change you wish to test.
I would prioritize simpler plugins first. If a plugin does only one thing and has few options, it will be easier to test and will give you a pretty immediate idea of its effectiveness. If you are going to test more complex plugins that contain the same functionality as a simpler plugin, consider testing just that functionality separately from all options combined.
You can test various plugins without fully understanding what they do, but the more you understand the better, especially because some options are fairly safe to use and some can cause major problems for your site. With this in mind, before you test any plugin, take time to understand its functions and options. Where appropriate, test different functions and combinations thereof separately. Enter every test you plan to run on its own row in your tracking spreadsheet. Include multiple rows covering various plugin options.
It will be tempting to jump ahead and start testing multiple plugins together. Stay disciplined and avoid this temptation. After you have all your individual plugin results you can review them and see which combinations of tools might solve the most warnings possible. Track these combinations on separate rows of your spreadsheet. If you use my public template you will see I have used different colors for individual plugins and combinations.
Note that some plugins can work together and others cannot. You will have to figure this out for your situation but generally, two caching plugins won’t work together. Also, two plugins that can both alter JavaScript or CSS probably won’t work together unless you use one for JS and the other for CSS.
Step 9: Test and Track (with Tips and Warnings)
Now, run your tests and enter the results in your spreadsheet. As you run more tests, hopefully you will start to learn a lot more about your theme and the plugins you use and how they might be affecting your performance. You may even decide to drop or change some plugins. Likewise, you will start to better understand the performance factors that PageSpeed Insights and other tests consider. You may also decide that you should test combinations of plugins and options you hadn’t previously considered. Just test them and add the results to your spreadsheet as you do.
To be thorough, you should test multiple pages. A home page or blog index page will typically be different from a post. And posts with many images and videos will perform differently than basic posts. Likewise, custom pages, search results, archive pages, etc. will all potentially perform differently and you should test them separately.
It may seem overkill to test multiple URLs separately, but it is the most accurate way of testing performance as different plugins handle different types of pages in different ways. Of course, being realistic, that is a lot of work. A compromise is to test one or two pages and later in the process as you narrow down your final tool choices go back and test more pages.
Speaking of being thorough, you should check to make sure a plugin doesn’t break your site. Like testing multiple pages, this can be a pain, so focus on one or two pages and then check more thoroughly later in the process after you narrow your plugin selection. When you do check, pay special attention to any features created by plugins. For example, if you use a ratings plugin or page view tracking plugin, those functions may be broken. Ad management plugins may or may not work properly, etc. Most servers create error logs (sometimes, conveniently, in the root and/or theme folder). Find out where yours is located and see if it has any errors listed.
Here are some other things to consider as you activate plugins and run your tests.
- Google is really strict about image optimization warnings. Even if you use a great plugin like EWWW Image Optimizer to optimize all your images, PageSpeed Insights may not consider your efforts good enough. So, in your base case you will probably see an image optimization warning from the PageSpeed Insights test. Whether you want to fix these issues before continuing your tests or do so after is a personal decision. I opted for doing so later, but I think I should have done so at the beginning.
- A common responsive design technique is to change the navigation menu dynamically as the screen size changes. Usually this works just fine, but it is possible that some of the things you do to optimize speed might break it, so after making tweaks always resize your browser to make sure the responsive navigation and design still work well.
- If you have member-only content, make sure you test that separately. Take special care to make sure that any caching presents both non-member and member views properly.
Sometimes while testing a plugin you may learn something useful that leads you to implement a manual tweak. This tweak may be specific to that plugin but it also might be a change you want to make in all circumstances. In the latter case, to be thorough you should repeat the tests you have already made, this time including the new manual change.
Step 10: Choose Your Final Plugin(s)
After you finish your plugin testing you should hopefully have a combination of plugins and their respective configurations that performs best for your site. Activate this combination and remove all the plugins you decide not to use.
Step 11: Manual Tweaks (Tackling Google PageSpeed Insights Tests)
You now have your preferred combination of plugins but almost certainly you have not gotten your 100 score yet. The only option left to close the gap is to make some manual tweaks. If you are satisfied with where you are now, feel free to call it a day. If not, read on for a look at the specific tests PageSpeed Insights performs and recommended coding changes you can make to fix warnings given by them. I will provide the necessary code and instruction so I think anyone can implement these. Leave a comment if anything I discuss is not clear enough.
Optimization: Function vs. Presentation
Throughout the guide I will offer some suggested code samples (snippets) to help optimize your site. I will generally talk about placing these in your child theme’s functions.php file. That is common practice and works well, but common practice isn’t necessarily best practice.
Best practice is to use themes only to control presentation—that is, the look and feel—of a site. To make back-end changes that improve performance but don’t alter the site’s presentation, you should use a custom plugin. That way, if you ever change to a different theme, you won’t have to port those page speed modifications to it. And, by that time you may have forgotten the how, what, and why of those changes. (Hint: always comment your coding work generously.)
Creating a plugin is not difficult but is beyond the scope of this article. As an alternative, you could try the Code Snippets plugin. I haven’t used it myself but it seems to have some good features and might help you keep all your various modifications organized.
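If you do go the custom-plugin route, the skeleton is tiny. Here is a minimal sketch (the file, folder, and plugin names are placeholders): save it as something like wp-content/plugins/my-speed-tweaks/my-speed-tweaks.php, activate it from the Plugins screen, and the kinds of snippets shown later in this guide can live there instead of in functions.php.

```php
<?php
/*
Plugin Name: My Speed Tweaks
Description: Site-specific performance tweaks kept separate from the theme.
Version: 1.0
*/

// Performance-related snippets (dequeues, filters, etc.) go below this header.
```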
Stages of Page Speed Optimization
It seems that people progress in stages in their site performance optimization efforts, as follows.
- Only worry about Google’s PageSpeed Insights test and pick the low-hanging fruit (compression, browser caching, page caching).
- Still focus primarily on PageSpeed Insights but try to use extra tools and techniques to get more performance.
- Start to worry about multiple test sites, but only for the “big picture.”
- Start to notice the more detailed warnings spelled out in the various testing sites.
Finally, another key turning point is whether you focus your optimization efforts on all pages or just the home page. That shift could happen at any stage, but a lot of people focus on just one page.
Understanding WordPress: Tackling Google PageSpeed Insights with Code Changes
Most of the manual coding changes I use involve working around issues with external scripts so I will first discuss at a high level how WordPress handles these by using the concepts of enqueuing/dequeuing and action and filter hooks.
When I (and others) talk about including code from a file other than the main HTML file loaded in your browser, I will often use the words script, file and resource interchangeably. A script should really only refer to a JavaScript file but common practice across the Web has it often used to refer to a CSS file as well. Likewise, an external resource is either kind of file.
Queuing Resources
Queuing is one of the most important concepts to understand when you want to really master WordPress. I doubt that is your goal, but since it is something we need to use for many of our manual performance tweaks, let me try to explain it.
Resources
Every theme and plugin can potentially include two types of resources: styles (CSS files) and scripts (JavaScript files). The core WordPress program has some as well (notably, jQuery). This system helps make WordPress very flexible and powerful, but the problem is knowing what order to load all these resources in. In some cases, any order might do just fine, but often one resource relies on another loading before it. Another concern is making sure that the same resource doesn’t load more than once. This can happen when two plugins use the same JavaScript file, which is especially common with the widely used jQuery.
In the old days—and, unfortunately, occasionally still today—developers would just write the code needed to include a file directly in their plugin functions. When done this way, WordPress—and other plugin developers—have no control over the order or nature of loading that resource. You can imagine the trouble that would result if every theme and plugin developer did this.
Enqueue
As a solution, WordPress has declared that the proper way to load a resource is by something called enqueuing. WordPress uses this system to create a “queue” of external files. It decides where to place a script in this queue with two enqueue method specifications.
A dependency specification tells WordPress what other scripts must load before yours. A priority specification determines whether a resource loads before or after those set with the default; the default priority is 10, and a lower number runs earlier while a higher number runs later. Sometimes you can ignore both of these, but if your code requires that another resource load before one of your own, using one or both of these specifications is a good way to do that.
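To make that concrete, here is a minimal sketch of an enqueue call; the handle, file path, and version are hypothetical. The third argument is the dependency list, and the add_action call at the end is where a priority could be supplied if you needed one.

```php
// A minimal sketch: enqueue a child-theme script that depends on jQuery.
function my_enqueue_example() {
	wp_enqueue_script(
		'my-custom-js',                                    // handle
		get_stylesheet_directory_uri() . '/js/custom.js',  // source (hypothetical file)
		array( 'jquery' ),                                 // dependencies: jQuery loads first
		'1.0',                                             // version
		true                                               // load in the footer
	);
}
add_action( 'wp_enqueue_scripts', 'my_enqueue_example' );
```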
Dequeue
The WordPress developers also created a way to dequeue resources. This is crucial because it lets you “fix” problems that might occur when a plugin you are using has enqueued a resource in a way that adversely affects your site’s performance. Simply dequeue the problem resource and re-enqueue it with proper dependency or priority.
Later, we’ll see that some plugins and themes enqueue scripts on every page even though they may not be used at all or may be used only on certain pages. This is obviously not optimal and, thanks to the dequeue method we can fix this inefficiency.
Register vs. Enqueue
Technically speaking, there are two steps to enqueuing: first you register a resource then you load it. I won’t go into details about this but just know that enqueuing actually takes care of both registering and loading. The difference exists because there are some cases where you might want to register (declare) an external resource but actually load it in a different place. That is beyond the scope of this guide so don’t worry about it.
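For the curious, here is a minimal sketch of the two-step version with a hypothetical handle, file path, and shortcode: register the script up front so WordPress knows about it, then enqueue it by handle only in the place that actually needs it.

```php
// Register early so WordPress knows the handle and the file location...
function my_register_widget_script() {
	wp_register_script( 'my-widget', get_stylesheet_directory_uri() . '/js/widget.js', array(), '1.0', true );
}
add_action( 'wp_enqueue_scripts', 'my_register_widget_script' );

// ...then load it only where it is actually used (here, a shortcode callback).
function my_widget_shortcode() {
	wp_enqueue_script( 'my-widget' );
	return '<div class="my-widget"></div>';
}
add_shortcode( 'my_widget', 'my_widget_shortcode' );
```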
Handles
The last important thing to know about enqueuing in WordPress is the idea of a handle. Basically, this is just a nickname given for the script. Whenever you enqueue a script you need to specify a handle and the exact location of that script. Why both? Mostly for convenience. As I just mentioned, you can register and load a file separately. If you have already registered a script, WordPress knows its location so when you are ready to load it you can just reference the handle. Likewise, if you want to dequeue and re-enqueue a script, knowing the handle is enough.
List All Handles
So, how do we find handles used for our site? There are various ways to do this. Probably the easiest is to use a plugin like Debug This or Query Monitor. If you prefer a solution that doesn’t require a plugin, the following code—modified from code by the author of the Debug Objects plugin (which is no longer maintained)—in your child theme functions file will do the trick.
```php
// --------------------------------------------------------------------------
// --- List all relevant handles (only shown to users who can edit posts)
// --------------------------------------------------------------------------
function wp_list_enqueued( $handles = array() ) {

	global $wp_scripts, $wp_styles;
	?>
	<style>
		TABLE { width:98%; margin:0 auto; }
		TD { background:#fff; }
		.toprow { background:#707070; font-weight:bold; }
		.style { background:#ddd; }
		.script { background:#e9e9e9; }
	</style>
	<?php

	// --- only show to logged-in users who can edit posts
	if ( current_user_can( 'edit_posts' ) ) {

		$script_urls = $style_urls = array();

		// --- scripts
		foreach ( $wp_scripts->registered as $registered )
			$script_urls[ $registered->handle ] = $registered->src;

		// --- styles
		foreach ( $wp_styles->registered as $registered )
			$style_urls[ $registered->handle ] = $registered->src;

		// --- if no handles were passed in, use everything currently queued
		if ( empty( $handles ) ) {
			$handles = array_merge( $wp_scripts->queue, $wp_styles->queue );
			$handles = array_values( $handles );
		}

		// --- output of values
		$scriptnum = $stylenum = 0;
		$output  = '<table>';
		$output .= '<tr><td class="toprow">Order</td><td class="toprow">Handle</td><td class="toprow">URL</td></tr>';

		foreach ( $handles as $handle ) {
			$output .= '<tr>';
			if ( ! empty( $script_urls[ $handle ] ) ) {
				$scriptnum++;
				$output .= '<td class="script">' . $scriptnum . '</td><td class="script">' . $handle . '</td><td class="script">' . $script_urls[ $handle ] . '</td>';
			}
			if ( ! empty( $style_urls[ $handle ] ) ) {
				$stylenum++;
				$output .= '<td class="style">' . $stylenum . '</td><td class="style">' . $handle . '</td><td class="style">' . $style_urls[ $handle ] . '</td>';
			}
			$output .= '</tr>';
		}
		$output .= '</table>';

		echo $output;
	}
}
add_action( 'wp_print_footer_scripts', 'wp_list_enqueued' );
```
That will only list the handles (and file URLs) for logged-in users who can edit posts, so the whole world won’t see it. But, you still should remove or comment out this code once you have the information you need.
Finally, if you have a text editor that can search for text in multiple files (e.g., in a directory), like my favorite Notepad++, then just search for the filename you are concerned with. Somewhere you will see an enqueue command that will include the handle name.
Action and Filter Hooks
To really master WordPress, in addition to understanding enqueuing you also need to understand hooks. In fact, you cannot enqueue or dequeue a file without a hook. Again, I will just give the big picture.
According to the official WordPress site, hooks “are provided by WordPress to allow your plugin to ‘hook into’ the rest of WordPress; that is, to call functions in your plugin at specific times, and thereby set your plugin in motion.” There are two kinds of hooks: actions and filters.
Basically, an action lets you add or remove code whereas a filter lets you modify (replace) data. Sometimes either will work, but depending on what you want to accomplish one is preferred (or required).
Whether you are adding or removing code via an action or replacing data via a filter, WordPress needs to know when to execute your code. That is what the hooks are for. WordPress builds a page in multiple steps and each of these steps is a hook. The steps that probably make the most sense to you would be to first load the header of the page, then the content, then any sidebars and finally the footer, but there are actually many more hooks that developers rely on.
Add_action and Add_filter
The add_action and add_filter commands implement actions and filters, respectively. These commands use some parameters, most notably the actual hook to use, the name of the function you wish to run, and the priority. In the code I presented above for listing handles, the last line is:
```php
add_action( 'wp_print_footer_scripts', 'wp_list_enqueued' );
```
This is an example of using the add_action command. Here I am omitting the priority parameter because it is optional and not needed for what I want to accomplish. But on the topic of priority, I should clarify that the hook you use to enqueue is its own sort of priority: a high priority parameter used on an early hook still runs before a low priority parameter on a later hook. There are best practices for which hooks to use to enqueue scripts (usually the wp_enqueue_scripts, wp_print_scripts, and wp_print_styles hooks). Knowing which hooks the relevant scripts on your site use will help you make any manual enqueuing changes.
Looking again at the line of code above, wp_print_footer_scripts is the hook. This particular hook comes toward the end of the page building process (i.e., a very late step), which is what we want because we don’t want to miss any external scripts that are loaded in the footer (for reference, wp_footer is the hook immediately prior to wp_print_footer_scripts).
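For comparison, a filter looks almost identical; the difference is that your function receives a value and must return one. Here is a minimal sketch using WordPress’s excerpt_length filter (the value 20 is arbitrary):

```php
// A filter callback receives the current value and returns a replacement;
// this one shortens post excerpts to 20 words.
function my_excerpt_length( $length ) {
	return 20;
}
add_filter( 'excerpt_length', 'my_excerpt_length' );
```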
Don’t Worry If You Are a Bit Confused
Enqueuing and action/filter hooks are not easy concepts so if you got a bit lost in the last sections, don’t worry. Below, I will show you actual code changes I made that might also be useful for your site. Along the way, hopefully these concepts will start to make more sense.
Removing Unused Scripts and Styles
Themes and plugins shouldn’t include JavaScript and CSS files for unused features and functions but many do. These may or may not cause any PageSpeed Insights warnings, but to fully optimize your speed you should remove them. To do this, you will need to know the script handle(s).
For example, my theme offers a tab widget (popular posts, latest comments, etc.) that I don’t use. This widget relies on both a CSS file and a JavaScript file so I need to dequeue both. As it turns out, the handle for each is “theme_tab_widget.” I use the following code:
```php
function remove_assets() {
	// --- dequeue tab widget since I don't use it
	wp_dequeue_script( 'theme_tab_widget' );
	wp_dequeue_style( 'theme_tab_widget' );
}
add_action( 'wp_enqueue_scripts', 'remove_assets', 99999 );
```
Let’s take a quick look at what is happening here. Keep in mind the earlier discussion of action and filter hooks.
The first parameter in the add_action line is the action hook (wp_enqueue_scripts) that most theme and plugin developers use to enqueue their scripts. The second parameter is the remove_assets function, which we created in the lines above the add_action command. The final parameter is the priority number. The default priority number for WordPress is 10, so using a number higher than that will run our function after all the default enqueuing finishes. That, of course, assumes that developers used the default. Some might set their own high value (say 20 or 99). So, we use a really high number (99999) just to be safe.
Note that we use two different wp_dequeue commands: wp_dequeue_script will dequeue a JavaScript file and wp_dequeue_style will dequeue a CSS file. My theme’s tab widget uses both, but often you will only be dealing with one or the other.
My theme ended up including four features (8 total files) that I don’t use. I have only illustrated the tab widget feature, but you can just replicate the dequeue commands for other unnecessary scripts.
Limiting Scripts and Styles to Certain Pages: Contact Form 7
Contact Form 7 is probably the most popular contact form plugin for WordPress. If you are like me, you only use it on your actual contact page, but it loads its script by default on every page, which is wasteful. So, like above, let's dequeue it, adding a check so that it is not dequeued on the contact page itself. In the same remove_assets function, add the following lines:

```php
// --- dequeue Contact Form 7 script on every page except contact page
if ( !is_page("contact") ) {
    wp_dequeue_script( 'contact-form-7' );
}
```
So the function now looks like:
```php
function remove_assets() {
    // --- dequeue tab widget since I don't use it
    wp_dequeue_script( 'theme_tab_widget' );
    wp_dequeue_style( 'theme_tab_widget' );

    // --- dequeue Contact Form 7 script on every page except contact page
    if ( !is_page("contact") ) {
        wp_dequeue_script( 'contact-form-7' );
    }
}
add_action( 'wp_enqueue_scripts', 'remove_assets', 99999 );
```
The key here is you need to change “contact” to the permalink (page slug) for your contact page.
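Contact Form 7 also enqueues a stylesheet on every page. In the versions I have used, the style handle is also contact-form-7, but verify it with the handle-listing snippet from earlier; if it matches, the same conditional can drop both files:

```php
// --- dequeue Contact Form 7 assets everywhere except the contact page
// (style handle assumed to be 'contact-form-7'; confirm it on your install)
if ( !is_page("contact") ) {
    wp_dequeue_script( 'contact-form-7' );
    wp_dequeue_style( 'contact-form-7' );
}
```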
Dequeuing Tricky Scripts
One of my sites uses a pricing table plugin which was presenting me with two problems. First, it loads a JavaScript file on all pages even though I only use it on one page. Second—and I really don’t know why—this plugin’s CSS stylesheet was causing a warning from Google PageSpeed Insights.
Since the CSS involved is quite small, I decided to dequeue it and add the CSS styling to my main child stylesheet. And, the solution to the first problem is basically the same as for the Contact Form 7 script. But, in the end I was unable to dequeue either file.
After banging my head a bit I found out why. Remember my discussion of action hooks and how they form their own sort of priority? And remember I said that the wp_enqueue_scripts action hook is the one most theme and plugin developers use to enqueue their scripts? Well, apparently the developer of this plugin enqueued the files on an action hook which occurs after wp_enqueue_scripts. The end result is that I couldn't just include dequeue statements in my previous remove_assets function. Instead I needed a new function attached to a later action hook, as follows:
```php
function remove_assets_footer_scripts() {
    // --- dequeue Easy Pricing Tables stylesheet
    wp_dequeue_style( 'dh-ptp-design1' );
    // --- dequeue pricing table match height script on all but the home page
    if ( !is_front_page() ) {
        wp_dequeue_script( 'matchHeight' );
    }
}
add_action( 'wp_print_footer_scripts', 'remove_assets_footer_scripts', 0 );
```
This looks a lot like the previous code, but I have replaced wp_enqueue_scripts with wp_print_footer_scripts.
If you run into similar problems, try this action hook. Or, more generally, look at the official WordPress list of action hooks. The first section, "Actions Run During a Typical Request," lists the action hooks in order. The most common action hooks I see referenced in online discussions are (in order of precedence):

- setup_theme
- after_setup_theme
- init (typically used by plugins to initialize; the current user is already authenticated by this time)
- wp_enqueue_scripts
- wp_print_styles
- wp_print_scripts
- wp_print_footer_scripts
Localize External Scripts and Resources
Sometimes our sites rely on scripts and resources from external servers for functionality we want. Common examples are fonts, ads, analytics, social media sharing buttons, and Gravatars.
Loading resources from third-party servers reduces page speed and can trigger PageSpeed Insights warnings. The most likely problems will be with the Eliminate render-blocking resources, Ensure text remains visible during webfont load, and Serve static assets with an efficient cache policy audits.
A good solution is to make a local copy of the resource and serve it directly from our own server. There are two types of resources we need to treat differently: files that are unlikely to change and files that might change regularly. In both cases we copy the external file to our child theme and load it from there, but in the latter case we also need code to refresh the copy regularly.
Localize Google Fonts
My theme, like many others, uses a Google font for its default. Since a font is not something likely to change often (if ever), it is a good candidate for localization.
First, as with our unnecessary scripts (e.g., the tab widget) above, we need to dequeue the external reference. In the remove_assets function from before, add the following line:

```php
wp_dequeue_style( 'theme-google-font-default' );
```

replacing theme-google-font-default with whatever handle your theme is using for its Google font. Now our code looks like:

```php
function remove_assets() {
    // --- dequeue tab widget since I don't use it
    wp_dequeue_script( 'theme_tab_widget' );
    wp_dequeue_style( 'theme_tab_widget' );

    // --- dequeue Contact Form 7 script on every page except contact page
    if ( !is_page("contact") ) {
        wp_dequeue_script( 'contact-form-7' );
    }

    // --- dequeue default google font
    wp_dequeue_style( 'theme-google-font-default' );
}
add_action( 'wp_enqueue_scripts', 'remove_assets', 99999 );
```
Creating a local copy is a bit more complicated, but not so bad. First, you need to visit the Google Fonts URL called by your theme (my theme uses the Oswald font and loads it from a fonts.googleapis.com URL). If you visit that URL you will see some @font-face code including the following:

```css
/* latin-ext */
@font-face {
    font-family: 'Oswald';
    font-style: normal;
    font-weight: 400;
    src: url(https://fonts.gstatic.com/s/oswald/v31/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUhiZTaR.woff2) format('woff2');
    unicode-range: U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;
}
/* latin */
@font-face {
    font-family: 'Oswald';
    font-style: normal;
    font-weight: 400;
    src: url(https://fonts.gstatic.com/s/oswald/v31/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUZiZQ.woff2) format('woff2');
    unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
}
```
We want to add this code to our child theme stylesheet. But first, notice that two different .woff2 files are referenced in that code. Since these are also on external servers, we will get warnings for them if we leave the code as-is. So, simply visit each of those URLs and download the respective files to your child theme folder. I put mine in a folder I call "Oswald," which is inside the "fonts" folder.
Now, edit the above code to be like this:
```css
/* --- Theme's default Google font -- */
/* latin-ext */
@font-face {
    font-family: 'Oswald';
    font-style: normal;
    font-weight: 400;
    src: url(fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUhiZTaR.woff2) format('woff2');
    unicode-range: U+0100-024F, U+0259, U+1E00-1EFF, U+2020, U+20A0-20AB, U+20AD-20CF, U+2113, U+2C60-2C7F, U+A720-A7FF;
}
/* latin */
@font-face {
    font-family: 'Oswald';
    font-style: normal;
    font-weight: 400;
    src: url(fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUZiZQ.woff2) format('woff2');
    unicode-range: U+0000-00FF, U+0131, U+0152-0153, U+02BB-02BC, U+02C6, U+02DA, U+02DC, U+2000-206F, U+2074, U+20AC, U+2122, U+2191, U+2193, U+2212, U+2215, U+FEFF, U+FFFD;
}
```
That’s it. Of course, if you put them in the main folder or a folder called something other than “fonts” you will need to change that part of the above code.
Localize Font Awesome
Font Awesome is a popular tool for theme designers. The problem is that it is usually referenced from the original external source and thus will produce a warning like our Google font did.
The solution is to load this from your child theme like we did with the Google Font. But, Font Awesome is slightly more complicated than a single Google Font so you will actually need to download it as a .zip file:
https://fortawesome.github.io/Font-Awesome/
Simply unzip it to your child theme (like the Google font, I put it in my fonts folder) and then enqueue it as follows:
```php
wp_enqueue_style( 'font-awesome', get_stylesheet_directory_uri() . '/fonts/font-awesome-4.7.0/css/font-awesome.min.css' );
```
Of course, as with our Google font, we first need to dequeue and deregister the previously enqueued instance. But here is where it gets interesting. Remember I just said it is a popular tool? Well, that means your theme and one or more plugins might all load it. In my case, three plugins enqueue Font Awesome. Two use the 'font-awesome' handle, so one dequeue of that handle covers both. A third plugin, however, enqueues it with a handle called ssbp-font-awesome, so I had to dequeue that one separately.
Here is what my code looks like:
```php
// --- dequeue font-awesome loaded by Simple Share Buttons Adder plugin
wp_dequeue_style( 'ssbp-font-awesome' );
wp_deregister_style( 'ssbp-font-awesome' );

// --- dequeue font-awesome
wp_dequeue_style( 'font-awesome' );
wp_deregister_style( 'font-awesome' );

// --- enqueue local copy of font-awesome
wp_enqueue_style( 'font-awesome', get_stylesheet_directory_uri() . '/assets/fonts/font-awesome-4.7.0/css/font-awesome.min.css' );
```
I just add that to the remove_assets function from before.
That takes care of loading Font Awesome locally, but there are more things you can do to optimize Font Awesome. The easiest thing is to not load all the various webfonts formats by default.
If you look at the font-awesome.css and font-awesome.min.css files, you will see that the @font-face CSS references .eot, .woff, .woff2, .ttf, and .svg files. Why? In the answer to a question on StackExchange, Rich Bradshaw offers a good overview of these file types. His advice is that you only need one, preferably .woff2. So, simply edit the font-awesome.min.css file and remove the url(...) entries (and their trailing commas) for all but the .woff2 file.
Sharath at WebJeda offers a couple of ways to get even more Font Awesome size savings. I personally hesitate to do so because a plugin may rely on icons my theme doesn't need, so I could end up deleting something a plugin uses. But if you know no plugin relies on those icons, give it a try.
Two other font optimizations covered later in this guide also apply here: font-display: swap and preloading.
Localize Minified JavaScript Files
A couple of JavaScript files—one from my theme and one from a plugin—are not minified. That’s inefficient and will cause a PageSpeed Insights warning.
Although it sounds a bit odd, my solution is to localize these already-local scripts; that is, make a minified copy of each in my child theme (in my assets/js/ directory). I could just minify the originals, but any update of the theme or plugin would replace those files. Again, I use the remove_assets function.
```php
// --- dequeue placeholders.js loaded by theme and load my own minified copy
wp_dequeue_script( 'theme-placeholders' );
wp_deregister_script( 'theme-placeholders' );
wp_enqueue_script( 'theme-placeholders', get_stylesheet_directory_uri() . '/assets/js/placeholders.min.js' );

// --- dequeue jquery.fancybox-1.3.4.js loaded by Responsive Lightbox plugin and load my own minified copy
wp_dequeue_script( 'responsive-lightbox-fancybox' );
wp_deregister_script( 'responsive-lightbox-fancybox' );
wp_enqueue_script( 'responsive-lightbox-fancybox', get_stylesheet_directory_uri() . '/assets/js/jquery.fancybox-1.3.4.min.js' );
```
So, now my function looks like:
```php
function remove_assets() {
    // --- dequeue tab widget since I don't use it
    wp_dequeue_script( 'theme_tab_widget' );
    wp_dequeue_style( 'theme_tab_widget' );

    // --- dequeue Contact Form 7 script on every page except contact page
    if ( !is_page("contact") ) {
        wp_dequeue_script( 'contact-form-7' );
    }

    // --- dequeue default google font
    wp_dequeue_style( 'theme-google-font-default' );

    // --- dequeue font-awesome loaded by Simple Share Buttons Adder plugin
    wp_dequeue_style( 'ssbp-font-awesome' );
    wp_deregister_style( 'ssbp-font-awesome' );

    // --- dequeue font-awesome
    wp_dequeue_style( 'font-awesome' );
    wp_deregister_style( 'font-awesome' );

    // --- enqueue local copy of font-awesome
    wp_enqueue_style( 'font-awesome', get_stylesheet_directory_uri() . '/assets/fonts/font-awesome-4.7.0/css/font-awesome.min.css' );

    // --- dequeue placeholders.js loaded by theme and load my own minified copy
    wp_dequeue_script( 'theme-placeholders' );
    wp_deregister_script( 'theme-placeholders' );
    wp_enqueue_script( 'theme-placeholders', get_stylesheet_directory_uri() . '/assets/js/placeholders.min.js' );

    // --- dequeue jquery.fancybox-1.3.4.js loaded by Responsive Lightbox plugin and load my own minified copy
    wp_dequeue_script( 'responsive-lightbox-fancybox' );
    wp_deregister_script( 'responsive-lightbox-fancybox' );
    wp_enqueue_script( 'responsive-lightbox-fancybox', get_stylesheet_directory_uri() . '/assets/js/jquery.fancybox-1.3.4.min.js' );
}
add_action( 'wp_enqueue_scripts', 'remove_assets', 99999 );
```
Localize Google Analytics
Google Analytics is another external file that will cause PageSpeed Insights failures and thus is worth localizing. Unfortunately, unlike fonts, the GA file gets updated somewhat frequently. Thus, it’s not enough to just manually download a copy as we will surely forget to update it regularly. Instead, we need a way to automatically download new copies on a schedule.
As with Google fonts, when I first wrote this guide there was no plugin to do this, so I wrote my own code and run it via a cron job twice a week. I list it below, without explanation, for anyone interested. There is now a good GA localization plugin called Complete Analytics Optimization Suite (CAOS). My code has worked well for me for 5+ years so I haven't tried CAOS, but it looks excellent.
```php
// -------------------------------------------------------------------------------
// --- Localize external scripts to avoid Google PageSpeed Insights warnings
// --- Note: I am localizing two different GA files and the Google AdSense file
// -------------------------------------------------------------------------------
function localize_external_scripts () {
    // --- array of all the external files we want to localize and whether they should be minified or not (minify, orig url, filename, handle)
    $external_files   = array();
    $external_files[] = array( "nominify", "http://www.google-analytics.com/analytics.js", "analytics.js", "" );
    $external_files[] = array( "nominify", "http://www.googletagmanager.com/gtag/js?id=" . $GLOBALS['ua_id'], "gtag.js", "" );
    $external_files[] = array( "nominify", "http://pagead2.googlesyndication.com/pagead/js/adsbygoogle.js", "adsbygoogle.js", "" );

    $body = "";
    $i    = 0;
    foreach ( $external_files as $external_file ) {
        $i++;
        $url      = $external_file[1];
        $filename = get_stylesheet_directory() . "/js-external/" . $external_file[2];
        $handle   = $external_file[3];
        $body    .= "<strong>Filename</strong>: " . $external_file[2] . "<br />";

        // --- Write to local file
        clearstatcache();
        $src  = fopen( $url, 'r' ) or $body .= "File does not exist - <strong>PLEASE INVESTIGATE</strong><br>";
        $dest = fopen( $filename, 'w' );
        $bytescopied = stream_copy_to_stream( $src, $dest ) . " bytes copied.\n";
        $body .= " ... " . $bytescopied . "<br />";

        // --- Get file content
        $filecontent = file_get_contents( $filename );

        // --- replace reference to Google's server with my localized copy of analytics.js
        $filecontent = str_replace( "https://www.google-analytics.com/analytics.js", get_template_directory_uri() . "/js-external/analytics.js", $filecontent );
        $filecontent = str_replace( "//www.google-analytics.com/analytics.js", get_template_directory_uri() . "/js-external/analytics.js", $filecontent );

        // --- write the entire string
        file_put_contents( $filename, $filecontent );

        // --- Just to be safe, check that file has content
        $contents = file_get_contents( $filename );
        if ( $contents === FALSE ) $body .= " ... <strong>WARNING</strong>: couldn't read this file ... check to make sure everything is OK<br />";
        if ( strpos( $contents, "function" ) === FALSE ) {
            $body .= " ... <strong>WARNING</strong>: this file has no <code>function</code> ... check to make sure everything is OK<br />";
        } else {
            $body .= " ... This file has a <code>function</code> ... everything seems OK<br />";
        }
        if ( $i < count( $external_files ) ) $body .= "<br />";
    }
    return $body;
}
```
Daan van den Bergh, the developer of the CAOS plugin, wrote a useful article exploring the differences between analytics.js, gtag.js, and ga.js. The quick summary: ga.js is old, so don't bother using it. It was replaced by analytics.js, which mostly just does analytics and so is probably your best bet for page speed optimization. analytics.js is still active, but gtag.js is what Google recommends using now. If you want to integrate Google services besides Google Analytics, or you want to do advanced campaign or usage tracking, gtag.js is what you want. Otherwise, stick with analytics.js.

If you decide to switch to the newer gtag GA tracking code, there is one big page speed issue to be aware of. The Remarketing and Advertising Reporting Features provide demographic information on your site's visitors. This feature is not enabled by default, but if you enable it (in the Tracking Code -> Data Collection section of GA), you will get a Minimize Redirects warning from Pingdom.
As far as I can tell, there is no way to keep this feature and eliminate the warning. So, your choices are: accept a redirect chain penalty or stop hosting the GA script locally and accept a Leverage Browser Caching error.
Breaking Down PageSpeed Insights Tests
Now let’s turn our attention to code modifications you can make to bump up your PageSpeed Insights score. To do so, let’s take a look at what PSI (Lighthouse) cares about and discuss solutions for each.
In addition to presenting your overall performance score, the PageSpeed Insights results page lists the Lighthouse performance audits in four sections: Lab Data (Metrics), Opportunities, Diagnostics, and Passed Audits.
PageSpeed Insights Lab Data (Metrics)
Lab data presents the Lighthouse test results used to calculate your overall performance score. Let’s take a look at these tests.
In late May 2020, PageSpeed Insights switched from Lighthouse version 5 to version 6, making some significant changes. Most notably, it replaced the First Meaningful Paint (FMP), First CPU Idle, and Max Potential First Input Delay (Max FID) audits with the Largest Contentful Paint (LCP), Total Blocking Time and Cumulative Layout Shift audits. The weighting changed as well.
Test | v5 Weight | Test | v6 Weight |
---|---|---|---|
FCP (First Contentful Paint) | 20% | FCP (First Contentful Paint) | 15% |
SI (Speed Index) | 26.7% | SI (Speed Index) | 15% |
TTI (Time to Interactive) | 33.3% | TTI (Time to Interactive) | 15% |
FMP (First Meaningful Paint) | 6.7% | LCP (Largest Contentful Paint) | 25% |
FCI (First CPU Idle) | 13.3% | TBT (Total Blocking Time) | 25% |
 | | CLS (Cumulative Layout Shift) | 5% |
First Contentful Paint (FCP)
The PageSpeed Insights (Lighthouse) First Contentful Paint metric, displayed in seconds, measures how long it takes the browser to render the first piece of DOM content after a user navigates to your page. Images, non-white <canvas> elements, and SVGs on your page are considered DOM content; anything inside an iframe isn't included.
Your FCP score is a comparison of your page’s FCP time and FCP times for real websites, based on data from the HTTP Archive. For example, sites performing in the ninety-ninth percentile render FCP in about 1.5 seconds. If your website’s FCP is 1.5 seconds, your FCP score is 99.
This table shows how to interpret your FCP score:
FCP time (in seconds) | Color-coding | FCP score (HTTP Archive percentile) |
---|---|---|
0–2 | Green (fast) | 75–100 |
2–4 | Orange (moderate) | 50–74 |
Over 4 | Red (slow) | 0–49 |
Font Loading
One issue that’s particularly important for FCP is font load time. Some browsers hide text until the font loads, causing a flash of invisible text (FOIT). There are two ways to avoid this: one is very simple but does not have universal browser support; the second is more complicated but has full browser support.
Option #1: Use font-display
The simple solution is to temporarily show a system font by including font-display: swap in your @font-face style.

```css
@font-face {
    font-family: 'Pacifico';
    font-style: normal;
    font-weight: 400;
    src: local('Pacifico Regular'), local('Pacifico-Regular'), url(https://fonts.gstatic.com/s/pacifico/v12/FwZY7-Qmy14u9lezJ-6H6MmBp0u-.woff2) format('woff2');
    font-display: swap;
}
```

font-display: swap tells the browser that text using this font should display immediately using a system font. Once the custom font is ready, the system font is swapped out. If a browser does not support font-display, it continues to follow its default behavior for loading fonts. Most modern browsers support font-display, but if you are concerned, check the list of supporting browsers.
Option #2: Wait to use custom fonts until they are loaded
With a bit more work, the same behavior can be implemented to work across all browsers. There are three parts to this approach:
- Don’t use a custom font on initial page load. This ensures that the browser displays text immediately using a system font.
- Detect when your custom font is loaded. You can accomplish this with a couple lines of JavaScript code, thanks to the Font Face Observer library.
- Update page styling to use the custom font.
Here are the changes you can expect to make in order to implement this:
- Modify your CSS to not use a custom font on initial page load.
- Add a script to your page. This script detects when the custom font loads and then updates the page styling.
Visit this Google code lab for a complete guide.
Note: I haven't tried the second option myself, because (1) I think font-display support is wide enough to justify using it and (2) I suspect the extra JavaScript may offset the font-loading gains. If you have any thoughts or knowledge about this, please share in the comments.
Speed Index (SI)
The PageSpeed Insights (Lighthouse) Speed Index metric measures how quickly content is visually displayed during page load. PageSpeed Insights first captures a video of the page loading in the browser and computes the visual progression between frames.
Your Speed Index score is a comparison of your page’s speed index and the speed indices of real websites, based on data from the HTTP Archive.
This table shows how to interpret your Speed Index score:
Speed Index (in seconds) | Color-coding | Speed Index score |
---|---|---|
0–4.3 | Green (fast) | 75–100 |
4.4–5.8 | Orange (moderate) | 50–74 |
Over 5.8 | Red (slow) | 0–49 |
While anything you do to improve page load speed will improve your Speed Index score, addressing any issues discovered by the following Diagnostic audits should have a particularly big impact:
- Minimize main thread work
- Reduce JavaScript execution time
- Ensure text remains visible during webfont load
Time to Interactive (TTI)
The PageSpeed Insights (Lighthouse) Time to Interactive metric measures how long it takes a page to become fully interactive. A page is considered fully interactive when:
- The page displays useful content, measured by the First Contentful Paint,
- Event handlers are registered for most visible page elements, and
- The page responds to user interactions within 50 milliseconds.
Measuring TTI is important because some sites optimize content visibility at the expense of interactivity. This can create a frustrating user experience: the site appears to be ready, but when the user tries to interact with it, nothing happens.
The TTI score is a comparison of your page’s TTI and the TTI for real websites, based on data from the HTTP Archive. For example, sites performing in the ninety-ninth percentile render TTI in about 2.2 seconds. If your website’s TTI is 2.2 seconds, your TTI score is 99.
This table shows how to interpret your TTI score:
TTI metric (in seconds) | Color-coding | TTI score (HTTP Archive percentile) |
---|---|---|
0–5.2 | Green (fast) | 75–100 |
5.3–7.3 | Orange (moderate) | 50–74 |
Over 7.3 | Red (slow) | 0–49 |
One improvement that can have a particularly big effect on TTI is deferring or removing unnecessary JavaScript work. Look for opportunities to optimize your JavaScript. In particular, consider reducing JavaScript payloads with code splitting and applying the PRPL pattern. Optimizing third-party JavaScript also yields significant improvements for some sites.
These two Diagnostic audits provide additional opportunities to reduce JavaScript work:
- Minimize main thread work
- Reduce JavaScript execution time
Total Blocking Time (TBT)
The PageSpeed Insights (Lighthouse) Total Blocking Time metric measures the total amount of time that a page is blocked from responding to user input. This includes mouse clicks, screen taps, or keyboard presses. The sum is calculated by adding the blocking portion of all long tasks between First Contentful Paint and Time to Interactive. Any task that executes for more than 50 ms is a long task. The amount of time after 50 ms is the blocking portion. For example, if Lighthouse detects a 70 ms long task, the blocking portion would be 20 ms.
Your TBT score is a comparison of your page’s TBT time and TBT times for the top 10,000 sites when loaded on mobile devices. The top site data includes 404 pages.
This table shows how to interpret your TBT score:
TBT time (in milliseconds) | Color-coding |
---|---|
0–300 | Green (fast) |
300-600 | Orange (moderate) |
Over 600 | Red (slow) |
See What is causing my long tasks? to learn how to diagnose the root cause of long tasks with the Performance panel of Chrome DevTools.
In general, the most common causes of long tasks are:
- Unnecessary JavaScript loading, parsing, or execution. While analyzing your code in the Performance panel you might discover that the main thread is doing work that isn’t really necessary to load the page. Reducing JavaScript payloads with code splitting, removing unused code, or efficiently loading third-party JavaScript should improve your TBT score.
- Inefficient JavaScript statements. For example, after analyzing your code in the Performance panel, suppose you see a call to document.querySelectorAll('a') that returns 2000 nodes. Refactoring your code to use a more specific selector that only returns 10 nodes should improve your TBT score.
Largest Contentful Paint (LCP)
The PageSpeed Insights (Lighthouse) Largest Contentful Paint metric measures when the largest content element in the viewport is rendered to the screen. This approximates when the main content of the page is visible to users. It’s useful because metrics like First Contentful Paint (FCP) only capture the very beginning of the loading experience. If a page shows a splash screen or displays a loading indicator, this moment is not very relevant to the user.
First Meaningful Paint (FMP) and Speed Index (SI) help capture more of the loading experience after the initial paint. These metrics are, however, complex and often wrong. LCP is a simple and more accurate way to measure when the main content of a page is loaded.
The table below shows how to interpret your LCP score:
LCP time (in seconds) | Color-coding |
---|---|
0-2 | Green (fast) |
2-4 | Orange (moderate) |
Over 4 | Red (slow) |
The most common causes of a poor LCP are:
- Slow server response times
- Render-blocking JavaScript and CSS
- Slow resource load times
- Client-side rendering
For a deep dive on how to improve LCP, see Optimize LCP.
Cumulative Layout Shift (CLS)
The PageSpeed Insights (Lighthouse) Cumulative Layout Shift metric relates to unexpected movement of page content. This usually happens when resources load asynchronously or DOM elements get dynamically added to the page above existing content. The culprit might be an image or video with unknown dimensions, a font that renders larger or smaller than its fallback, or a third-party ad or widget that dynamically resizes itself.
CLS helps you address this problem by measuring the sum total of all individual layout shift scores for every unexpected layout shift that occurs during the entire lifespan of the page.
A layout shift occurs any time a visible element changes its position from one frame to the next.
To calculate the layout shift score, the browser looks at the viewport size and the movement of unstable elements in the viewport between two rendered frames. The layout shift score is a product of two measures of that movement: the impact fraction and the distance fraction.
```
layout shift score = impact fraction * distance fraction
```
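To make the formula concrete, here is a hypothetical worked example: an element that fills 50% of the viewport and shifts down by 25% of the viewport height has an impact fraction of 0.75 (the union of its old and new positions covers 75% of the viewport) and a distance fraction of 0.25, giving a layout shift score of 0.75 * 0.25 = 0.1875.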
Not all layout shifts are bad. A layout shift is only bad if the user isn’t expecting it. On the other hand, layout shifts that occur in response to user interactions (clicking a link, pressing a button, typing in a search box and similar) are generally fine, as long as the shift occurs close enough to the interaction that the relationship is clear to the user.
To provide a good user experience, sites should strive to have a CLS score of less than 0.1. To ensure you’re hitting this target for most of your users, a good threshold to measure is the 75th percentile of page loads, segmented across mobile and desktop devices.
For most websites, you can avoid all unexpected layout shifts by sticking to a few guiding principles:
- Always include size attributes on your images and video elements, or otherwise reserve the required space with something like CSS aspect ratio boxes. This approach ensures that the browser can allocate the correct amount of space in the document while the image is loading. Note that you can also use the unsized-media feature policy to force this behavior in browsers that support feature policies.
- Never insert content above existing content, except in response to a user interaction. This ensures any layout shifts are expected.
- Prefer transform animations to animations of properties that trigger layout changes. Animate transitions in a way that provides context and continuity from state to state.
PageSpeed Insights Opportunities
The Opportunities section lists areas that, if addressed, could improve the various metrics results. The more significant the opportunity, the greater the effect it will have on your Performance score.
The opportunities section has suggestions and linked documentation on how to implement them. Below are the opportunities you might see.
Eliminate render-blocking resources
Render-blocking resources are CSS, JavaScript, or HTML files that “block” the rendering of a page until completely downloaded and parsed. Any delays in downloading or any errors in the script will cause more delay.
Render blocking is especially inefficient—and annoying for users—when above the fold content must wait for resources that aren’t needed.
For example, consider ads displayed below the fold. The code for those must load and process before you can see the above the fold content. The same goes for social sharing buttons, a live Twitter feed in the sidebar, etc.
The Eliminate render-blocking resources audit lists all blocking scripts. It is probably the PageSpeed Insights warning that most exasperates us when trying to improve our score.
To get more specific, PageSpeed Insights (Lighthouse) flags the following:
A <script> tag that:

- Is in the <head> of the document.
- Does not have a defer or async attribute.

A <link rel="stylesheet"> tag that:

- Does not have a disabled attribute. When this attribute is present, the browser does not download the stylesheet.
- Does not have a media attribute that matches the user's device.

A <link rel="import"> tag that does not have an async attribute.
Reducing Render-Blocking
The ideal way to reduce the impact of render-blocking resources is to identify what’s critical and what’s not. Once you identify critical code, move it from the render-blocking URL to inside the HTML of your page.
There are four problems with trying to eliminate render blocking the “ideal” way:
- It is hard to do. WordPress, your theme, and the plugins you use all generate code so finding and controlling critical content can be a challenge.
- Critical code can differ across pages on our site.
- Using hooks, enqueuing and dequeuing, we can work with entire files but removing critical pieces of individual files is more challenging.
- If your scripts are not small in size you will have a large HTML page to download. A large page won’t benefit from browser caching because we don’t generally tell browsers to cache HTML documents. That’s because they frequently update with new content.
Given the above, here are the best ways to solve render blocking issues.
Inline your external scripts
I just said that this is hard to do. But, hard isn’t impossible, especially if your site is simple without many plugins and if all your pages are similar.
The main candidate for inlining is the CSS needed to render above the fold content. Identify that CSS and move it from the main style.css file into your header.php file. Don't forget to use your child theme to protect your work from theme updates.
There are tools that can help you find the critical CSS (the Sitelocity critical CSS generator, criticalpathcssgenerator, and this Chrome bookmarklet). You can also use the Coverage tool in Chrome DevTools. Some plugins, like LiteSpeed Cache and Autoptimize (working with CriticalCSS), offer a feature to do this automatically. If you try that, test thoroughly before relying on it.
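If you prefer to do it by hand, here is a minimal sketch of printing your critical rules in the <head>. It assumes you have saved the generated above-the-fold CSS to a critical.css file in your child theme (a hypothetical filename); you would still dequeue or defer the full stylesheet separately:

```php
// Print the critical CSS inline in the <head>, as early as possible (priority 1).
// Assumes a hand-built critical.css in the child theme; adjust the path as needed.
function my_inline_critical_css() {
    $critical = get_stylesheet_directory() . '/critical.css';
    if ( is_readable( $critical ) ) {
        echo '<style id="critical-css">' . file_get_contents( $critical ) . '</style>' . "\n";
    }
}
add_action( 'wp_head', 'my_inline_critical_css', 1 );
```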
Though CSS is the most likely candidate for inlining, JavaScript code may make sense as well.
For example, my theme's responsive navigation menu uses a small amount of JavaScript loaded from two different files. Loading those scripts results in a render blocking warning. I could use defer or async to eliminate the warning (see below), but then the navigation menu (above the fold content) would have to wait to load.

To deal with this, I decided to place that code inline at the end of my pages. I did this by combining the code from the two relevant JavaScript files. Then I used an online JavaScript compression tool to minify the combined code. Finally, I took that minified code (just three lines) and added it to the bottom of my footer.php file in my child theme, right before the </body> tag.
Load JavaScripts at the end of a page
The main reason to do this is that the browser fetches and executes scripts immediately when it encounters them. Thus, the HTML page cannot finish rendering until the script executes. Moving these scripts to the end of a page will improve site speed considerably.
The problem with doing this is that some site functionality requires certain scripts to load before it will work. For this reason, Patrick Sexton recommends that you separate your JavaScript into two groups: JavaScript the page needs in order to load (jQuery is a common one) and JavaScript that does its work after the page loads. Patrick's JavaScript usage tool can help you with this task.
Fortunately, some optimization plugins give you the option to specify which scripts not to move to the bottom of the page.
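If you are not using a plugin, you can move an individual script yourself by re-registering it with the in_footer flag set to true. A sketch, using a hypothetical handle and source path (check your own site for the real values):

```php
// Hypothetical example: push a script that was registered for the <head> into the footer.
function my_move_scripts_to_footer() {
    $src = plugins_url( 'js/script.js', 'some-plugin/some-plugin.php' ); // assumed original location
    wp_dequeue_script( 'some-plugin-script' );
    wp_deregister_script( 'some-plugin-script' );
    wp_enqueue_script( 'some-plugin-script', $src, array( 'jquery' ), null, true ); // true = load in footer
}
add_action( 'wp_enqueue_scripts', 'my_move_scripts_to_footer', 99999 );
```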
You can also load CSS files at the end of a page, but this often causes the user to start seeing the page content without any styles because the browser hasn’t loaded and processed rules from the external stylesheet yet. This situation is called flash of unstyled content (FOUC).
Defer load JavaScripts or load them asynchronously
Loading a JavaScript file with a defer or async attribute can eliminate render blocking. That's because the browser no longer has to stop building the page while it fetches and executes the script.

With either of these attributes, page elements requiring that JavaScript won't function until after the script has loaded and executed. Thus, you should only use this technique for JavaScript files that aren't needed to render above the fold content.

What's the difference between defer and async? With async, the browser downloads the JavaScript while still parsing the rest of the HTML; that is, it doesn't completely stop parsing while the file downloads. However, it will pause the HTML parser to execute the script as soon as the download finishes. With defer, the browser also downloads the JavaScript while still parsing the rest of the HTML, but it then waits to execute the script until the HTML parsing is finished.

The two main implications are that async scripts can slow down parsing a bit (not a problem for below the fold functionality) and may or may not execute in the order in which they appear in the page. Deferred (defer) scripts, in contrast, execute in the order they appear in the page, but only after the HTML is fully parsed.

With either defer or async, it shouldn't really matter where you put the script (in the header or the footer), though order might matter. So, which should you use? Try defer and test to make sure all functionality on your site still works. If not, consider re-ordering and trying again, or selectively try async.
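If your optimization plugin doesn't handle this for you, you can add the attribute yourself with WordPress's script_loader_tag filter. In this sketch the handles in the array are placeholders for whichever scripts you have verified are safe to defer:

```php
// Add the defer attribute to selected scripts (handles below are placeholders).
function my_defer_scripts( $tag, $handle, $src ) {
    $defer_handles = array( 'some-widget-js', 'another-plugin-js' );
    if ( in_array( $handle, $defer_handles, true ) && false === strpos( $tag, ' defer' ) ) {
        $tag = str_replace( ' src=', ' defer src=', $tag );
    }
    return $tag;
}
add_filter( 'script_loader_tag', 'my_defer_scripts', 10, 3 );
```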
Combine external scripts
Some optimization plugins let you combine multiple JavaScript—or, alternatively, CSS—files into one. This combined file will still trigger a render blocking warning unless you move it to the footer and/or load it asynchronously.
Aggregating scripts is useful with HTTP/1.1 because it must download multiple files sequentially. HTTP/2, in contrast, allows parallel downloading, so the argument is that with HTTP/2 a single large aggregated file can take longer to download than multiple small files fetched in parallel. Also, since aggregation is done per page, reusable code (like jQuery) ends up bundled into many different per-page aggregate files, which reduces the browser's ability to cache that reusable code just once. That is inefficient.
The Autoptimize FAQ references three articles which indicate that it is premature to rely on HTTP/2 over the aggregating files technique (all three are worth a read). Two of those articles are from late 2015, so I don’t know if things have changed since then. The third article is from early 2017, but its test cases included no optimization; aggregation and minification; and aggregation, minification and a CDN. Since the core argument is to NOT aggregate if you are using HTTP/2, I would have liked to see a test case comparing HTTP/1 and HTTP/2 with just minification (no aggregation).
I imagine there are a lot of articles on the topic, but ultimately I like the conclusion offered on the AO FAQ page:
Configure, test, reconfigure, retest, tweak and look what works best in your context. Maybe it’s just HTTP/2, maybe it’s HTTP/2 + aggregation and minification, maybe it’s HTTP/2 + minification.
Removing anything unused
Many JavaScript files have multiple functions, not all of which are necessarily used on any single page. Ideally, you should remove unused code from these files to increase page speed. In reality, this is as difficult as inlining code and probably not worth worrying about.
As for CSS files, they usually include a lot of unused styling rules, often because:
- Themes that are customizable have massive additional unnecessary CSS.
- Theme designers often use more CSS than required to make their jobs easier.
- Frameworks, like themes, include CSS that is not tailored to the resulting website.
If you use a custom theme, ask the designer if a slimmer version is available. If you are considering a new theme, make optimized CSS delivery a requirement.
Google AdSense
Google AdSense does not load asynchronously by default, so it will fail the render blocking audit. You can easily change that by adding the async attribute via your ad management plugin, as follows:

```html
<script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
```
Properly size images
Properly size images lists all images in your page that aren’t appropriately sized, along with the potential savings in kilobytes (KB). For each image on the page, PageSpeed Insights (Lighthouse) compares the size of the rendered image to the actual size of the image. The rendered size also accounts for device pixel ratio. If the rendered size is at least 25KB smaller than the actual size, then the image fails the audit.
The main strategy for serving appropriately-sized images is “responsive images.” With responsive images, you generate multiple versions of each image. You then specify which version to use in your HTML or CSS using media queries, viewport dimensions, and so on.
If you don’t use WordPress or for some reason are failing this audit, make your images responsive. To do so, specify multiple image versions and the browser will choose the best one to use.
Before | After |
---|---|
<img src="flower-large.jpg"> | <img src="flower-large.jpg" srcset="flower-small.jpg 480w, flower-large.jpg 1080w" sizes="50vw"> |
The <img> tag's src, srcset, and sizes attributes all interact to achieve this end result.
The “src” attribute
The src attribute makes this code work for browsers that don't support the srcset and sizes attributes. If a browser does not support these attributes, it will fall back to loading the resource specified by the src attribute. So, make sure the image specified by src is large enough to work well on all device sizes.
The “srcset” attribute
The srcset attribute is a comma-separated list of image filenames and their width or density descriptors.

This example uses width descriptors, which are just a way to tell the browser the width of an image. This saves the browser from needing to download the image to determine its size.

Width descriptors use the w unit (instead of px). 480w tells the browser that flower-small.jpg is 480px wide; 1080w tells the browser that flower-large.jpg is 1080px wide.
The “sizes” attribute
The sizes attribute tells the browser how wide the image will be when displayed. However, the sizes attribute has no effect on display size; you still need CSS for that.
The browser uses this information, along with what it knows about the user’s device (i.e. its dimensions and pixel density), to determine which image to load.
If a browser does not recognize the sizes attribute, it will fall back to loading the image specified by the src attribute.

sizes can be specified using a variety of units. The following are all valid sizes:

- 100px
- 33vw (a percentage of the viewport width)
- 20em (a multiple of your font size)
- calc(50vw-10px)

The following is not a valid size:

- 25% (the sizes attribute cannot use percentages)
If all that seems a bit complicated, an alternative strategy is to use vector-based image formats, like SVG. With a finite amount of code, an SVG image can scale to any size.
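Note that WordPress has handled most of this for you since version 4.4: images inserted from the Media Library automatically get srcset and sizes attributes. In your own template code you can get the same responsive markup from wp_get_attachment_image(); the attachment ID below is a placeholder:

```php
// $attachment_id is a placeholder -- use a real Media Library attachment ID.
$attachment_id = 123;

// Outputs an <img> tag with srcset and sizes generated from the registered image sizes.
echo wp_get_attachment_image( $attachment_id, 'large', false, array( 'alt' => 'Flower' ) );
```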
Defer offscreen images
Defer offscreen images lists all offscreen or hidden images in your page along with the potential savings in kilobytes (KB). Consider lazy-loading these images after all critical resources have finished loading to lower Time to Interactive.
Efficiently encode images (Optimize Images)
Efficiently encode images lists all unoptimized images, with potential savings in kilobytes. Optimize these images so that the page loads faster and consumes less data.
PageSpeed Insights (Lighthouse) collects all the JPEG or BMP images on the page, sets each image’s compression level to 85, and then compares the original version with the compressed version. If the potential savings are 4KB or greater, it flags the image as optimizable.
There are many steps you can take to optimize your images, including the following.
Compressing images
There are many WordPress plugins available to compress your images. These typically can bulk compress your existing images and compress each new one during the upload process. For the record, I prefer the EWWW Image Optimizer plugin for compression.
If you are using Jetpack you might want to try the free Site Accelerator (formerly Photon) service. It will not only compress your images but also serve as a completely free CDN for them.
One thing to note is that image optimizing plugins only apply to files in your Media Library. You will need to manually compress anything not in your library.
Besides plugins, there are a lot of useful tools to compress images, both online and offline. My favorites are the ones that will perform multiple levels of compression and show you each so you can easily decide which one has the best trade-off of size and image quality. Below are some online compression tools worth considering:
- Compress PNG did a good job with the one file I tested, but you have to download the result to see the full effect. It also has a JPEG option.
- Compress Image is my current favorite because it handles multiple image types and lets you select the amount of optimization you want.
- Compressor.io is a simple online tool. Just drag any image you want to compress and wait for the result. It’s not an option for bulk compression but it does a nice job. One warning: if you try to compress an image more than once, Compressor.io seems to work but will just download the first, cached, optimization. One way to get around this is to make a copy of the image, change its name and submit that.
- Dynamic Drive Online Image Optimizer will optimize and also lets you convert from one format to another. The output shows you the various levels of optimization and you can choose the best for your needs.
- JPEG-Optimizer lets you specify the compression level.
- TinyJPG provided great results for the one file I tested but you have to download the converted file to see it.
There are many offline options as well. One I recently read good things about is PNGQuant; specifically, I read that it does a good job of eliminating PageSpeed Insights warnings when using the 80-90 setting.
If you use GTmetrix, note that it provides a link to download optimized images for ones it flags as unoptimized. I haven’t really investigated the quality and usefulness of those images but they may help you save some optimization time.
Lazy loading images
One of the best ways I have found to silence the optimize images warning is by lazy loading your images. Lazy loading just means that an image doesn’t load until needed.
Technically, lazy loading images has no impact on image optimization but images found beneath the fold do not trigger the Google warning. Perhaps this is a bit of a cheat, but it is also a legitimate tool to use because it will delay loading images not needed by users that don’t scroll down your page.
You can use a specialized lazy loading plugin like a3 Lazy Load. Alternatively, some general optimization plugins also offer this functionality.
Of course, lazy loading shouldn’t replace optimization. You should do both.
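Note that WordPress 5.5 and later adds loading="lazy" to most images natively. On older versions, or for images in custom markup, one blunt sketch is to filter the post content and add the attribute yourself; treat this as a starting point (the function name is made up), not a replacement for a proper lazy loading plugin:

```php
// Add loading="lazy" to content images that don't already declare a loading attribute.
// Simple string-based approach; a lazy load plugin or WordPress 5.5+ native support is preferable.
function my_add_lazy_loading( $content ) {
    if ( is_admin() || is_feed() ) {
        return $content;
    }
    return preg_replace( '/<img(?![^>]*\bloading=)/i', '<img loading="lazy"', $content );
}
add_filter( 'the_content', 'my_add_lazy_loading', 99 );
```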
Serving responsive images
I covered this in the Properly size images section.
Serving images with correct dimensions
Serving images with correct dimensions is important, but WordPress does this well, so you shouldn't need to worry about it.
Inline images with Base64
Another suggestion I came across is to inline your images in your HTML. This is something I had never heard of doing and it has several drawbacks. Google cannot index these images, they cannot be browser cached and they cannot be linked or pinned. So, I never really investigated this, but if interested, this guide for embedding images might be helpful. There are multiple online tools to convert images to Base64 encoding, including Base64 Guru, Code Beautify, and Base64 Image.
Use CSS sprites
A common—but far from simple—suggestion is to combine background images in CSS Sprites. Doing this lets you load only one graphic file instead of multiple files.
Serve images in next-gen formats
Serve images in next-gen formats lists all images in older image formats, showing potential savings gained by serving WebP versions of those images.
As of this writing, WordPress doesn’t allow you to directly upload WebP images. There are, however, plugins you can use to convert existing and new images. These plugins can also add code to automatically use WebP when supported by the user’s browser.
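If you also want WordPress to accept direct .webp uploads in the meantime, a small filter can allow the MIME type; this is a sketch only, and the conversion plugins mentioned above remain the more complete solution:

```php
// Allow .webp files to be uploaded to the Media Library.
function my_allow_webp_uploads( $mimes ) {
    $mimes['webp'] = 'image/webp';
    return $mimes;
}
add_filter( 'upload_mimes', 'my_allow_webp_uploads' );
```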
Minify CSS & Minify JavaScript
Minification is the process of removing unnecessary white space, tabs, carriage returns, newlines, comments, and other characters from a file.
The Minify CSS and Minify JavaScript PageSpeed Insights (Lighthouse) audits list all unminified CSS/JS files, along with the potential savings in kilobytes (KB) when these files are minified. Estimates of potential savings are based on the comments and white space characters found in your files. This is a conservative estimate since minifiers can perform clever optimizations to further reduce your file size.
The easiest way to minify your files is to use a page speed optimization plugin. Most offer minify features but I have noticed that many have one of the following shortcomings.
- Sometimes the minification process doesn’t quite satisfy PageSpeed Insights. There are multiple minify approaches (and code functions) and it seems some are better than others.
- Many plugins minify CSS and JavaScript but not HTML. Unfortunately, my search for a plugin that would minify only HTML was fruitless.
- Some plugins do an especially good job minifying one type of file but not another. I found this to be the case with W3TC, for example. This may be fine if you are using more than one plugin that can do the job in partnership, but a single solution would be ideal.
Thus, be sure to test that your preferred plugin(s) work sufficiently well.
Something else to consider if using a plugin that minifies is that it may create one or more new files with the minified code. If not loaded asynchronously they will trigger a render blocking warning.
Many users activate a minify plugin and expect it to just work. This is simply wrong on many levels. Minifying is like a driving a Formula One car, it’s extremely fast but dangerous at the same time. If your site has more than 40 Javascript files, or a huge number of daily visitors, an on-the-fly minifying solution […] Not all files can be minified, i.e. they simply break when minified. Some are minifiable but break as soon as they are combined with other files. This is one of the main reasons why a JavaScript function does not work at all or your site look messed up when minifying is active. To resolve that, you need to identify what files are causing troubles. Problematic files can be detected by excluding files one by one in the minification list. — Khang Minh
I haven’t tested plugin minification recently, but Autoptimize does a good job for me. Alternatively, Cloudflare’s minify options work quite well.
If, for some reason, you still get minify warnings, you can always do things manually. The process is basically like this:
- Find the offending scripts listed by PageSpeed Insights.
- Create a copy of those scripts in your child theme.
- Use an online minify tool like HTML Minifier or Pretty Diff Minifier to tidy them up.
- Dequeue the original scripts.
- Enqueue the new minified version of those scripts.
Remove unused CSS
The Remove unused CSS audit fails when PageSpeed Insights (Lighthouse) detects that a page has at least 2 KB of CSS rules that weren’t used while rendering the above the fold content.
As already mentioned, you inline critical CSS to avoid failing the Eliminate render-blocking resources audit. This audit will tell you if you inlined too much CSS code.
Enable text compression
Compression means that the code for a page is compressed by the server and de-compressed by your browser. Since sending less data over the Internet is faster, the page should load more quickly.
The Enable text compression audit lists all text-based resources that aren’t compressed. PageSpeed Insights (Lighthouse) gathers all responses that:
- Have text-based resource types.
- Do not include a content-encoding header set to br, gzip, or deflate.
PageSpeed Insights (Lighthouse) then compresses each of these with GZIP to compute the potential savings. If the original size of a response is less than 1.4KB, or if the potential compression savings is less than 10% of the original size, then it does not flag that response in the results.
Your web hosting company may already enable compression on its servers. If not, I already provided code to enable gzip compression in the .htaccess file.
Note that there is a newer, more effective compression technique called Brotli. It's pretty widely supported across browsers, though not universally yet. Your web host may already have enabled it on your server; use KeyCDN's Brotli Test page to verify. If not, ask your host whether it can be enabled. Alternatively, if you use Cloudflare, you can enable Brotli in the Speed settings (it may be enabled by default).
Preconnect to required origins
Preconnect to required origins lists all key requests that aren't yet prioritizing fetch requests with <link rel=preconnect>.

Consider adding preconnect or dns-prefetch resource hints to establish early connections to important third-party origins.

preconnect informs the browser that your page intends to establish a connection to another origin, and that you'd like the process to start as soon as possible.
Establishing connections often involves significant time in slow networks, particularly when it comes to secure connections, as it may involve DNS lookups, redirects, and several round trips to the final server that handles the user’s request.
Taking care of all this ahead of time can make your application feel much snappier to the user without negatively affecting the use of bandwidth. Most of the time in establishing a connection is spent waiting, rather than exchanging data.
Informing the browser of your intention is as simple as adding a link tag to your page:

```html
<link rel="preconnect" href="https://example.com">
```

This lets the browser know that the page intends to connect to example.com and retrieve content from there.
Bear in mind that while <link rel="preconnect"> is pretty cheap, it can still take up valuable CPU time, particularly on secure connections. This is especially bad if the connection isn't used within 10 seconds, as the browser closes it, wasting all of that early connection work.

In general, try to use <link rel="preload">, as it's a more comprehensive performance tweak, but do keep <link rel="preconnect"> in your toolbelt for the edge cases.
<link rel="dns-prefetch"> is another <link> type related to connections. It handles the DNS lookup only, but it has wider browser support, so it may serve as a nice fallback. You use it the exact same way:

```html
<link rel="dns-prefetch" href="https://example.com">
```
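In WordPress you don't have to hand-edit header.php for these hints; the wp_resource_hints filter lets you add origins from your child theme's functions.php. The origins below are examples only; list the third-party hosts your pages actually call:

```php
// Add preconnect / dns-prefetch hints for third-party origins (example origins only).
function my_resource_hints( $urls, $relation_type ) {
    if ( 'preconnect' === $relation_type ) {
        $urls[] = array( 'href' => 'https://fonts.gstatic.com', 'crossorigin' );
    }
    if ( 'dns-prefetch' === $relation_type ) {
        $urls[] = '//www.google-analytics.com';
    }
    return $urls;
}
add_filter( 'wp_resource_hints', 'my_resource_hints', 10, 2 );
```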
Reduce server response times (TTFB)
Reduce server response times (TTFB) reports Time to First Byte, the time it takes for a user's browser to receive the first byte of page content. This audit fails when that time is more than 600 ms, though a good result is less than 200 ms.
A lot of factors can affect your server’s response time. Most of these depend on your hosting company, notably the server configuration and the traffic load it faces. That’s why it is important to use a good web hosting company, though your DNS provider can impact TTFB as well.
Things aren’t completely out of your control, however. By optimizing your site—as we are doing here—and especially by using a caching solution (server and plugin-based), you can decrease the response time.
A CDN can also improve TTFB, but note that the free Cloudflare plan can actually make it worse. That's because Cloudflare adds firewalls and other features that many CDN providers don't have; these are nice to have, but they come at a speed cost. Utilizing Cloudflare's full page caching option can help.
For more information, “Server response time” by Patrick Sexton gives a very good big picture look at what affects server response time. He also discusses some things you can do to improve yours. Kinsta also has a good deep dive on the TTFB topic.
Avoid multiple page redirects
Avoid multiple page redirects (formerly called Landing Page Redirects) flags pages that have two or more redirects.
Redirects slow down your page load speed. When a browser requests a redirected resource the server usually returns an HTTP response like this:
```
HTTP/1.1 301 Moved Permanently
Location: /path/to/new/location
```
The browser must then make another HTTP request at the new location to retrieve the resource. This additional trip across the network can delay the loading of the resource by hundreds of milliseconds.
It's unlikely that you will have an issue with this audit. In the past, mobile devices were often redirected to a special m.domain.com URL. These days, responsive designs are the norm and such redirects are no longer necessary.
Even if you pass this audit, you may have one minor issue with redirects that can slightly affect page speed. That can happen when dealing with www
and HTTPS
. You may have redirects set up to handle this either via your control panel or with code in your .htaccess
file. But, those redirects may not be efficient.
For example, let’s say your site is set up to be https://www.domain.com and someone types in http://domain.com. If you don’t have your redirects properly configured you could get two redirects:
http://domain.com -> http://www.domain.com
http://www.domain.com -> https://www.domain.com
When what you really want is:
http://domain.com -> https://www.domain.com
Putting the following code inside your .htaccess file (at the top) should do the job:
##### START NON-WWW NON-DEV TO HTTPS WWW REDIRECT #####
RewriteEngine On
RewriteCond %{HTTP_HOST} !^(dev|mail|www)\. [NC]
RewriteRule ^(.*)$ https://www.%{HTTP_HOST}/$1 [R=301,L]
##### STOP NON-WWW TO WWW REDIRECT #####

##### START FORCE HTTP WWW TO HTTPS WWW #####
RewriteEngine on
RewriteCond %{HTTPS} off
RewriteRule (.*) https://%{HTTP_HOST}%{REQUEST_URI} [R=301,L]
##### STOP FORCE SSL #####
Preload key requests
Preload key requests flags the third level of requests in your critical request chain as preload candidates.
I discussed how to preload web fonts in the Ensure text remains visible during webfont load section below. Apply the same approach for any resource this audit lists if it is needed for above the fold content. If not, defer loading the resource instead.
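As a rough sketch of what preloading a non-font resource might look like, you could hook wp_head and emit a preload tag for, say, a hero image or critical stylesheet flagged by the audit. The function name and file paths below are placeholders, not real files on your site:

// Preload critical above-the-fold resources flagged by the audit (paths are placeholders).
function my_preload_key_requests() {
	echo '<link rel="preload" href="' . esc_url( get_stylesheet_directory_uri() . '/images/hero.jpg' ) . '" as="image">' . "\n";
	echo '<link rel="preload" href="' . esc_url( get_stylesheet_directory_uri() . '/css/critical.css' ) . '" as="style">' . "\n";
}
add_action( 'wp_head', 'my_preload_key_requests', 1 );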
Use video formats for animated content
Use video formats for animated content lists all animated GIFs, along with estimated savings in seconds achieved by converting these GIFs to video.
Large GIFs are inefficient for delivering animated content. By converting large GIFs to videos, you can save big on users’ bandwidth. Consider using MPEG4/WebM videos for animations and PNG/WebP for static images instead of GIF to save network bytes.
Animated GIFs have three key traits that a video needs to replicate:
- They play automatically.
- They loop continuously (usually, but it is possible to prevent looping).
- They’re silent.
Luckily, you can recreate these behaviors using the <video> element.
<video autoplay loop muted playsinline>
  <source src="my-animation.webm" type="video/webm">
  <source src="my-animation.mp4" type="video/mp4">
</video>
Reduce the impact of third-party code
Reduce the impact of third-party code flags pages with third-party scripts that block the main thread for 250 ms or longer.
A third-party script is any script hosted on a domain different from the one you are testing. Such scripts are commonly needed to add an advertising network, social media button, A/B test, or analytics service to your page, but they can significantly affect your page load performance.
I have shown above how you can localize certain third party scripts and that’s the best solution for dealing with this issue when it is feasible. But, sometimes it is not feasible. If so, you must decide between the functionality of the third-party script and the page speed hit it causes.
PageSpeed Insights Diagnostics
The Diagnostics section lists audits that do not directly affect your Performance score. Use them as additional guidance that you can explore to further improve your page speed performance.
Ensure text remains visible during webfont load
Ensure text remains visible during webfont load is the diagnostic PageSpeed Insights will highlight if any of your non-system fonts impact your First Contentful Paint metric. So, here we are with fonts again.
I have already discussed how to localize non-system fonts (web fonts) like those from Google. And, I have talked about how to use font-display: swap to avoid a flash of invisible text (FOIT). Both of these techniques are recommended, but using them probably won’t prevent this PageSpeed Insights warning. That’s because web fonts are not loaded until the critical resources (CSS, JS) are downloaded, which leads to a flash of unstyled text (FOUT).
Preload web fonts
To solve this problem, you can preload web fonts that are required immediately (i.e., for above the fold content). Normally, you would do this with a <link> element in the head of the document. For example:
<link rel="preload" href="/assets/Pacifico-Bold.woff2" as="font" type="font/woff2" crossorigin>
The as="font" type="font/woff2"
attributes tell the browser to download this resource as a font and helps in prioritization of the resource queue. The crossorigin
attribute indicates the font may come from a different domain. Without this attribute, the browser ignores the preloaded font.
While that seems easy, we are dealing with WordPress so we don’t want to modify the theme header file directly. Instead, we should use the wp_head hook. In my case, I use Google’s Oswald web font and Font Awesome for icons in my navigation menu, which is above the fold. So, I want to preload those.
function add_fonts_preload() { ?>
	<link rel="preload" href="<?php echo get_site_url(); ?>/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUhiZTaR.woff2" as="font" crossorigin />
	<link rel="preload" href="<?php echo get_site_url(); ?>/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUZiZQ.woff2" as="font" crossorigin />
	<link rel="preload" href="<?php echo get_site_url(); ?>/wp-content/themes/mytheme/fonts/font-awesome-4.7.0/fonts/fontawesome-webfont.woff" as="font" crossorigin />
<?php
}
add_action( 'wp_head', 'add_fonts_preload', 1 );
While that takes care of preloading the font files, we still could have FOUT or render blocking because the relevant CSS is in our child theme stylesheet. We can go ahead and move that to the head in the same function we just wrote. Just add the following lines:
<style type='text/css'>
<?php
// --- Google Oswald
echo '@font-face {font-family: "Oswald";src: local("Oswald"),url( ' . get_site_url() . '/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUhiZTaR.woff) format("woff");font-weight:400;font-style: normal;font-display: swap;}';
echo '@font-face {font-family: "Oswald";src: local("Oswald"),url( ' . get_site_url() . '/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUZiZQ.woff2) format("woff2");font-weight:400;font-style: normal;font-display: swap;}';
// --- Font Awesome
echo '@font-face {font-family: "FontAwesome";src: url( ' . get_site_url() . '/wp-content/themes/mytheme/fonts/font-awesome-4.7.0/fonts/fontawesome-webfont.woff) format("woff");font-weight:normal;font-style:normal;font-display:block;}';
?>
</style>
Putting it together, we have the following:
function add_fonts_preload() { ?>
	<link rel="preload" href="<?php echo get_site_url(); ?>/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUhiZTaR.woff2" as="font" crossorigin />
	<link rel="preload" href="<?php echo get_site_url(); ?>/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUZiZQ.woff2" as="font" crossorigin />
	<link rel="preload" href="<?php echo get_site_url(); ?>/wp-content/themes/mytheme/fonts/font-awesome-4.7.0/fonts/fontawesome-webfont.woff" as="font" crossorigin />
	<style type='text/css'>
	<?php
	// --- Google Oswald
	echo '@font-face {font-family: "Oswald";src: local("Oswald"),url( ' . get_site_url() . '/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUhiZTaR.woff) format("woff");font-weight:400;font-style: normal;font-display: swap;}';
	echo '@font-face {font-family: "Oswald";src: local("Oswald"),url( ' . get_site_url() . '/wp-content/themes/mytheme/fonts/Oswald/TK3_WkUHHAIjg75cFRf3bXL8LICs1_FvsUZiZQ.woff2) format("woff2");font-weight:400;font-style: normal;font-display: swap;}';
	// --- Font Awesome
	echo '@font-face {font-family: "FontAwesome";src: url( ' . get_site_url() . '/wp-content/themes/mytheme/fonts/font-awesome-4.7.0/fonts/fontawesome-webfont.woff) format("woff");font-weight:normal;font-style:normal;font-display:block;}';
	?>
	</style>
<?php
}
add_action( 'wp_head', 'add_fonts_preload', 1 );
Finally, note that preload can harm performance by making unnecessary requests for unused resources. So, be sure to only preload fonts used in above the fold content. In my case, I am preloading just the .woff files for my Google font (25 KB) and Font Awesome (96 KB). That shouldn’t be too bad.
Avoid chaining critical requests
Critical request chains have to do with the Critical Rendering Path (CRP) and how browsers load your pages. Certain elements must be loaded completely before your page becomes visible. Thus, this PageSpeed Insights audit serves as a proxy for identifying render-blocking critical resources. The greater the length of the chains and the larger the download sizes, the more significant the impact on page load performance.
The avoid chaining critical requests diagnostic will show you the request chains on the page you’re analyzing. It will show you the series of requests that must be fulfilled before your page becomes visible. It will also tell you the size of each resource. Ideally, you want to minimize the number of dependent requests, as well as their sizes.
The following are the primary ways to improve the results of this diagnostic:
- Minimize the number of critical resources: eliminate them, defer their download, mark them as async, and so on (a sketch of deferring a script follows this list).
- Optimize the number of critical bytes to reduce the download time (number of round trips).
- Optimize the order in which the remaining critical resources are loaded: download all critical assets as early as possible to shorten the critical path length.
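If a render-blocking script can safely wait, one way to defer it without a plugin is the script_loader_tag filter. This is a hedged sketch; the 'some-plugin-script' handle and the function name are placeholders for whatever handle you identify in your own page source:

// Add a defer attribute to specific enqueued scripts so they stop blocking rendering.
function my_defer_scripts( $tag, $handle, $src ) {
	$defer_handles = array( 'some-plugin-script' ); // placeholder handle(s)
	if ( in_array( $handle, $defer_handles, true ) ) {
		return str_replace( ' src=', ' defer src=', $tag );
	}
	return $tag;
}
add_filter( 'script_loader_tag', 'my_defer_scripts', 10, 3 );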
Note that there’s no specific number of critical request chains that you need to achieve. PageSpeed Insights doesn’t count this audit as passed or failed. It is simply made available to help you improve page loading times.
Kinsta has written a deep dive on how to optimize the critical rendering path in WordPress.
Avoid enormous network payloads
Avoid enormous network payloads shows the total size in kilobytes of all resources requested by your page. The largest requests are presented first.
Based on HTTP Archive data, the median network payload is between 1,700 and 1,900 KB. To help surface the heaviest pages, Lighthouse flags pages whose total network payload exceeds 5,000 KB.
Aim to keep your total byte size below 1,600 KB. This target is based on the amount of data that can be theoretically downloaded on a 3G connection while still achieving a Time to Interactive of 10 seconds or less.
Here are some ways to keep payload size down:
- Defer requests until they’re needed.
- Optimize requests to be as small as possible. Possible techniques include:
- Minify and compress network payloads.
- Use WebP instead of JPEG or PNG for your images.
- Set the compression level of JPEG images to 85.
- Cache requests so that the page doesn’t re-download the resources on repeat visits.
Serve static assets with an efficient cache policy
There are two types of caching.
Server caching
Normally WordPress builds a page by assembling various pieces of the page (content, the theme design, external resources and images, plugin-specific functions, etc.). This requires multiple calls to the database.
Specialized server software and/or a plugin can cache these dynamically generated pages and serve them directly to visitors instead.
Server caching significantly lowers the loading time and the stress on your server. This single tool will often have more impact than any other. There are many caching plugins to choose from. Some are extremely simple and some quite complicated. Some offer extra features, such as minification and pre-caching pages. And some conflict with—or require special configuration to play nice with—popular plugins, especially membership and e-commerce solutions.
Browser caching
Browser caching is what it sounds like. Resources like images, fonts, media, CSS, and JavaScript load only once and then the browser (local computer) cache stores them. The next time one of those resources is needed it will be pulled from the user’s computer rather than downloaded from the server, thus greatly improving load speed.
To make browser caching work, you must configure your server to tell the browser how long it should store each type of resource. This is called Time to Live (TTL).
Browser caching is usually a good idea with little downside. Understand, however, if you change a cached resource, the user will not see the new version until the cache expires. Thus, you should set cache expiration times according to how frequently you plan to change a resource. For most blogs, HTML changes are frequent but images, CSS and JavaScript are not. So set a short expiration time for HTML and a long expiration time for the other resources.
In Step 6 above, I provided the code to enable browser caching, so you should have already modified the .htaccess file. Note that the sample code sets the TTL to 1 month + 1 day, but you can make that longer if you prefer.
PageSpeed Insights Audit
According to PageSpeed Insights, Serve static assets with an efficient cache policy flags all static resources that aren’t cached. However, the sample image shown lists every resource with a Cache TTL of 30 days, so I suspect the audit actually flags any resource cached for 30 days or less.
When a page fails the audit, PageSpeed Insights (Lighthouse) lists the results in a table with three columns:
- URL: The location of the cacheable resource
- Cache TTL: The current cache duration of the resource (TTL = Time to Live)
- Size: An estimate of the data your users would save if the flagged resource had been cached
Audit failures from third-party resources
If you used my code to set browser caching, your site’s resources should not fail this audit. But, you may still get failures from resources hosted on third-party servers, especially Google Analytics, Google AdSense, and Gravatars. That’s because the server that actually hosts the resources sets these expiration times. You cannot set or override another server’s settings.
As I mentioned, you should consider how often a resource might change in setting the cache policy. Well, it seems that Google determined that Analytics and AdSense code might change frequently so they set a short TTL. I wouldn’t think Gravatars change frequently but their TTL is short anyway.
So, what can we do to rid ourselves of these warnings? We can use a local copy of the required files instead of downloading them from the original server. I explained how to do this in the Localize External Scripts section. I also discuss how to deal with Gravatars later.
You can localize the adsbygoogle.js file, which will prevent it from failing this audit. Unfortunately, somewhere in the AdSense JavaScript code, one or more of the following files get loaded: osd.js, expansion_embed.js, templates.js, and show_ads_impl.js. While these thankfully won’t trigger a render blocking warning, they will trigger this audit. So, if you want to use AdSense you simply have to accept the page speed hit.
Avoid an excessive DOM size
Avoid an excessive DOM size reports the total Document Object Model (DOM) elements for a page, the page’s maximum DOM depth, and its maximum child elements.
PageSpeed Insights (Lighthouse) flags pages with DOM trees with:
- More than 1,500 nodes total.
- A depth greater than 32 nodes.
- A parent node with more than 60 child nodes.
DOM is complicated and you may be hard-pressed to address any of the failures flagged by this audit. Common culprits are JavaScript files, often ones used by your plugins. So, one place to investigate is the plugins you are using. If you find any that are causing problems, reach out to the developers and see if they can make improvements.
Another common culprit is your WordPress theme. Heavy themes can add large volumes of elements to the DOM. If this is the case, you may need to switch themes.
User Timing marks and measures
The User Timing API gives you a way to measure JavaScript performance. Unless you are a developer it’s unlikely you will use it, but one of your plugins might. If so, User Timing marks and measures will show you the results.
This audit is not a pass or fail test. It’s just an opportunity to discover a useful API that can help you measure your app’s performance.
Reduce JavaScript execution time
Reduce JavaScript execution time shows a warning when JavaScript execution takes longer than 2 seconds. The audit fails when execution takes longer than 3.5 seconds.
To help you identify the biggest contributors to execution time, Lighthouse reports the time spent executing, evaluating, and parsing each JavaScript file that your page loads.
We have discussed various tips that will address this issue, but summarizing:
- Only send the code that your users need (see the sketch after this list).
- Minify and compress your code.
- Remove unused code.
- Reduce network trips by caching.
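For the first and third points, a common WordPress tactic is to dequeue scripts on pages that do not use them. Here is a sketch under the assumption that a contact-form plugin registers handles named 'contact-form-js' and 'contact-form-css' and is only needed on the contact page; those handles are placeholders, so check your own site (Query Monitor, discussed later, is handy for this):

// Only load a plugin's assets on the page that actually uses them (handles are placeholders).
function my_trim_unused_assets() {
	if ( ! is_page( 'contact' ) ) {
		wp_dequeue_script( 'contact-form-js' );
		wp_dequeue_style( 'contact-form-css' );
	}
}
add_action( 'wp_enqueue_scripts', 'my_trim_unused_assets', 100 );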
Minimize main thread work
The browser’s renderer process is what turns your code into a web page that your users can interact with. By default, the main thread of the renderer process typically handles most code: it parses the HTML and builds the DOM, parses the CSS and applies the specified styles, and parses, evaluates, and executes the JavaScript.
The main thread also processes user events. So, any time the main thread is busy doing something else, your web page may not respond to user interactions, leading to a bad experience.
Minimize main thread work flags pages that keep the main thread busy for longer than 4 seconds during load. To help you identify the sources of main thread load, Lighthouse shows a breakdown of where CPU time was spent while the browser loaded your page.
Many activities, like parsing CSS and laying out the page, can keep the main thread busy. However, parsing, compiling, and executing JavaScript is often the biggest source of work on the main thread. So, apply all the JavaScript related optimization advice to improve this audit as well.
Keep request counts low and transfer sizes small
Keep request counts low and transfer sizes small reports how many network requests were made and how much data was transferred while your page loaded.
The Requests and Transfer Size values for the Total row are computed by adding the values for the Image, Script, Font, Stylesheet, Other, Document, and Media rows.
The Third-party column does not factor into the Total row’s values. Its purpose is to make you aware of how many of the total requests and how much of the total transfer size came from third-party domains. The third-party requests could be a combination of any of the other resource types.
The effect of high resource counts or large transfer sizes on load performance depends on the type of resource being requested.
Requests for CSS and JavaScript files are render-blocking by default. Requests for images aren’t render-blocking like CSS and JavaScript, but they can still negatively affect load performance. Inefficient loading of fonts can cause invisible text during the page load. If your HTML file is large, the browser has to spend more time parsing the HTML and constructing the DOM tree from the parsed HTML.
Useful WordPress Optimization Plugins, Tools and Services
Here are some notes regarding some of the plugins I tried. Note that just because they behaved a certain way for me on my site, that doesn’t mean the same will be true for you. In fact, I wouldn’t be surprised if some of my observations were a result of me misconfiguring something. Still, I think some of the observations are worth sharing so here goes:
a3 Lazy Load
a3 Lazy Load is a fully featured, easy to set up lazy load plugin for WordPress. Use the admin settings to easily define which elements are lazy loaded and when they become visible in the user’s browser. Note that many other optimization plugins include a lazy load feature, so you probably won’t need this one, but I list it just in case.
Autoptimize (AO)
Autoptimize is my preferred optimization plugin for maximizing a PageSpeed Insights score. When I first wrote this article five years ago, I couldn’t quite get to a 100 score using it. That was mostly because the consolidated CSS file it creates causes render blocking warnings. However, Autoptimize has steadily improved and there is now a solution to that issue (keep reading).
If you are not interested in Autoptimize, skip this entire section. Otherwise, read on for my experiences and tips for using it effectively.
First off, what does Autoptimize actually do? From the plugin FAQ page:
It can aggregate, minify and cache scripts and styles, and injects CSS in the page head by default. It can also inline critical CSS and defer the aggregated full CSS, move and defer scripts to the footer and minify HTML. You can optimize and lazy-load images, optimize Google Fonts, async non-aggregated JavaScript, remove WordPress core emoji cruft and more.
If you decide to use Autoptimize, I recommend first reading “So how does Autoptimize work anyway?,” the FAQ and “How to make Autoptimize (even) faster.” When you are ready to configure AO, consider the following things.
Minification
Note that some plugins add JavaScript and/or CSS directly to your page rather than referencing an external file. Typically, these same plugins don’t bother minifying the code they add. This can cause a problem if you choose not to aggregate inline JS or inline CSS in the Autoptimize options, as AO will not minify that code. If there is enough of it, you could find yourself with a PageSpeed Insights minify warning.
Aggregate Inline CSS/JS?
As I just mentioned, some plugins inject CSS and/or JavaScript directly into your page’s HTML. If you enable the aggregate inline CSS and JavaScript options in AO, you should pass all PageSpeed Insights minify tests. But, you also expose yourself to a significant risk, which is why these options are not enabled by default. Again, quoting from the FAQ:
Before Autoptimize 2.0.0, inline code was always optimized with all CSS pushed in the head-section and all JS at the end with a defer-flag. This often caused 2 problems; the priority of inline CSS got lost and inline JS could contain page- or request-specific code which broke Autoptimize’s caching mechanism leading to too many cached files and the minification running over and over. This is why as from AO 2.0 by default inline code is not optimized (except for those upgrading from previous versions). Additionally, to avoid inline JS breaking because jQuery is not available, js/jquery/jquery.js is excluded by default.
As with everything, the answer is not to “set it and forget it” but rather to test different configurations. It may be that your choice of plugins and theme don’t have much or any inline CSS and JS code so that you neither need to aggregate nor worry about excluding jQuery by default. Or, perhaps there is inline code but aggregating it doesn’t affect your site’s functionality. Or perhaps you can get away with aggregating one and not the other (CSS/JS). This is where using an XAMPP development site setup can really be helpful.
Cache Size Warning
Autoptimize caches the aggregated files it creates. You can manually purge these files but there is no automated system for doing so. If you configure the plugin properly, multiple pages on your site should end up using the same AO cached files. Thus, the overall size of the cache should not become a problem.
On the other hand, if you don’t get the configuration right, it is possible for AO to create new files for every page or even for the same page each time someone views it. How can that be? Well, it usually comes down to the previous discussion of inline CSS and JS. As I mentioned, some plugins (and, possibly even some themes) inject CSS or JS to achieve some functionality. Often the reason they do this (rather than just including an external file) is because something about that code needs to be customized to the circumstances encountered.
For example, I use the WordPress Popular Posts plugin. It offers a data sampling feature that allows the plugin to track only a subset of your traffic and report on the tendencies detected in that sample set. This is very useful for high traffic sites to reduce server load that the plugin might otherwise cause.
The problem with this is that the plugin uses a random number to customize its sampling code. This leads to pages loading with slight JS code differences each time someone views them. Autoptimize names the files it creates based on a calculation of the code in the page (JS code for JS files, CSS code for CSS files). Thus, even a single letter’s difference in that code will lead to AO creating a new file. You can imagine how quickly the number of files created will balloon.
Once the cache size surpasses a certain limit the normally green circle in your admin header bar will flash red. If you ever get to this stage, you need to do one of two things. The simplest is to disable the aggregate inline option (either for CSS, JS or both, but most likely JS). The better solution is to find the offending code and exclude it from Autoptimize. How can you do this?
Plugin author Frank Goossens offers a method, but essentially you want to view the source code for two different posts. Then do a search for .js files. You should see one Autoptimize generated file (something similar to autoptimize_0b24c156e7b70633ae3a0ac2049078a0.js) for each page. If these are the same on both pages, that is a very good sign. If not, there are probably some differences that you need to investigate. To do so, open each .js file in an editor that has a compare function (I use Notepad++). To make it a bit easier, since there will be long lines of text, do a find and replace for each semicolon (;) and replace it with a semicolon followed by a line feed (;\n) (you could use the } symbol instead of a semicolon).
How to do this will depend on your text editor. For Notepad++ you will use the Replace function but will need to make sure you check the Extended (\n, \r, \t, \0, \x…) option in the Search Mode box. Now run your compare function (found under the Plugins menu item in Notepad++). Then scroll through the comparison screen until you find the differences. Hopefully, something obvious will jump out at you as causing the difference (a post_id, a token, etc.). Find some variable in that code that is unique and common to both files and add that to the exclude list in the AO settings. Save, and repeat the whole process. Keep doing this until the AO external JS file for both pages is the same.
jQuery Dependencies
Autoptimize excludes the jQuery script by default, which means you are going to get a render blocking warning if you use the default AO setup. Why does it do this?
Again, we have inline code to thank. When plugins and themes correctly enqueue scripts that rely on jQuery there is a way to specify that dependency. AO respects this in its aggregation algorithm. If your theme and plugins use only external scripts, AO will aggregate these in a way that jQuery comes first. Thus, your site will probably function without any problems.
But, what if a plugin injects some JS code into your page and that code relies on jQuery? If you have now deferred the loading of all external JS files, including jQuery files, when your page loads, it is going to try to run the inline JS code before it has a chance to load jQuery. Whatever functionality the inline code was trying to achieve will fail. One solution to this is to use the aggregate inline JS option. If you do, keep in mind my earlier discussion of cache size.
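If you control the code in question (your own child theme or custom plugin), the cleaner fix is to attach the inline snippet to an enqueued handle so WordPress, and AO, know it depends on jQuery. A minimal sketch; the handle, file path, and inline snippet here are placeholders:

// Enqueue a script with an explicit jQuery dependency and attach inline code to it,
// instead of printing raw <script> tags into the page (handle and path are placeholders).
function my_enqueue_with_dependency() {
	wp_enqueue_script( 'my-widget', get_stylesheet_directory_uri() . '/js/my-widget.js', array( 'jquery' ), '1.0', true );
	wp_add_inline_script( 'my-widget', 'jQuery(function($){ $(".my-widget").addClass("ready"); });' );
}
add_action( 'wp_enqueue_scripts', 'my_enqueue_with_dependency' );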
Again, you really should test before enabling AO on a live site. When it comes to JavaScript issues, it is a good idea to use the Developer Tools feature in Chrome (other browsers have similar tools). These tools will quickly show you any JS warnings or errors. Most commonly, they result from something like jQuery not loading early enough.
Render Blocking Warning with Autoptimize
The way Autoptimize handles JavaScript code by default means you shouldn’t get any JS render blocking warning on your PageSpeed Insights test. CSS is another story. Of course, this isn’t a situation unique to AO; it is very difficult to beat the render blocking blues when it comes to CSS. Still, there are three ways around this.
- Inline all of your CSS. This is not generally recommended because, for most themes, that will be a lot of code. Usually, inlining all CSS will improve your PageSpeed Insights score. But, if doing so makes the size of your page large enough, you may fail other page speed tests. Regardless of the effect on your score, inlining all CSS may negatively affect your site performance. Why? Because every visit to a page on your site will require downloading the entire CSS again. In contrast, an external CSS file can be cached by the browser so it only needs to be downloaded once. I actually do inline all my CSS, but that’s because my theme doesn’t have a lot of CSS. Frank Goossens offers more information on inlining as well.
- Inline and defer. If you don’t want to inline all CSS, AO offers the option to just inline the critical CSS needed to render “above the fold” content and defer loading the rest as a single optimized CSS file. Unfortunately, determining what CSS is needed for above the fold content is not straightforward. It depends on where the fold is, which in turn depends on screen size. Things are further complicated by the fact that different pages on your site quite likely have different above the fold CSS requirements (home page vs. search results vs. category listings vs. individual posts).
If you want to give it a go, Chris Hooper offers an excellent Autoptimize inline defer CSS tutorial which highlights Jonas Ohlsson’s useful Critical Path CSS Generator tool. The AO FAQ also links to other potentially useful tools. Do note that you probably should strip out your render-blocking inline CSS from your core style.css file, but if you decide to stop using AO or this AO feature, you will then have to remember to add back what you took out. Since you are deferring the loading of that “duplicate” CSS, it might not be worth the effort.
- Defer only. Instead of trying to figure out which CSS is critical for above the fold rendering, you can use the AO inline and defer option but just leave the inline box empty. This is a bit of a cheat to appease the PageSpeed Insights god, but be aware it will lead to a flash of unstyled content (FOUC) because your site will load text before any CSS, which is why you probably shouldn’t do this.
Note: Dave Mankoff provides the clearest description I have read about the prioritize visible content rule:
The easiest way to think about this rule is this: Find all of the HTML elements (div, img, form, input, span, table, iframe, etc.) that are visible above the fold when your page is fully loaded. Make sure that your inlined CSS specifies the necessary information to figure out the size of those elements. This might include widths, heights, font sizes, font weights, floats, displays, etc. This includes content as well – if you have a p tag that starts empty and then gets filled in via AJAX, that causes a problem because it will cause the size of the p tag to change.
Generate data: URIs for images?
One great Autoptimize feature is the ability to replace small images with Base64 equivalents. This is the “generate data: URIs for images” configuration option. Doing this will prevent sprite warnings in page speed tests, which lazy loading should also prevent (unless they are above the fold). But, while lazy loading will help you get a higher score, Base64 will actually reduce the number of HTTP requests and thus improve real speed. If you cannot use this feature or if you have images that Autoptimize doesn’t encode you can create them yourself. One good free online tool is Base 64 Encoder.
This can also help when image dimensions aren’t specified. For example, I use a 1×1 transparent gif in my lazy load code, but it isn’t helpful to set the 1×1 dimensions because it is just a placeholder for the much larger image to be loaded. But, not specifying the width and height produces a page speed warning. Instead, I just replaced the reference to the image with its Base64 equivalent (which, for reference, looks like data:image/gif;base64,R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7) and the warning disappeared.
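If you want to generate such a data URI yourself rather than rely on an online tool, a few lines of PHP will do it. This is only a sketch; the file path is a placeholder:

// Turn a small image into a Base64 data URI you can paste into your markup or CSS.
$path = __DIR__ . '/images/placeholder.gif'; // placeholder path
$mime = mime_content_type( $path );          // e.g. image/gif or image/png
$uri  = 'data:' . $mime . ';base64,' . base64_encode( file_get_contents( $path ) );
echo $uri;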
404 Pages
A minor issue with Autoptimize is that it might lead to a lot of 404 pages, though you can serve a 410 response instead of a 404. From an interesting look at 404 versus 410:
4xx status codes mean that there was probably an error in the request, which prevented the server from being able to process it. Specifically, a 410 error means ‘gone.’ In Google’s terms. ‘The server returns this response when the requested resource has been permanently removed. It is similar to a 404 (Not found) code, but is sometimes used in the place of a 404 for resources that used to exist but no longer do.’ […] According to Google, when a page issues a 404 header response, it may sometimes still revisit the page to make sure it’s really invalid. Using the 410 response code will at least ensure that Google won’t ever go back to that page again and get to the more important pages on the site, helping overall crawlability.
Update: a recent version of Autoptimize offers an “enable 404 fallbacks” option to fix this issue.
Async JavaScript
Async JavaScript is brought to you by the developer of Autoptimize. It gives you full control over which scripts to add an ‘async’ or ‘defer’ attribute to, or to exclude entirely, to help increase the performance of your WordPress website. I am not really sure why this functionality isn’t just included in Autoptimize, but if you use AO, you should probably use this plugin alongside it.
Cache Enabler
Cache Enabler is a free and open-source caching plugin from KeyCDN. The disk caching engine is quite fast and reliable, while the WordPress multisite support is an advantage for those with networks of websites.
Clearfy
Clearfy is a new plugin for me and one I haven’t tried. It claims to offer more than 50 features for WordPress optimization. These relate to code cleanup, SEO, duplicate pages, security, widgets, updates, comments, Heartbeat, admin bar and notices, posts, scripts/styles, and privacy.
Complete Analytics Optimization Suite (CAOS)
I’ve discussed how localizing external scripts can improve page speed. Google Analytics is one of the best candidates for localization. After my initial work to do it myself, I discovered the Complete Analytics Optimization Suite plugin. This plugin enables you to completely optimize the usage of Google Analytics by locally hosting the GA JavaScript file and keeping it updated using wp_cron(). You can also easily anonymize the IP address of your visitors, set an Adjusted Bounce Rate and choose whether to load the tracking code in the header or footer.
Cloudflare
I have been on the fence about Cloudflare for a long time. It’s definitely a great, free service you should consider but there are a few things to think about first.
Features
Even free accounts offer some great features, the two most notable being a Content Delivery Network (CDN) and a traffic filtering firewall. The first can really improve your site performance across the globe and the second can help reduce your server load and prevent malicious attacks. There are also other performance enhancement features, like minification and Rocket Loader, though be careful with those, especially if you are already using one or more page speed optimization plugins or tools.
Speed
The CDN functionality of Cloudflare can enhance your site performance, especially for people visiting your site from a location geographically distant from your server. But, there is also a speed penalty to pay since Cloudflare acts as a reverse proxy, essentially acting as a middle man between visitors to your site and your site’s server. Typically, this performance hit will be most noticeable in the server response time (TTFB) since that is something your PageSpeed Insights score will probably penalize.
To be fair, this issue is somewhat hotly debated. Cloudflare even offers a compelling argument for why you should stop worrying about Time To First Byte (TTFB). One of the commenters on that article points out that TTFB matters for SEO, so I am not sure which consideration should win out. I should also point out that Cloudflare offers a feature called Railgun, which can significantly alleviate this connection speed issue, but Railgun is a premium feature not available on the free plans.
SSL
In the past, Cloudflare’s free account did not offer the Full and Full Strict SSL modes, but now they do, which is great. Full mode is for self-signed certificates, which are not recommended (but are better than nothing). Full Strict is for proper certificates, including the free ones from Let’s Encrypt. So, in terms of free functionality, things are looking good for using Let’s Encrypt with a free Cloudflare plan.
It should be mentioned that the free plan does not support a custom SSL certificate. What that means is that when someone visits your site, the certificate they will see in their browser is the shared UniversalSSL certificate from Cloudflare, not your SSL certificate.
The potential problem with this is that dozens or even hundreds of different websites will share that certificate. The connection is secure but anyone who inspects your SSL certificate will see all those shared domain names that have no relation to your site. If you are already using a free Cloudflare account with SSL, you can check this yourself in the browser or you can use the SSL Checker tool. Some people consider this a big deal and others, including myself, don’t.
Railgun
I briefly mentioned the premium Railgun feature. Normally it is quite expensive. But, a small number of hosting companies actually offer Railgun for free. The ones I am aware of are A2, ChemiCloud, HydronHosting, MDD Hosting, NameHero, and SiteGround. If you want to use Cloudflare and are willing to switch from your current provider, consider one of these.
Using such a host makes setup and management incredibly easy. With other hosts, you typically need to visit the Cloudflare site and use the setup tool to detect your server’s DNS settings. Then you need to go to your domain name registrar and manually change from your current name servers to Cloudflare’s. Hosts that partner with Cloudflare make these steps unnecessary: you can do everything in a couple of clicks from your control panel, with no name server change required. This means it is very easy to turn Cloudflare on and off if you want to test its effectiveness. You can also change the core settings, purge the cache, and put your site in development mode from your control panel.
Miscellaneous Cloudflare Issues
If you use Cloudflare with an optimization plugin that also offers a minification feature, don’t enable it on both. Likewise check to see if the Rocket Loader feature plays well with your preferred optimization plugins.
The Scrape Shield feature can protect your site from content scraping. To do so, Cloudflare will add a small image to your pages (that’s how it detects stolen content across the Web). Unfortunately, this will trigger the Serve static assets with an efficient cache policy audit.
Code Snippets
Code Snippets is an easy, clean and simple way to run PHP code snippets on your site. It removes the need to add custom snippets to your theme’s functions.php file.
Code Snippets provides a graphical interface, similar to the Plugins menu, for managing snippets. Snippets can be activated and deactivated, just like plugins. The snippet editor includes fields for a name, a visual editor-enabled description, tags to allow you to categorize snippets, and a full-featured code editor.
Disable Emojis
WordPress has default support for emojis, which generates an additional HTTP request on your site to load the wp-emoji-release.min.js file. And this loads on every single page. Disable Emojis keeps emoticons working in all browsers and emojis in browsers that support them natively. It simply removes the extra code bloat used to add emoji support in older browsers.
Disqus Conditional Load
If you use Disqus, it’s critical that you lazy load Disqus comments. The Disqus Conditional Load plugin will do that.
Fast Velocity Minify
Fast Velocity Minify is a plugin I didn’t know about when I wrote my original optimization article. I have not tried it, but it seems to be quite similar to Autoptimize. I am happy with Autoptimize so I will stick with it, but if you are still considering different optimization plugins, it might be worth a try.
Host Everything Local (HELL)
HELL lets you remove any script and/or style loaded by any theme or plugin from its <head> and (optionally) replace it with a local copy if it’s hosted on another domain. Besides that, you can increase performance for all other resources by using dns-prefetch, preconnect and preload. This plugin doesn’t have many users yet but it is by the same guy that made CAOS and OMGF so it should be solid.
Hyper Cache
Hyper Cache is purely PHP and works on every blog: no complex configurations are needed and when you deactivate it no stale settings are left around. It even creates separate caches for desktop and mobile devices.
I am not sure this plugin actually worked properly for me since I didn’t see any improvement in server response time. Since it seems to offer nothing but pure caching and there are other plugins I like for that, I didn’t bother investigating further.
JCH Optimize
JCH Optimize implements front end page speed optimizations. Features include: page caching; combining and minifying JavaScript and CSS files; HTML minification; GZip compression of the combined files; sprite generation to combine background images; the ability to exclude files from combining to resolve conflicts; deferred/asynchronous loading of the combined JavaScript file; optimized CSS delivery to eliminate render blocking; CDN/cookie-less domain support; and lazy loading of images.
Lazy Load for Comments
Lazy Load for Comments pretty much does what it says. Simply install it and look under the Discussion settings, where there are two options. By default, it is set to “On Scroll,” which is what most people will probably prefer. You can also set it to “On Click,” which will create a button for visitors to click before the comments load.
LiteSpeed Cache
LiteSpeed Cache is a popular (1+ million installs) caching and optimization plugin. I just started using it since my new host has LiteSpeed servers and it will only cache for those servers. However, even if you don’t have a LiteSpeed server, the optimization features are the most comprehensive I have seen. That means LiteSpeed Cache could be a good alternative to Autoptimize or one or more other plugins listed here. I have been happy with my AO setup so I haven’t tried the LiteSpeed Cache options but it’s on my to-do list.
loadCSS
loadCSS is an asynchronous CSS loader project. Ryan Fitton has a useful post about using it with WordPress, but it is now used by Autoptimize as well, so I recommend that as a better option. I am not sure which other optimization plugins use it, if any.
OMGF (Optimize My Google Font)
OMGF (Optimize My Google Font) uses the Google Fonts Helper API to automagically download the fonts you want to WordPress’ contents folder and generate a stylesheet for them. The stylesheet is automatically included in your site’s header and is 100% compatible with CSS and JS optimization plugins like Autoptimize or W3 Total Cache. OMGF can efficiently remove any requests to external Google Fonts. You can also enable Typekit’s Web Font Loader to load your fonts asynchronously and further increase your PageSpeed Insights score. There is also an option to preload the entire stylesheet or just the fonts loaded above the fold.
PageSpeed Ninja
PageSpeed Ninja is another plugin that seems promising, though I haven’t tried it myself. It offers lots of common optimizations (lazy loading, minification, leverage browser caching, GZIP compression, JS/CSS combining, defer loading, etc.). It also claims to be able to auto-generate above-the-fold critical CSS which, if true, is a very cool thing. The admin panel UI also seems quite slick. I would love to get some feedback if anyone gives it a try.
Pre* Party Resource Hints
I talked about the usefulness of preloading and preconnecting third-party resources in the Preload key requests section. Pre* Party Resource Hints is a useful plugin to help you do that. Install it, visit your site, and then go to the plugin page to find preconnect and preload suggestions. Choose the ones you want and it will do the hard work for you.
Query Monitor
Query Monitor is an incredibly useful debugging plugin for anyone developing with WordPress. It has some advanced features not available in other debugging plugins, but I find it most useful for seeing which scripts (both JS and CSS) any given page uses. It is also useful for seeing what kinds of database queries are being made. This is not a plugin you will need very often, but when you do it is a real gem.
Speed Booster Pack
Speed Booster Pack was once my favorite optimization plugin. I switched to Autoptimize for various reasons, but not because I didn’t like this plugin. In fact, it offers a lot of features and is very flexible about tweaking the options. You do, however, have to invest a small amount of time to learn exactly how.
Take the “move scripts to the footer” feature as an example. You can exclude up to four scripts, which can be useful for “above the fold” render blocking problems. But, you need to enter both a handle and the script to do so. And, the script should be the complete line of code, not just the URL. In fact, that field should really be labeled something like “code to implement” because it will insert whatever you put there into your page code where the original script would have been. This is actually great, because you could leave it empty and effectively dequeue a handle. Or you could reference a localized copy instead of an external file, or even add an async option if you prefer that to the deferred loading feature the plugin offers.
The main feature I would like to see added to this plugin is solid minification.
Swift Performance
Swift Performance is a premium caching and optimization plugin with some interesting features. Most notably, it lets you create critical CSS on the fly for your pages. Its proxy caching and CDN support are also potentially useful. Together, those features might make it worth the money, though I haven’t tried it myself.
W3 Total Cache (W3TC)
I previously used W3TC and it is immensely popular. I still regard it highly, but I think it is only worth using if you really take the time needed to learn its ins and outs (e.g., use manual minify rather than automatic and specify every script to minify in correct order, in the proper location and deferred or asynced).
W3TC is great for minifying JavaScript and HTML but if you use it to minify CSS PageSpeed Insights will show a render blocking warning. Thus, you should choose another plugin for that function (there is an inline CSS option, but I presume that minifies any CSS already inline rather than making any minified CSS inline). Many say W3TC and Autoptimize make a good pairing.
I am a fan of W3 Total Cache. I even wrote an article about using W3TC to speed up your website (though I think Baruch Youssin has written a better tutorial). But, that was before I decided to really learn more about WordPress. After doing so, I decided to challenge my assumption that W3TC was the best tool for the job. In my particular circumstances I realized it is not, but I still think it is a great tool and I encourage you to include it as one of the options you test for yourself.
WP Fastest Cache
I read good things about WP Fastest Cache but some key features (e.g., “create cache for mobile,” “minify JS,” “enhanced minify HTML,” “enhanced minify CSS”) are premium only. Honestly, there are some good free caching plugins so you have to offer something really great to justify a $40 premium version and this isn’t it.
WP Rocket
Everything I read about WP Rocket is very positive, but as a premium product I did not personally give it a try. I suspect one reason it performs so well is that it is doing more than just caching; it also offers lazy loading, pre-caching (a nice feature I would like to have), and minification.
WP Super Cache
WP Super Cache is the most popular caching plugin, and is made by the people behind WordPress itself. It is free and provides a nice balance of features, performance and usability. This is my go-to caching plugin.
Other Changes and Tips for Acing PageSpeed Insights
I think I have covered all the essential tools and techniques to get great speed and performance from your WordPress site. Still, here are a few more things I have come across that might also be useful.
Comments
A busy comments section can add a significant load to the web server whether you’re using WordPress comments or a third-party system. As such, comments can adversely impact the speed of your website and are thus a candidate for optimization.
One simple solution is to simply limit the number of comments displayed. You can do so from Settings > Discussion in the WordPress admin area. Look for the Other comment settings section. Select the checkbox next to Break comments into pages with and add a value for the number of comments you want to display with the initial page load.
My preferred solution, with or without limiting the number displayed, is to lazy load comments. This solution can help with both readability and speed performance. The previously mentioned Lazy Load for Comments plugin is a good option. Alternatively, you could get lazy loading plus many other useful features by replacing your default WP comments setup with the wpDiscuz plugin, though I haven’t tried it.
I should note that there are several show/hide post comments plugins that seem helpful for page speed. Unfortunately, most of these just toggle the CSS to hide and then display the comments, which load as normal. So, all the text and Gravatars are still there—you just can’t see them until you toggle them on.
Kinsta has a good article about optimizing comments with other tips worth considering.
Gravatars
WordPress uses Gravatars (Globally Recognized Avatars) for the commenting system. These are well-optimized so they won’t cause image optimization warnings. Unfortunately, their browser cache TTL is only 5 minutes, which fails the Serve static assets with an efficient cache policy audit. Gravatars also require multiple HTTP requests to the server that stores the Gravatar images. The more commenters you have with a registered Gravatar, the bigger this problem will be.
The way I see it, there are three ways to address the Gravatar performance issues.
Don’t Use Them
The simplest solution is to simply choose not to display avatars at all. Do this in the Settings > Discussion section of your WP admin panel.
Localize Gravatars
You can choose to use localized copies of your commenters’ Gravatars. As with so many things, the easiest option is to use a plugin.
Perhaps the simplest plugin option is WP User Avatar. This allows you to use one default local avatar for everyone, ignoring any personalized Gravatar that a commenter might have. Of course, this doesn’t give much of a personalized feel to your comments.
Alternatively, FV Gravatar Cache and Harrys Gravatar Cache will download local Gravatar copies for all your commenters. They then replace the HTML references in your comments section with those. I haven’t used either, but it seems like Harrys is better maintained; FV seems to have had a significant bug for more than a year without fixing it.
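If you would rather not add another plugin, you can also short-circuit Gravatar lookups with a few lines of code. This is a hedged sketch that serves one local image for every commenter (the image path and function name are placeholders, and like WP User Avatar’s default mode it gives up personalized avatars):

// Replace all Gravatar requests with a single locally hosted avatar image (path is a placeholder).
function my_local_avatar( $avatar, $id_or_email, $size, $default, $alt ) {
	$url = get_stylesheet_directory_uri() . '/images/default-avatar.png';
	return '<img src="' . esc_url( $url ) . '" class="avatar avatar-' . (int) $size . ' photo" width="' . (int) $size . '" height="' . (int) $size . '" alt="' . esc_attr( $alt ) . '" />';
}
add_filter( 'get_avatar', 'my_local_avatar', 10, 5 );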
Hide / Show Comments
Earlier I mentioned the idea of lazy loading comments and said that the various plugins to toggle the display of comments on a page don’t really help with page speed issues. Still, if you are interested in such functionality, even if only for presentation purposes, here is some code I adapted from Benedict Eastaugh’s hidden_comment_form.php script. You could place this in a custom plugin, as he does, or in your functions.php file, but I just added it directly to my comments file:
// -------------------------------------------------------------------------------------------------------
// --- Hide the comment form until the user clicks a link to reveal it.
// --- REF: https://gist.github.com/beastaugh/200615
// -------------------------------------------------------------------------------------------------------
function hcf_toggle_form_js() {
	$num_comments = get_comments_number();
	if ( $num_comments == 0 ) {
		$numcomments = __('There Are No Comments');
	} elseif ( $num_comments > 1 ) {
		$numcomments = 'There Are <span style="color:red">' . $num_comments . __('</span> Comments');
	} else {
		$numcomments = __('There Is <span class="redtext">1</span> Comment');
	}
	$text = __($numcomments.'. Click to Load »', 'jmb_hidden_comment_form');

	print <<<TOGGLE_FORM_JS
<script type="text/javascript">
(function() {
	var $ = jQuery,
		form = $('#have-comments').hide(),
		hidden = true;
	form.after('<div id="cf_toggle" class="aligncenter whitetext">${text}</div>');
	$('#cf_toggle').click(function() {
		if (!hidden) return;
		form.show();
		$(this).hide();
		hidden = false;
	});
})();
</script>
TOGGLE_FORM_JS;
}

if ( have_comments() ) hcf_toggle_form_js();
There are a couple of things to point out. First, I am hiding a <div> element with the id have-comments. I added that element to my comments.php form to surround just the existing comments I wanted to hide. You can use an existing <div> id that your theme uses or add one like I did. You can also choose to include the comment form in the hidden <div> or leave it visible at all times (as I do).
My toggle button has “View All Comments” as the text and is placed in a <span> with id cf_toggle. You will want to add styling for that <span> in your style.css file and you may prefer to make it a <div> instead. I use a <span> because I also include the number of existing comments in <H2> tags and I would like to keep that and the toggle button inline. Change anything and everything to suit your tastes.
Image Optimization
I had a few strange problems with images above the fold triggering the prioritize visible content warnings with the older version of PageSpeed Insights. Without going into details, it had something to do with the dimensions of the image, though not the file size. I confirmed this on all my sites by simply experimenting with smaller width and height values for my logo (not actually replacing the logo, just scaling it down).
I haven’t seen that issue with the current version of PageSpeed Insights. Perhaps that is because of the changes I previously made, or perhaps it is no longer a testing quirk.
If you do get any strange warnings for above the fold images, try two things. First, make sure you specify a width and height. If you aren’t, that might be enough to clear the warning. Next, if that doesn’t help, remove any images above the fold to see if they are the problem. If so, then start playing with different, smaller dimensions until you get a clear test result.
Also on the topic of images, the folks behind my image optimization plugin of choice, Ewww Image Optimizer, offer a few useful tips on dealing with images and page speed, including: stripping out metadata, configuring the plugin to optimize images that aren’t in your uploads directory (e.g., images from a plugin), and dealing with the WebPageTest image requirement.
I also recently read an article hawking the service Cloudinary (note that there is a free plan as well). While it is a sponsored post for a commercial product, the content is actually very informative and covers several ideas about how images affect site performance that I hadn’t considered before.
Limit Post Revisions
Post revisions consume space in your database. You can set the number of revisions WordPress saves for each article. Just add this line of code to your wp-config.php file.
define( 'WP_POST_REVISIONS', 3 );
This code limits WordPress to only keep your last three revisions of each page or post, and automatically discard older revisions.
Optimize Your WordPress database
In addition to regularly backing up your database, you should occasionally optimize it as well. Frankly, I doubt this will make much difference in your site performance but I mention it just the same. Two plugin options are WP-Sweep and Optimize Database after Deleting Revisions.
Remove Emoji Support
WordPress automatically converts emoji text to the corresponding graphic. It might seem insignificant, but the extra script and styles it loads do cause a small performance hit. Disable emoji support by adding the following lines (credit: Thomas Vanhoutte) to your child theme’s functions.php file.
remove_action( 'wp_head', 'print_emoji_detection_script', 7 );
remove_action( 'admin_print_scripts', 'print_emoji_detection_script' );
remove_action( 'wp_print_styles', 'print_emoji_styles' );
remove_action( 'admin_print_styles', 'print_emoji_styles' );
Remove Query Strings
Many caching solutions ignore external resources (scripts and styles) that include query strings. Removing query strings is a low-risk (but not no-risk) action and easy to do. Many optimization plugins include this feature, so if you are using one that does, simply enable it. You can also install a dedicated plugin for this, but the code required is so simple that I recommend doing it manually instead. Just use the following code.
function pu_remove_script_version( $src ) {
	return remove_query_arg( array( 'ver', 'rev' ), $src );
}
add_filter( 'script_loader_src', 'pu_remove_script_version' );
add_filter( 'style_loader_src', 'pu_remove_script_version' );
Serve Content from a Cookieless Domain
If your site sets a cookie, every HTTP request to that domain must include it. That is unnecessary overhead when requesting static content like images, JavaScript, and CSS, so some page speed tests will warn you or penalize your score for it. The easiest solution is to use a CDN. You could also create a separate domain or subdomain that serves static files without cookies, but if you use HTTP/2 there is little need to worry about this.
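If you do want to experiment with a separate static host rather than a CDN, here is a minimal sketch of the idea using the standard COOKIE_DOMAIN and WP_CONTENT_URL constants in wp-config.php; the domain names are placeholders, and you would also need to point the subdomain at your wp-content directory.
// Sketch only: scope cookies to the www host and serve static assets from a
// separate, cookieless subdomain. Both domain names are placeholders.
define( 'COOKIE_DOMAIN', 'www.example.com' );
define( 'WP_CONTENT_URL', 'https://static.example.com/wp-content' );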
Spiders & Bots
If you are on a shared server, numerous visits from search engine spiders and bots can increase the load on your server and possibly affect performance. This isn’t really something that will affect any of your test scores (unless you run a test when many spiders visit, an unlikely scenario). Still, as part of an overall optimization effort you should consider what bots you want to allow, how often they can visit, and which pages and/or content they can see. Chris Christensen’s article, “Increase WordPress Website Speed for Free (using .htaccess and robots.txt),” offers some good advice in all these regards.
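As a starting point, a robots.txt along these lines can slow down the more aggressive crawlers and keep them out of areas that don’t need indexing; the directives are illustrative only (and note that Googlebot ignores Crawl-delay, though Bing and Yandex respect it).
# Illustrative robots.txt; adjust for your own site.
User-agent: *
Crawl-delay: 10
Disallow: /wp-admin/
Allow: /wp-admin/admin-ajax.php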
Support for the If-Modified-Since header?
I quickly mentioned the idea of server headers above. The reason they came to my attention is that some of my sites were failing Varvy’s If-Modified-Since test. Other major page speed testing sites don’t include this test, so it probably won’t affect your scores. However, that doesn’t mean it isn’t important.
The basic explanation of the If-Modified-Since header is that your server should send a “304 Not Modified” response if the page has not been modified since it was last requested. This saves bandwidth and processing on both the server and the client, because only the header data is sent and received rather than the entire page being served again.
Usually, a well-configured server will already support the If-Modified-Since header and you won’t need to do anything. But if your server doesn’t, you can generally add either a Last-Modified or an ETag header yourself. This is something you might expect WordPress to add by default, but it doesn’t. You can, however, add either or both of those headers with a plugin or some custom code.
If you are using WP Super Cache, there is an advanced setting for “304 Not Modified” browser caching that should do the job without the need for any extra plugin or code. Otherwise, the SEO Header Easy plugin will most likely take care of this, but unfortunately that plugin is four years old, has no support threads, and has only 500 installs.
To manually add the Last-Modified header in your own customized plugin or your theme, there is some useful sample code at StackExchange. If you go that route, use the previously recommended REDbot or last-modified.com; either will issue the specialized request needed to test whether your server configuration sends the proper 304 response header.
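For the custom-code route, here is a minimal sketch of my own (not the StackExchange snippet mentioned above) that sends a Last-Modified header for single posts and answers repeat requests with a 304. The function name is made up; you would drop this into a small plugin or your child theme’s functions.php.
// Sketch only: send Last-Modified for single posts/pages and answer
// conditional requests with "304 Not Modified". Function name is hypothetical.
add_action( 'template_redirect', 'jmb_maybe_send_last_modified' );
function jmb_maybe_send_last_modified() {
	if ( ! is_singular() || headers_sent() ) {
		return;
	}
	$post = get_queried_object();
	if ( ! $post instanceof WP_Post ) {
		return;
	}
	$last_modified = gmdate( 'D, d M Y H:i:s', strtotime( $post->post_modified_gmt ) ) . ' GMT';
	header( 'Last-Modified: ' . $last_modified );

	// If the browser already has this version, reply with 304 and no body.
	$if_modified_since = isset( $_SERVER['HTTP_IF_MODIFIED_SINCE'] ) ? trim( $_SERVER['HTTP_IF_MODIFIED_SINCE'] ) : '';
	if ( $if_modified_since && strtotime( $if_modified_since ) >= strtotime( $last_modified ) ) {
		status_header( 304 );
		exit;
	}
}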
Summary
Thanks for reading this far!
This tutorial took me weeks to write (after even more weeks of learning). I included everything I could think of that might be relevant and useful. Perhaps I added too much? I tried not to get too technical and to explain things clearly, but please let me know if anything I wrote left you scratching your head.
If you found this guide helpful in your optimization efforts, I would really appreciate you sharing it with others.
Reference: Useful PageSpeed Insights and Other Articles
There are a lot of articles and tutorials on this topic. Below are some that I found helpful in my own learning process.
- “The Truth about WordPress Performance [#1/2]” by Andreas Hecht, takes a broad look at the pieces of the optimization puzzle.
- “The Truth About WordPress Performance [#2/2]” by Andreas Hecht, provides a detailed tutorial on steps to take to improve your site performance.
- “Why Trying to Get 95+ on Google PSI for Your WordPress Site Will Drive You Mad!” by Raelene Wilson is interesting primarily because it shows efforts that resulted in a less-than-stellar Google score (especially for mobile) but excellent scores on alternative tests (Pingdom and GTmetrix).
- “Get PageSpeed 100 by using W3 Total Cache and Autoptimize, Part 1” by Baruch Youssin offers a good, detailed tutorial that also demonstrates a lot of general knowledge about optimization issues. A good read to accompany this article, and proof that you can get a great score using a completely different set of tools from what I (or others) have used.
- “How to Increase Website Speed and Make Your Blog Load Faster” by Harleena Singh chronicles the author’s attempts to understand and improve her PageSpeed Insights score. The article, written from a tech novice’s perspective, offers specific tips for working with W3TC and CloudFlare.
- “Speed Up Your WordPress Site” by Chris Burgess
- If you’d like to know more about enqueuing, a couple of good (but perhaps a bit technical) articles are, “How to Include JavaScript and CSS in Your WordPress Themes and Plugins” by Japh Thomson and “The Ins and Outs of The Enqueue Script For WordPress Themes and Plugins” by Jason Witt.
- “Optimize CSS delivery” and “Eliminate render blocking css” by Patrick Sexton offer a good overview of issues involved with CSS.
- “More Optimization Techniques for Improving Website Speed” by Pedro Semeano
- “Benchmarking the Fastest WordPress Cache Plugins” by Philip Blomsterberg shows testing results for 20 different WordPress caching plugins (though using Nginx, not Apache). TL;DR: WP Rocket was the winner, with WP Super Cache, W3 Total Cache, and WP-cache.com all performing admirably.
- “WP Super Cache Vs W3 Total Cache Vs WP Fastest Cache Vs Hyper Cache Vs Quick Cache Vs Wordfence Security”
- “WordPress Caching Plugins – The What, How and Why” by Sorin
- “The Best WordPress Caching Plugins and Why Testing Them is So Important” by Kevin Muldoon
- “Speed up smaller websites by migrating them to CloudFlare Free” by James Sanders discusses how websites can benefit by signing up for CloudFlare Free.
- If you decide to use Amazon’s CloudFront CDN, two useful articles are Daniel Pataki’s “Moving WordPress Media To The Cloud With Amazon S3” and my own “Speeding Up Your WordPress Blog with W3 Total Cache (W3TC).” The first details how to set up Amazon S3 and CloudFront (both are needed) and use the Amazon S3 and CloudFront plugin. The second discusses using CloudFront with W3TC and, more usefully, offers detailed instructions for setting up a CNAME so that instead of your images showing amazon.com or cloudfront.net URLs you can have them show something like media.yourdomain.com, which is better for both SEO and usability.
- Alice Orru’s WP Rocket article explains the changes PSI made in late 2018.
- Ilya Grigorik is a web performance engineer at Google. His site has a lot of useful information, especially his 130 page Building Faster Websites crash course on web performance.
- Michael Bely at Research as a Hobby offers a good, investigative look at how useful TTFB is for judging hosting speed. Since this is a factor in most page speed tests, it is interesting to know more about it.
- Benjamin Estes gives a skeptical reading of each of the PSI components as well as providing a good high-level look at what affects page speed.
- Calculate how much mobile speed can mean for you. Do the math: You could earn more with a faster mobile website.
- If you are running your own VPS or dedicated cPanel server, Mike Jung has a wiki of useful (if a little dated) information on how to optimize WordPress on a cPanel server.
That’s an amazing piece of content, great work. You’re used to GTMetrix, Pingdom, and Page Speed… I would be really glad to have your feedback on
Looks interesting but just gave it a quick try and you need to register to see detailed test results. Not a big fan of that, especially since the other tests don’t require it.
Thanks for the feedback. I understand, but registering allows you to create free monitoring, and I can assure you we do not send a lot of emails (only monitoring alerts and weekly monitoring digests). Moreover, you’ll be able to disable our “marketing” emails from your profile.
I understand, but I am not interested in free monitoring and not sending a lot of emails can’t compete with not sending any, which is what you get with other tests that don’t require registration. If the other tests were not very good maybe it would be a harder decision, but those tests are all excellent. What does yours provide that is different? I don’t mean to be critical, I genuinely want to know. Perhaps knowing that would make myself and others more comfortable with registering.
“but those tests are all excellent”: of course we’re not as affirmative as you on this 😉
So, here is what you will get using dareboost.com:
– some checkpoints that are not available anywhere else, covering pro tips (for example, this http://calendar.perfplanet.com/2014/the-pain-of-duplicate-scripts/)
– some checkpoints dedicated to the technologies used by your website (you’re using jQuery? We’ll check performance best practices in your jQuery code)
– some “generic” checkpoints, customized for the technologies used on your website (you’re using WordPress? Your CSS is neither minified nor combined? We will tell you about WP Rocket)
– some metrics you will not find in the other tools listed below (performance timings, for example, and in a couple of weeks we will add SpeedIndex, VisuallyComplete, etc.)
– we’re not only dealing with performance; for us performance is a major part of web quality, but we look at all front-end technical issues (SEO, compliance, security, etc.)
And yet we provide this for free for homepages. We ask for an email because our product is young and we still need to add a lot of amazing features. Getting users when other tools are famous is not easy, so we want to be able to keep our users informed as our product evolves 🙂
Thanks for the additional information.
Well, I’m into this as an advanced newbie who was totally confused about what to do in the real world about all the minification issues, so I’d like to see this tool’s opinion. And free too, thanks. 🙂
Most thanks to JB for this AMAZING article/tutorial!! I am excited to get started.
Thank you for this in-depth piece of content! I’m not so technically skilled when it comes to coding, but I have still been trying to optimize my Google PageSpeed score for weeks. Just last week I discovered the Speedbooster plugin, and in combination with the W3 Total Cache plugin I was able to get a 100% score for my most popular pages about an hour after I installed it. Really impressed by this plugin. I haven’t been able to get 100% for my homepage, but that’s mainly because of external services (like Google itself!) and I’m not too worried about it because multiple speed tests confirm that my homepage loads in less than a second.
I actually arrived at your page because I was googling to find out whether the Speedbooster plugin is compatible with A/B testing in AdSense. There doesn’t seem to be much info on that. Would you happen to know the answer?
Good question but unfortunately I don’t know the answer. Congratulations on getting a 100 score!
If you decide to use WebPageTest, here is a great overview: “The SEO Expert’s Guide to Web Performance Using WebPageTest” https://moz.com/blog/the-seo-experts-guide-to-web-performance-using-webpagetest
For a couple of months I have been trying to figure out how to get 100/100 on Google PageSpeed with WordPress. I have read more than 15 articles about this, and this is one of the greatest articles on optimizing WordPress page speed I have ever read.
Dam* you Google, why don’t you put this article on page one of your search results?
If you don’t mind, could you use Twenty Fifteen as the default theme for your research?
I have tried optimizing the Twenty Fifteen theme, but only got 82 (mobile) and 92 (desktop) on Google PageSpeed with the Nginx web server.
Now I am trying to optimize with the LiteSpeed web server.
Another reference I found about 100/100 Google PageSpeed is mattyl.co.uk/2014/06/15/how-to-score-100100-with-google-pagespeed-insights-tool/ which uses the Twenty Fourteen theme and an Nginx web server module.
Thank you so much; I hope you always have wonderful days because of this article.
I know 18,871 words is a looooooooong article.
Again: “Dam* you Google, why don’t you put this article on page one of your search results?”
Sorry, I somehow missed this comment, but thanks for the kind words. Well, I am not actually a developer, so this was all a hobby and for my own benefit; the idea of using Twenty Fifteen as a default theme to replicate this work doesn’t sound like much fun to me 😉
I have read many people discuss the merits of Nginx/LiteSpeed and I am sure they aren’t necessarily wrong, but my basic attitude is to use what most people use, so when problems arise (they always do) it will be easier to find answers. If your technical ability is such that this isn’t a concern, by all means try them out, but as I have demonstrated, you CAN get 100/100 on regular old Apache on a shared or low-end VPS system. I don’t think a drastic web server switch is really necessary or even the best use of your time, and the result is far from certain anyway. Good luck with your efforts and report back if you have success.
That’s an amazing piece of work! It’s going to take me a little time to digest a lot of it!
You picked up on some remarks I’d made about HTTP/2 and I wanted to underline that again. A lot of the conventional wisdom about tuning web pages in general, and WordPress in particular, is really about overcoming shortcomings in HTTP/1. So, already we’re seeing that the speed tests are due a huge overhaul. In terms of mod_pagespeed, it’s true that it’s a powerful and (I think) under-utilised tool, but it’s also true that some of its features are fixing problems that aren’t there with HTTP/2. But I am certain that, once it’s installed, it’s an easy way to optimise images and deploy a CDN.
You only quickly mentioned database optimisations, which was the right amount of weight to give them. Most WordPress database optimisation advice is based on a seriously broken understanding of how databases work. I wrote another post on that:
https://www.alpine.link/making-wordpress-faster-database-optimisation/
And of course, if you’re page caching then database queries aren’t a significant part of page load time anyway!
Interesting read, Ian, thanks for sharing. And did you notice that last week Cloudflare announced they are activating HTTP/2 by default for all free plans? That seems promising and confirms what you originally wrote, that it is coming along.
I see CloudFlare have also published a blog post about this:
https://blog.cloudflare.com/http-2-for-web-developers/
That says more or less the same.