Just setting up Let’s Encrypt and forgetting about it isn’t always enough. Sometimes we want to squeeze more performance out of our server, depending on the kind of data we serve.
Thankfully, there are a couple of easy tweaks we can do to speed things up.
HSTS or HTTP Strict Transport Security
This is primarily a security enhancement: it instructs web browsers to access the web server only over HTTPS, eliminating the possibility of falling back to an insecure HTTP connection.
Adding this directive is pretty safe and supported by all modern browsers (except Opera Mini).
You just have to add this in the server block of your website’s .conf file. I add it after the SSL certificates and ssl_dhparam:
add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
Reload/restart Nginx, and the changes will be live.
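To make the placement concrete, here is a minimal sketch of where the header sits inside a server block. The domain and certificate paths are placeholders following a typical Let’s Encrypt layout, not taken from this guide:

```nginx
server {
    listen 443 ssl http2;
    server_name example.com;

    # Certificate paths are examples (typical Let's Encrypt layout):
    ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_dhparam         /etc/nginx/dhparam.pem;

    # HSTS: force HTTPS for two years, including subdomains
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
}
```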
I think HSTS should be used by any public-facing website. And anyway, you probably already use HTTPS, as it’s hardly optional nowadays. It even helps with SEO a little!
ssl_session_cache in Nginx
The SSL session cache lets returning clients resume a session instead of performing a full SSL handshake on every new connection. This saves time and CPU resources on both ends, and it complements keep-alive connections nicely: keep-alive avoids handshakes within a connection, while the cache speeds up new ones.
So you really should set this up.
I like to specify this in my website’s .conf file, in the server block. Usually after the certificates and ssl_dhparam, I add this:
ssl_session_cache shared:MozSSL:10m;
The above setting should handle up to about 40000 sessions pretty well. Remember to reload/restart Nginx to apply the setting.
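Per the nginx documentation, one megabyte of shared cache holds roughly 4,000 sessions, which is where the ~40,000 figure comes from. A sketch pairing the cache with a session timeout — the timeout value is my own assumption, not from this guide:

```nginx
# 10 MB shared cache, usable by all worker processes (~4,000 sessions per MB)
ssl_session_cache shared:MozSSL:10m;

# How long a cached session stays valid; 1 day is an assumed, common choice
ssl_session_timeout 1d;
```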
I did notice a pretty much instant reduction in the TTFB. Pretty neat!
ssl_buffer_size in Nginx
I usually set this to 4k; in 90% of scenarios it’s a good compromise, giving a solid TTFB while remaining reasonable for larger responses.
Its default value is 16k, and some people recommend even setting it to 1400 bytes or a little lower to fit in one MTU.
While a low value like 1400 bytes would help avoid unnecessary delays from packet loss or jitter, it’s mostly beneficial for interactive traffic.
Some people even go up to 64k for large data streaming websites.
I suggest you start with 4k and test from there. You might find, like me, that 4k is just right: not too low, not too high.
Add the below line in the server block after the ssl_session_cache:
ssl_buffer_size 4k;
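For reference, the alternatives discussed above can be kept as comments next to the chosen value, so they’re easy to try later:

```nginx
ssl_buffer_size 4k;      # balanced: good TTFB, reasonable for larger responses
# ssl_buffer_size 1400;  # ~1 MTU: lowest latency, suits interactive traffic
# ssl_buffer_size 16k;   # nginx default: favors throughput over TTFB
# ssl_buffer_size 64k;   # sometimes used for large streaming payloads
```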
OCSP Stapling – ssl_stapling & ssl_stapling_verify in Nginx
OCSP stands for Online Certificate Status Protocol. Unlike older certificate revocation lists, it does not require the browser to spend time downloading and searching a whole list for the certificate’s status. The browser simply sends a query for one certificate and receives a response from an OCSP responder.
OCSP Stapling enhances the protocol by letting the website be more proactive in improving the client experience: the web server queries the OCSP responder itself, caches the signed response, and “staples” it to the TLS handshake so the browser doesn’t have to ask on its own.
These settings usually save the client a roundtrip to the OCSP responder, especially in uncontrollable network conditions, which can remove some of the delays experienced.
After the above lines from the previous tips, just add:
ssl_stapling on;
ssl_stapling_verify on;
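In practice, ssl_stapling_verify also needs a certificate chain to verify the stapled response against, and nginx needs a DNS resolver to reach the OCSP responder — two directives the two lines above don’t show. A sketch, where the chain path and resolver addresses are examples rather than values from this guide:

```nginx
ssl_stapling on;
ssl_stapling_verify on;

# Chain used to verify stapled responses (path is an example,
# matching a typical Let's Encrypt layout):
ssl_trusted_certificate /etc/letsencrypt/live/example.com/chain.pem;

# DNS resolver nginx uses to reach the OCSP responder (example addresses):
resolver 1.1.1.1 8.8.8.8 valid=300s;
resolver_timeout 5s;
```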
That’s it for now
The above settings and ideas should give you a decent boost in performance in specific scenarios. I strongly advise you to go with 4k for the ssl_buffer_size unless you actually have a solid reason to go lower/higher.
By the way, you can and should apply the above to your WordPress installation if you’ve followed my guide on installing WordPress on a LEMP stack.