What Is HTTP/2 and Why It Matters for Your Website
If you have ever wondered why some websites feel snappy even when loading dozens of assets, a big part of the answer is HTTP/2. Released in 2015 as a successor to HTTP/1.1, HTTP/2 fundamentally changed how browsers and servers talk to each other.
The Problem HTTP/2 Was Built to Solve
HTTP/1.1 suffers from head-of-line blocking. A browser opens several TCP connections and sends requests one at a time per connection. If one request is slow, everything behind it waits. Browsers work around this by opening six to eight parallel connections per domain, but that is a clumsy patch.
The result is that loading a modern webpage, which might pull in sixty or eighty separate assets, involves a lot of waiting in queues. You can see this vividly in browser waterfall charts: long horizontal bars of idle time between one request and the next.
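The queuing effect described above can be sketched with a toy latency model. This is not a real network simulation: each request is assumed to cost one round trip of waiting, and bandwidth, TCP slow start, and transfer time are deliberately ignored. All the numbers are illustrative.

```python
# Toy latency model: n requests each cost one round trip (rtt_ms) of
# waiting. HTTP/1.1 queues them per connection; HTTP/2 keeps them all
# in flight at once on a single connection.

import math

def serial_time(n_requests, rtt_ms, connections):
    # each connection works through its share one request at a time,
    # paying a full round trip per queued request
    return math.ceil(n_requests / connections) * rtt_ms

def multiplexed_time(n_requests, rtt_ms):
    # all requests are issued immediately, so waiting collapses to ~1 RTT
    return rtt_ms

print(serial_time(60, 100, 1))    # 6000 ms: one HTTP/1.1 connection
print(serial_time(60, 100, 6))    # 1000 ms: the six-connection workaround
print(multiplexed_time(60, 100))  # 100 ms: full multiplexing
```

Even in this crude model, the six-connection workaround only divides the queue by six; multiplexing removes it.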
How HTTP/2 Works Differently
HTTP/2 replaces the text-based message format with a binary framing layer. Everything is broken into small units called frames. Multiple frames from different requests can be interleaved on a single TCP connection and reassembled at the other end. This is multiplexing, and it eliminates head-of-line blocking at the application layer entirely.
Multiplexing
With HTTP/2, a single connection can carry many requests and responses simultaneously. The browser does not need to wait for image A to finish downloading before starting to request stylesheet B. Both happen at the same time over the same connection. In practice this means fewer open sockets, less TLS overhead, and faster page loads, especially on high-latency connections.
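The frame-interleaving idea can be sketched in a few lines. The frame layout here is invented for illustration (a stream ID plus a chunk of data); the real binary format is defined in RFC 7540, but the reassembly principle is the same.

```python
# Toy sketch of HTTP/2-style multiplexing: chunks ("frames") from several
# responses share one connection, each tagged with a stream ID so the
# receiver can reassemble every response independently.

def interleave(responses):
    """Round-robin frames from each in-progress response onto the wire."""
    wire = []
    cursors = {sid: 0 for sid in responses}
    while cursors:
        for sid in list(cursors):
            i = cursors[sid]
            chunks = responses[sid]
            wire.append((sid, chunks[i]))        # one frame: (stream id, data)
            if i + 1 == len(chunks):
                del cursors[sid]                 # this response is finished
            else:
                cursors[sid] = i + 1
    return wire

def reassemble(wire):
    out = {}
    for sid, chunk in wire:
        out.setdefault(sid, []).append(chunk)    # group frames by stream ID
    return {sid: "".join(chunks) for sid, chunks in out.items()}

responses = {1: ["<html>", "</html>"], 3: ["body{", "}"]}
wire = interleave(responses)
print(wire)  # frames from streams 1 and 3 alternate on the wire
print(reassemble(wire))
```

Neither response has to wait for the other to finish, which is exactly the property HTTP/1.1 connections lack.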
Header Compression
HTTP headers carry a lot of redundant information. On every request, a browser sends the same Accept, User-Agent, and Cookie headers over and over again. HTTP/2 uses a compression scheme called HPACK that maintains a table of previously seen headers and replaces repeated ones with a short index reference. For a page with many requests this can reduce header overhead by 80 percent or more.
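The indexing idea behind HPACK can be illustrated with a deliberately simplified sketch. Real HPACK (RFC 7541) also ships a static table of common headers and Huffman-codes the strings, but the core trick is the shared table of previously seen name/value pairs:

```python
# Toy illustration of the HPACK idea: a table shared between sender and
# receiver lets a repeated header be replaced by a small integer index.

class HeaderTable:
    def __init__(self):
        self.table = []  # dynamic table of (name, value) pairs

    def encode(self, headers):
        encoded = []
        for pair in headers:
            if pair in self.table:
                encoded.append(self.table.index(pair))  # tiny index reference
            else:
                self.table.append(pair)
                encoded.append(pair)  # full literal, sent only the first time
        return encoded

enc = HeaderTable()
first = enc.encode([("user-agent", "Mozilla/5.0"), ("accept", "text/html")])
second = enc.encode([("user-agent", "Mozilla/5.0"), ("accept", "text/html")])
print(first)   # full literals on the first request
print(second)  # [0, 1] — two small indices on every later request
```

After the first request, the repeated headers shrink from full strings to single-byte-scale references, which is where the large savings on many-request pages come from.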
Server Push
Server push lets the server send resources to the browser before the browser knows it needs them. When a browser requests index.html, the server can proactively push style.css and app.js along with it, saving the round trips that would otherwise happen when the browser parses the HTML and discovers those dependencies. In practice, push proved hard to use well: servers often pushed resources the browser already had cached, and browser support has since been withdrawn (Chrome removed it in 2022), so the feature is best treated as historical.
Stream Prioritisation
HTTP/2 lets the client assign priorities to streams. The browser can tell the server that the render-blocking stylesheet needs to arrive before the large hero image. Servers that respect these hints can deliver better perceived performance even when bandwidth is constrained.
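The effect of prioritisation on send order can be sketched with a toy scheduler. Real HTTP/2 priorities are richer (RFC 7540 defined dependency trees with weights from 1 to 256, later replaced by the simpler urgency scheme of RFC 9218), and the stream IDs, resource names, and weights below are made up for illustration, but the ordering idea is the same:

```python
# Toy sketch of stream prioritisation: pending responses carry a weight,
# and the server drains higher-weight streams first.

pending = [
    {"stream": 3, "resource": "hero.jpg",  "weight": 16},
    {"stream": 1, "resource": "style.css", "weight": 256},  # render-blocking
    {"stream": 5, "resource": "app.js",    "weight": 128},
]

send_order = sorted(pending, key=lambda s: s["weight"], reverse=True)
print([s["resource"] for s in send_order])
# ['style.css', 'app.js', 'hero.jpg']
```

The stylesheet the page cannot render without goes out first; the decorative image, last.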
HTTP/2 Requires HTTPS
While the HTTP/2 specification technically allows unencrypted connections (known as h2c), all major browsers only implement HTTP/2 over TLS. In practice, enabling HTTP/2 means you must have a valid TLS (SSL) certificate. The protocol is selected during the TLS handshake itself via ALPN (Application-Layer Protocol Negotiation), so negotiating HTTP/2 adds no extra round trips on top of the TLS setup.
How to Check if a Site Uses HTTP/2
Open the browser developer tools, go to the Network tab, reload the page, and look at the Protocol column (right-click any column header to enable it if it is hidden). You will see h2 for HTTP/2 connections, http/1.1 for legacy connections, and h3 for the newer HTTP/3. You can also run curl with the verbose flag (-v) and look at the ALPN lines in the handshake output.
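The same check can be scripted with Python's standard library: offer both h2 and http/1.1 via ALPN during the TLS handshake and see which one the server picks. This is a minimal sketch; calling the function requires network access, and the host name in the comment is just an example.

```python
# Ask a server which protocol it is willing to speak, using the same ALPN
# negotiation the browser performs during the TLS handshake.

import socket
import ssl

def negotiated_protocol(host, port=443, timeout=5):
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])   # protocols we offer
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()  # "h2", "http/1.1", or None

# Example (needs network access):
# print(negotiated_protocol("example.com"))
```

A return value of "h2" means the server supports HTTP/2 on that hostname; None means the server did not negotiate either offered protocol.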
Does Your Hosting Support HTTP/2?
Most major hosting providers enabled HTTP/2 several years ago. nginx has supported it since version 1.9.5, Apache since 2.4.17 with mod_http2, and LiteSpeed natively. If your hosting control panel is recent you almost certainly already have it. Some older shared hosting environments still serve HTTP/1.1, and that is a genuine performance disadvantage.
The easiest way to check is to inspect the Protocol column in your browser devtools or run an online HTTP/2 test. If your site is still on HTTP/1.1 and you are on a modern hosting plan, confirm HTTPS is enabled first since that is a prerequisite, then contact your host about enabling HTTP/2.
HTTP/2 vs HTTP/3
HTTP/3 is the next step and replaces the TCP transport layer with QUIC, which is built on UDP. The main goal is to solve the remaining head-of-line blocking that occurs at the TCP level when a packet is lost. HTTP/2 solves application-level blocking but a single lost TCP segment still pauses all multiplexed streams until it is retransmitted. QUIC handles each stream independently so one lost packet only affects that one stream.
HTTP/3 is already widely deployed on major CDNs and growing fast. For most sites today, the practical improvement from HTTP/2 is large and the improvement from HTTP/3 on top is smaller. Getting HTTP/2 right first is the sensible path.