If you’re a geek, you know what “HTTP” is. If you’re not, you’ve still seen those letters, lurking at the front of URLs everywhere. It’s one of the two or three things that makes the Web actually work. It’s being redesigned, perhaps. This telling of the story is mostly for geeks, but for the rest: If this effort is successful, you might notice some things run a little quicker. If it fails, you might notice some things running slower, or getting more expensive, and the Net growing a little less private and safe.
Back Story · When we talk about HTTP versions we use slashes: HTTP/0.9, HTTP/1.0, HTTP/1.1, and so on. The first version to see the light of day was HTTP/0.9. By the time the Web became popular, we’d mostly managed the transition to HTTP/1.0. HTTP/1.1 was specified in 1997 (RFC 2068, later revised as RFC 2616) and has been widely implemented, but I’d say it hasn’t been as much of an improvement on HTTP/1.0 as people hoped for.
But still, no matter how you measure it, HTTP has been the most successful application protocol ever invented, by a wide margin.
There’s long been discussion of HTTP over at the IETF, but in the past year a process aimed at delivering HTTP/2.0 has been formally launched.
HTTP’s Dirty Secret · The vast majority of application traffic across the Internet, including HTTP, has historically run over the TCP protocol, which lets a program on one computer establish a connection to a program on another and exchange arbitrary data. Two things about TCP connections turn out to be important. First, they’re kind of expensive and slow to set up (but reasonably cheap and efficient once they’re running). Second, they don’t last forever; on a global scale, the network is a fragile thing, and eventually your connection will go down.
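To make that setup cost concrete, here’s a minimal sketch using only the Python standard library. The echo server, loopback port, and request count are invented for illustration; it compares making a fresh TCP connection per request (paying the handshake every time) against reusing one long-lived connection:

```python
import socket
import threading
import time

def echo_server(listener):
    """Accept connections one at a time; echo every message until the client closes."""
    while True:
        try:
            conn, _ = listener.accept()
        except OSError:  # listener was closed
            return
        with conn:
            while True:
                data = conn.recv(64)
                if not data:
                    break
                conn.sendall(data)

# Throwaway echo server on an ephemeral loopback port.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(128)
port = listener.getsockname()[1]
threading.Thread(target=echo_server, args=(listener,), daemon=True).start()

N = 100

# A fresh TCP connection per request: handshake plus teardown every time.
start = time.perf_counter()
for _ in range(N):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"ping")
        c.recv(64)
fresh = time.perf_counter() - start

# One long-lived connection reused for all N requests.
start = time.perf_counter()
with socket.create_connection(("127.0.0.1", port)) as c:
    for _ in range(N):
        c.sendall(b"ping")
        c.recv(64)
reused = time.perf_counter() - start

print(f"fresh: {fresh:.4f}s  reused: {reused:.4f}s")
```

Even on loopback, where the handshake is as cheap as it ever gets, the per-request-connection loop comes out slower; over a real wide-area round trip the gap is far larger.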
It turns out to be really hard to design applications to use the Net in a way that works around connections that arbitrarily break under you. It also turns out that HTTP neatly solves the second problem, and lets you build Web apps that scale beautifully and degrade gracefully; but the solution involves making tons and tons and tons of different TCP connections. HTTP is a poor citizen of the TCP-centric Internet.
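HTTP/1.1’s partial answer to the connection problem is the persistent (keep-alive) connection, which lets several requests share one TCP connection. Here’s a hedged sketch with Python’s standard library; the throwaway local server, its `/` resource, and the `hello` body are assumptions for the demo:

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enables keep-alive responses

    def do_GET(self):
        body = b"hello"  # made-up payload for the demo
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Throwaway server on an ephemeral loopback port.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One HTTP/1.1 connection carrying several requests: the underlying
# TCP connection is set up once and reused.
conn = http.client.HTTPConnection("127.0.0.1", port)
bodies = []
for _ in range(3):
    conn.request("GET", "/")
    resp = conn.getresponse()
    bodies.append(resp.read())  # must drain the body before the next request
conn.close()
server.shutdown()

print(bodies)
```

Keep-alive helps, but it’s still one request at a time per connection, which is why browsers open a handful of parallel connections per host and why SPDY-style multiplexing is attractive.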
This has driven the IETF greybeards crazy for decades, but they’ve gotten no relief, because HTTP’s application-level advantages are so huge that it’s being used for more and more and more every day, including on that slick little smartphone in your pocket. If HTTP were a better Internet citizen, your battery might last longer.
The HTTP/2.0 Process · The working group is still settling its objectives, but I think it’s fair to say that two dominate: better performance on the TCP-based Internet, and better security, where “security” is a big multi-dimensional word referring to several different big multi-dimensional problems.
There is a back story here, and it’s spelled “SPDY”.
This is a drop-in replacement for HTTP networking that should be invisible to both clients and servers. It was cooked up at Google, is built into Chrome and Firefox, and a lot of the servers at Google use it. Mike Belshe, who’s sort of seen as the guy behind SPDY, used to work for Google but doesn’t any more. [Update: In the comments, Mike makes it clear he’s still participating.]
The State of Play · I could go on and on about the proposals and what people are saying, but I think readers here who actually care shouldn’t listen to me, they should consult the primary references. So:
The mailing list where discussion happens, from which:
Expressions of interest from: Alibaba, curl, Facebook, Firefox, Google, haproxy, Jetty, Microsoft, Squid, Twitter, and Varnish.
This should be fun to watch.
From: Julian Reschke (Jul 15 2012, at 11:52)
It should be mentioned that the same IETF Working Group currently works on a set of documents *revising* HTTP/1.1 (as defined in RFC 2616).
If you want to help, please see the HTTPbis Wiki, review the documents, and provide feedback to the mailing list!
From: Dave Walker (Jul 15 2012, at 12:02)
I remember when I first saw an HTTP specification (1996 or thereabouts), thinking that there should really be either two parallel versions of it (or two separate protocols): one looking much as it did, involving "thin" transactions and session statelessness, and the other being stateful and involving "thicker" transactions within a session.
I'm hoping this piece of work will be it.
From: Bud Gibson (Jul 15 2012, at 12:17)
My observation: Google has been a catalyst for a lot of new technologies. For them to get wider adoption, others are going to have to have at least part ownership too.
When the original work was being done as a low profile endeavor at academic institutions, progress was relatively rapid. Now that a lot is at stake and the players are all paying attention, progress is slower.
From: Lars Vogel (Jul 15 2012, at 12:31)
Nice that SPDY triggers an improvement of HTTP. From your post it is unclear if Mike Belshe is involved in the new HTTP specification process.
From: David Magda (Jul 15 2012, at 13:08)
If part of the problem is the build-up and tear-down of connections between machines, could SCTP help in that regard?
It supports multi-streaming out of the box and is already present in a number of operating systems. Unfortunately, it's not in Windows or Mac OS X (even though it's in FreeBSD, which Apple bases a lot of stuff on).
Of course one can run SPDY over SCTP as well:
From: Mike Belshe (Jul 16 2012, at 12:32)
Just to clarify - I'm still involved with SPDY quite a lot. I will be contributing future drafts of the SPDY I-D as necessary, and I still communicate with many people (both at Google and external to Google) about SPDY.
Google's commitment remains strong, and they are doubling down on protocol efforts, not retreating. SPDY has proven that if we build something great, we can still change the protocol world.