The IETF HTTP Working Group is in a special place right now. It held a meeting this morning at IETF 88 on encryption and privacy; the room was packed and, just possibly, needles that matter were moved.
What’s special, you ask? Well, most standards-writing committees labor in obscurity, ignored by the actual engineers who build the world. Or alternatively, ignored by the vendors that matter, while the rest try to use the standards process to claw their way into a closed market.
Not HTTP; the guys from Chrome and Firefox and IE are in there with hammers and shovels, building the stuff in parallel with writing the specs for it, pointing out spec problems with refreshing reports like “we tried it in release 16.2 and it broke 23% of clients.”
The goal · What the people I respect want is for everything (yes, absolutely everything) transmitted across the Web to be sent in encrypted form, and with a high degree of confidence in exactly which server you’re connecting to.
The main subject under discussion today was combining HTTP with encryption, now normally done with TLS, also known (semi-inaccurately) as SSL or HTTPS.
I wrote about this a while back in Private By Default.
By the way the official minutes-in-progress are here.
ALPN · Stephan Friedl of Cisco presented ALPN, which lets an HTTP client announce, during the TLS handshake, which flavor of HTTP or SPDY it wants to use, so the server can choose the right TLS cert and protocol before any application data flows. It’s implemented in Mozilla, Chromium, IE11, and Google’s Web-facing servers; there’s a patch available for OpenSSL.
It looks like a small solid step forward in greasing the infrastructure wheels we need to turn to get to a private-by-default future.
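To make the mechanics concrete, here’s a minimal sketch of ALPN from the client side using Python’s standard `ssl` module (my illustration, not anything from the meeting; the host name is a placeholder):

```python
import socket
import ssl

def make_alpn_context(protocols=("h2", "http/1.1")):
    """Build a client TLS context that advertises the given application
    protocols, in preference order, inside the TLS handshake; the server
    picks one before any HTTP bytes are exchanged."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(list(protocols))
    return ctx

if __name__ == "__main__":
    # Hypothetical target; any ALPN-capable HTTPS server will do.
    ctx = make_alpn_context()
    with socket.create_connection(("example.com", 443)) as raw:
        with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
            # Whatever the server selected, or None if it ignores ALPN.
            print(tls.selected_alpn_protocol())
```

The point is that the negotiation rides inside the handshake itself, costing no extra round trips.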
HPACK · Roberto Peon presented on HPACK - Header Compression for HTTP/2.0, motivated by the CRIME attack, which worked around HTTP encryption by observing the effects of widely-used data-compression software.
I’m not enough of a crypto expert to have an opinion, but it seems like the right people are working on this; also, it’s faster than the gzip-style compression code that’s currently being used.
Another brick in the wall we’re building between us and our privacy enemies.
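For the curious, the compression side-channel behind CRIME is easy to demonstrate with nothing but `zlib` (this is my toy illustration of the principle, not the actual attack; the secret and header names are made up): when attacker-controlled text is compressed in the same stream as a secret, a correct guess compresses better, and the attacker can see that purely from ciphertext lengths.

```python
import zlib

SECRET = b"Cookie: session=sekrit42"  # hypothetical secret in the stream

def compressed_len(guess: bytes) -> int:
    # A passive attacker can't read the plaintext, but the length of
    # the compressed (then encrypted) stream tracks this value.
    return len(zlib.compress(guess + SECRET))

right = compressed_len(b"Cookie: session=sekrit42")  # matches the secret
wrong = compressed_len(b"Cookie: session=XXXXXXXX")  # doesn't match
print(right, wrong)  # the matching guess yields the shorter output
```

HPACK avoids this by design: it doesn’t feed attacker-controlled and secret header data through a shared general-purpose compressor.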
Opportunistic Encryption in HTTP · This was sort of the meeting’s main event, following the warm-ups. Basically, what we’d like to do is just make encryption mandatory for everyone all the time. But it’s tough to get there from here; there are many people who claim it’s too complicated or difficult or expensive for them; I happen to disagree with all of those arguments, but they’re out there. Also there are the fools who think you shouldn’t need to encrypt if you don’t have anything to hide, but I’ve already written on why they should be ignored.
Anyhow, the IETF is wondering if there might be a halfway point between where we are now and everything-encrypted-all-the-time.

Mark Nottingham presented on Opportunistic Encryption. The idea is that when you hit a URI that begins with “http:”, the client and server co-operate to ignore that and do TLS anyhow. The idea is controversial because the person using the client has no assurance of privacy and, depending how you do it, you might not get the guarantee that real TLS provides of exactly which server you’re talking to.
The idea is that this increases the difficulty for passive attackers like Firesheep or the NSA.
There are a couple of ways you could go opportunistic; the one Mark’s proposing is based on ALPN (see above), while Paul Hoffman has another idea based on using DNS to see if a server might be willing to switch from HTTP to HTTPS.
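A rough sketch of what an opportunistic client might do (my illustration of the general shape, not either proposal’s actual wire protocol; host names and ports are assumptions): try TLS with certificate checks relaxed, and fall back to cleartext if that fails. This defeats purely passive eavesdropping, but with no server authentication an active man-in-the-middle still wins.

```python
import socket
import ssl

def relaxed_context():
    """A TLS client context with authentication deliberately disabled,
    i.e. 'unauthenticated encryption' — private against passive
    listeners only."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False        # no identity check...
    ctx.verify_mode = ssl.CERT_NONE   # ...and no certificate validation
    return ctx

def open_opportunistic(host, tls_port=443, plain_port=80, timeout=5):
    """Return (socket, encrypted_flag): TLS if the server plays along,
    otherwise plain old cleartext HTTP."""
    try:
        raw = socket.create_connection((host, tls_port), timeout=timeout)
        return relaxed_context().wrap_socket(raw, server_hostname=host), True
    except (OSError, ssl.SSLError):
        return socket.create_connection((host, plain_port), timeout=timeout), False
```

The silent-fallback step is exactly where the downgrade worry in the discussion below comes from.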
The hard question · Is opportunistic encryption preferable to just telling people to bloody well use TLS already?
There was lots of concern about fooling with the large-scale security model, and the debate ranged widely; I’ll excerpt some of the contributions that stuck in my brain.
Starting with my own: I pointed out that the effort to convince everyone to just use HTTPS already is actually making headway, and the reasons for not doing it are getting lamer and lamer. So whatever happens, let’s keep pushing that rock up the hill.
Alissa Cooper: “This is a gift we’re giving people, they don’t necessarily need to know they’re getting it.”
Ted Hardie: There’s a risk that opportunistic encryption makes it easy for people to skip doing real first-rate TLS; it increases risks due to active attackers.
Patrick McManus: Is TLS for HTTP URIs a good goal? There’s real value here; not too worried about server side being tricked into thinking relaxed is as good as non-relaxed; but let’s not give up on authentication. Once we have that we can do more. But if all we can do is TLS-relaxed, we’ve moved the Web forward.
Roberto Peon (I think): “Unauthenticated encryption is the new cleartext”.
Dunno who: Does the extra chatter to set up opportunistic encryption constitute a larger attack surface?
Roberto Peon: Why do people not deploy TLS? Because it’s slower, and that matters in E-commerce. But not with HTTP2.
This might be a useful tool to help people upgrade to TLS gradually, which removes some of the fear from this process.
Is this an opportunity for server operators to engage in self-delusion around their security models? Let’s be rigorous about what we permit and what we lead with, and about providing server authentication.
Microsoft guy: From a browser point of view, if it’s talking to an HTTP URI, the connection can’t be considered secure.
EKR: There’s benefit from moving the choice away from “Get everyone in the universe to do what we want them to do, or nothing” and offering this, which presumably would move some unknown percentage of traffic onto TLS.
Mnot: Some proportion of people just aren’t gonna get/deploy a cert, because it’s hard.
Keith Moore: Watch out for long-term effect... economics have changed such that if a passive attack is feasible, so is an active attack. Doesn’t think there’s a security benefit in forcing attackers into active mode.
Ted Hardie: Believes this makes passive attacks harder but active attacks easier (this seems controversial), as a consequence of reducing the number of times people go for authenticated encryption. The issue is, will a lot of people say “I would have done real TLS, but now I’m just going to do opportunistic”?
Roberto Peon: If we put this out, we’ll never be able to take it back.
Googler: Mixed content (e.g. ads) adds huge friction to TLS adoption.
Roy Fielding: Doesn’t believe we can require TLS for HTTP 2; there are too many web servers on embedded chips. I could believe in a social requirement at the beginning of the spec. Don’t pretend it’s a technical argument, it’s a social argument.
Where we got to · So, in a slight modification of an IETF tradition, there was a reverse hum-off. Mark presented 5 options and, one by one, asked the people in the room to hum if they thought they couldn’t live with them. They were:
0. Don’t know yet.
1. Do nothing — hope that HTTPS gets more adoption.
2. Opportunistic encryption without server authentication for HTTP URIs — just to defeat passive attacks.
3. Opportunistic encryption with server authentication AND downgrade protection (somehow) for HTTP URIs; no requirement upon HTTP/2 when not available.
4. Require (MUST) a secure underlying protocol for HTTP/2.0 (at least in the Web browsing case).
It should be noted that some of the people didn’t think #3 was a real option, largely because they weren’t convinced that downgrade protection was realistic.
But, given that: #0 and #1 got huge negative hums. #2 got a moderate negahum. Both #3 and #4 got really pretty weak negahums, and as of now would have to be ranked as leaders in the IETF consensus-building process.
The real news story here is #4. There has been repeated discussion of using the arrival of HTTP 2 (still under development) as the forcing function to move toward an all-encrypted Web, and up till now, the idea never came close to consensus support. But after today, it’s obvious to me that a lot of people really, really like it.