The IETF HTTP Working Group is in a special place right now. It held a meeting this morning at IETF 88 on encryption and privacy; the room was packed and, just possibly, needles that matter were moved.

What’s special, you ask? Well, most standards-writing committees labor in obscurity, ignored by the actual engineers who build the world. Or alternatively, ignored by the vendors that matter, while the rest try to use the standards process to claw their way into a closed market.

Not HTTP; the guys from Chrome and Firefox and IE are in there with hammers and shovels, building the stuff in parallel with writing the specs for it, pointing out spec problems with refreshing reports like “we tried it in release 16.2 and it broke 23% of clients.”

The goal · What the people I respect want is for everything (yes, absolutely everything) transmitted across the Web to be sent in encrypted form, and with a high degree of confidence in exactly which server you’re connecting to.

The main subject under discussion today was combining HTTP with encryption, now normally done with TLS, also known (semi-inaccurately) as SSL or HTTPS.

I wrote about this a while back in Private By Default.

By the way, the official minutes-in-progress are here.

ALPN · Stephan Friedl of Cisco presented ALPN, which lets an HTTP client signal, during the TLS handshake, which flavor of HTTP or SPDY it wants to use, so the server can choose which TLS cert to present based on that protocol. It’s implemented in Mozilla, Chromium, IE11, and Google’s Web-facing servers; there’s a patch available for OpenSSL.

It looks like a small solid step forward in greasing the infrastructure wheels we need to turn to get to a private-by-default future.
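To make the mechanics concrete, here’s a minimal sketch (mine, not the draft’s) of the client side of that negotiation, using Python’s ssl module; the “h2” and “http/1.1” protocol identifiers and the example.org host are illustrative assumptions, not anything the draft fixes.

    # Minimal ALPN sketch from the client side; host and protocol
    # identifiers are placeholders.
    import socket
    import ssl

    context = ssl.create_default_context()
    # The client's preference-ordered offer; the server picks exactly one.
    context.set_alpn_protocols(["h2", "http/1.1"])

    with socket.create_connection(("example.org", 443)) as raw:
        with context.wrap_socket(raw, server_hostname="example.org") as tls:
            # The offer rides in the TLS ClientHello, so the server knows the
            # protocol before choosing which certificate to present.
            print("negotiated:", tls.selected_alpn_protocol())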

HPACK · Roberto Peon presented on HPACK, Header Compression for HTTP/2.0, motivated by the CRIME attack, which worked around HTTP encryption by observing how the output of widely-used data-compression software changed in size.

I’m not enough of a crypto expert to have an opinion, but it seems like the right people are working on this; also it’s faster than the zlib-based compression code that’s currently being used.
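For a sense of the shape of the thing, here’s a toy sketch of the core idea (mine, and nothing like the real draft encoding, which also uses Huffman coding and a spec-defined static table): each header field becomes an index into a table both ends share, instead of being fed through one DEFLATE stream whose cross-request matches CRIME could probe.

    # Toy illustration only: replace repeated header fields with table
    # indexes rather than running everything through shared DEFLATE.
    STATIC_TABLE = [          # made-up excerpt; the real table is spec-defined
        (":method", "GET"),
        (":path", "/"),
        ("accept-encoding", "gzip"),
    ]

    def encode(headers, table):
        """Emit a table index when a field matches, else the literal field."""
        out = []
        for field in headers:
            if field in table:
                out.append(("indexed", table.index(field)))
            else:
                out.append(("literal", field))
                table.append(field)   # remembered for later requests
        return out

    table = list(STATIC_TABLE)
    print(encode([(":method", "GET"), ("cookie", "id=42")], table))
    # [('indexed', 0), ('literal', ('cookie', 'id=42'))]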

Another brick in the wall we’re building between us and our privacy enemies.

Opportunistic Encryption in HTTP · This was sort of the meeting’s main event, following the warm-ups. Basically, what we’d like to do is just make encryption mandatory for everyone all the time. But it’s tough to get there from here; there are many people who claim it’s too complicated or difficult or expensive for them; I happen to disagree with all of those arguments, but they’re out there. Also there are the fools who think you shouldn’t need to encrypt if you don’t have anything to hide, but I’ve already written on why they should be ignored.

Anyhow, the IETF is wondering if there might be a halfway point between where we are now and everything-encrypted-all-the-time. Mark Nottingham presented on Opportunistic Encryption. The idea is that when you hit a URI that begins with http://, the client and server co-operate to ignore that and do TLS anyhow. It’s controversial because the person using the client has no assurance of privacy and, depending on how you do it, you might not get the guarantee that real TLS provides of which server you’re talking to.

The point is that this increases the difficulty for passive attackers like Firesheep or the NSA.

There are a couple of ways you could go opportunistic; the one Mark’s proposing is based on ALPN (see above), while Paul Hoffman has another idea based on using DNS to see if a server might be willing to switch from HTTP to HTTPS.
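To show the flavor of it, here’s a rough sketch (mine, not either actual proposal) of a client that, handed a host from an http:// URI, quietly tries TLS first and falls back to cleartext; certificate verification is deliberately switched off, which is exactly the “no assurance of which server you’re talking to” trade-off.

    # Rough sketch of the opportunistic idea; the ports, the host, and the
    # lack of any discovery step (ALPN- or DNS-based) are simplifications.
    import socket
    import ssl

    def opportunistic_connect(host, timeout=3):
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
        ctx.check_hostname = False        # unauthenticated on purpose
        ctx.verify_mode = ssl.CERT_NONE
        try:
            raw = socket.create_connection((host, 443), timeout=timeout)
            return ctx.wrap_socket(raw, server_hostname=host), "TLS, unauthenticated"
        except OSError:                   # includes ssl.SSLError
            return socket.create_connection((host, 80), timeout=timeout), "cleartext"

    sock, how = opportunistic_connect("example.org")
    print(how)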

The hard question · Is opportunistic encryption preferable to just telling people to bloody well use TLS already?

There was lots of concern about fooling with the large-scale security model, and the debate ranged widely; I’ll excerpt some of the contributions that stuck in my brain.

Starting with my own: I pointed out that the effort to convince everyone to just use HTTPS already is actually making headway, and the reasons for not doing it are getting lamer and lamer. So whatever happens, let’s keep pushing that rock up the hill.

Alissa Cooper: “This is a gift we’re giving people, they don’t necessarily need to know they’re getting it.”

Ted Hardie: The risk is that opportunistic encryption makes it easy for people to skip doing real, first-rate TLS; it also increases risks due to active attackers.

Patrick McManus: Is TLS for HTTP URIs a good goal? There’s real value here; not too worried about server side being tricked into thinking relaxed is as good as non-relaxed; but let’s not give up on authentication. Once we have that we can do more. But if all we can do is TLS-relaxed, we’ve moved the Web forward.

Roberto Peon (I think): “Unauthenticated encryption is the new cleartext”.

Dunno who: Does the extra chatter to set up opportunistic encryption constitute a larger attack surface?

Roberto Peon: Why do people not deploy TLS? Because it’s slower, and that matters in E-commerce. But not with HTTP2.

This might be a useful tool to help people upgrade to TLS gradually, which removes some of the fear from this process.

Is this an opportunity for server operators to engage in self-delusion around their security models? Let’s be rigorous about what we permit, what we lead with, and about providing server authentication.

Microsoft guy: From a browser point of view, if it’s talking to an HTTP URI, the connection can’t be considered secure.

EKR: There’s benefit from moving the choice away from “Get everyone in the universe to do what we want them to do, or nothing” and offering this, which presumably would move some unknown percentage of traffic onto TLS.

Mnot: Some proportion of people just aren’t gonna get/deploy a cert, because it’s hard.

Keith Moore: Watch out for long-term effect... economics have changed such that if a passive attack is feasible, so is an active attack. Doesn’t think there’s a security benefit in forcing attackers into active mode.

Ted Hardie: Believes this makes passive attacks harder, but active attacks easier (this seems controversial), as a consequence of reducing the number of times people go for authenticated encryption. The issue is, will a lot of people say “I would have done real TLS, but now I’m just going to do opportunistic”?

Roberto Peon: If we put this out, we’ll never be able to take it back.

Googler: Mixed content (e.g. ads) adds huge friction to TLS adoption.

Roy Fielding: Doesn’t believe we can require TLS for HTTP 2; there are too many web servers on embedded chips. I could believe in a social requirement at the beginning of the spec. Don’t pretend it’s a technical argument; it’s a social argument.

Where we got to · So, in a slight modification of an IETF tradition, there was a reverse hum-off. Mark presented 5 options and, one by one, asked the people in the room to hum if they thought they couldn’t live with them. They were:

  0. Don’t know yet.

  1. Do nothing — hope that HTTPS gets more adoption.

  2. Opportunistic encryption without server authentication for HTTP URIs — protection against passive attacks only.

  3. Opportunistic encryption with server authentication AND downgrade protection (somehow) for HTTP URIs; no requirement upon HTTP/2 when not available.

  4. Require (MUST) secure underlying protocol for HTTP/2.0 (at least in the Web browsing case).

It should be noted that some of the people didn’t think #3 was a real option, largely because they weren’t convinced that downgrade protection was realistic.

But, given that: #0 and #1 got huge negative hums. #2 got a moderate negahum. Both #3 and #4 got really pretty weak negahums, and as of now would have to be ranked as leaders in the IETF consensus-building process.

The real news story here is #4. There has been repeated discussion of using the arrival of HTTP 2 (still under development) as the forcing function to move toward an all-encrypted Web, and up till now, the idea never came close to consensus support. But after today, it’s obvious to me that a lot of people really, really like it.



Contributions


From: David Magda (Nov 05 2013, at 18:14)

For opportunistic encryption: why not simply do an HTTP 301 response to the same URI but with HTTPS as the protocol scheme?

Also add an HSTS header so the client will default to HTTPS for all future requests:

https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security
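A minimal sketch of that combination, using Python's standard library (the example.org host, the port, and the one-year max-age are just placeholders):

    # Redirect every plain-HTTP request to HTTPS and include HSTS; note that
    # browsers only honor the HSTS header once the client is actually on HTTPS.
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class RedirectToHTTPS(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(301)
            self.send_header("Location", "https://example.org" + self.path)
            self.send_header("Strict-Transport-Security", "max-age=31536000")
            self.end_headers()

    HTTPServer(("", 8080), RedirectToHTTPS).serve_forever()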


From: Chips (Nov 05 2013, at 18:19)

Worrying about embedded chips sounds like betting against Moore's law.

You can buy a complete GSM featurephone (real price, unsubsidized) for $15 now (and that must be capable of encryption, on top of everything else).

By the time HTTP/2 becomes widespread you'll probably be able to buy a bucket of HTTP/2-capable chips for a dollar.

I'd rather worry about the CA system without cert pinning being worthless (AFAIK any government may coerce any CA to cooperate and break the whole system).


From: John Cowan (Nov 05 2013, at 20:37)

This is beating the air, for three reasons:

1) It's prudent to assume that the publicly known encryption algorithms have already been broken by various three-letter agencies.

2) Keyloggers at the client end and subpoenas in the cloud make so-called "end to end" encryption a dead letter.

3) Practical cryptanalysis beats analytic cryptanalysis anytime.


From: David Magda (Nov 06 2013, at 04:44)

I disagree with John Cowan.

1. There are plenty of algorithms which three-letter agencies are dogfooding to protect their own data. AES, ECDH, SHA-2 to name a few. All are cleared for, and are used for, protecting SECRET and even TOP SECRET data. If you use the Suite B stuff you'll probably be fine. If you don't trust those, there are standards from other governments (EU, Russia, Japan) that can be used. Just because seat belts are not perfect and there are situations where they won't save you doesn't mean they're completely useless. At the very least it takes extra processing (see 3 below).

2. Statistically you're more vulnerable to wire/glass tapping than to a key logger. And with subpoenas: as a non-American, I'd rather take my chances with the Watchers and having some oversight with court orders than all of my data simply being hoovered up like it can be now with plain-text. At the very least, if the Watchers have to involve third parties we increase the chances of push back and leaks, because more people are involved.

3. I'm not sure what this means exactly, but in general, I'd argue that forcing the Watchers to do extra work to get at your data is a good thing. Right now, for a lot of traffic, they can simply tap and analyze. If they had to decipher everyone's bits first, it would make blanket surveillance harder.


From: David Magda (Nov 06 2013, at 11:25)

BTW, a useful tool I ran across is the SSL Server Test by Qualys:

https://www.ssllabs.com/ssltest/analyze.html?hideResults=on&d=tbray.org

https://www.ssllabs.com/ssltest/

It's a good idea to check any sites you're in charge of (at least) twice a year as what's "safe" is a bit up in the air nowadays. There's a "best practices" guide with example configurations available. While they often talk about web servers, the principles should apply to anything that runs over TLS (SMTP, XMPP, etc.).

Pretty good weblog by one of the developers if you're interested in SSL/TLS:

http://blog.ivanristic.com


From: Grahame Grieve (Nov 06 2013, at 17:01)

"Require (MUST) secure underlying protocol for HTTP/2.0 (at least in the Web browsing case)"

That's a big "at least" - if it's not mandatory at the technical level, then it resolves to a policy issue. But I don't see how making it mandatory at the technical level helps any - in fact, I suspect it hinders.

The problem is not processing power, but certificate management. I presume that it would be perceived to ruin the point if self-signed certificates were allowed (MITM etc). So all these embedded devices that want to make use of the improved efficiency of http/2 have to do what for their certificates?

The quickest way to resolve this is to define a new secure web browser specification where "unsecured" access is treated the same as self-signed certificates, and needs user confirmation. That'll force everyone to get certs for their web-facing sites ASAP.


From: John Cowan (Nov 06 2013, at 19:33)

David Magda writes:

"There plenty of algorithms which three-letter agencies are dog fooding to protect their own data. AES, ECDH, SHA-2 to name a few. All are cleared for, and are used for, protecting SECRET and even TOP SECRET data."

So we are told. Is there independent evidence that this is actually true?

"Just because seat belts are not perfect and there are situations where they won't save you doesn't mean they're completely useless."

Granted, but the cost of wearing seatbelts (now that they are mandatory) is fairly low — and even now, lots of people don't bother.

"At the very least if the Watchers have to involve third parties we increase the chances of push back and leaks because more people are involved."

It's far from clear that leaks are a Good Thing: they may simply push the powers that be to use more disruptive methods.

"I'm not sure what this means exactly, but in general, I'd argue that forcing the Watchers to do extra work to get at your data is a good thing."

Practical cryptanalysis is jargon for obtaining keys by theft, blackmail, torture, etc.

"Right now, for a lot of traffic, they can simply tap and analyze. If they had decipher everyone's bits first, it makes blanket surveillance harder."

But not enough harder to be worth the costs of conversion, IMO.


From: Joseph Scott (Nov 07 2013, at 08:36)

If we are going to do a full version bump of HTTP, then requiring TLS for it seems like a potentially good thing.

In terms of getting better from where we are now, by which I mean getting more people to deploy TLS today, the best thing you can do is to improve the deployment of SNI-capable clients and servers. The biggest target there is IE on Windows XP. If we could somehow convince Microsoft to deploy a patch to support SNI in IE on Windows XP that would be a huge step forward.


From: Lennie (Nov 10 2013, at 10:43)

@Joseph Scott

It's not just IE on Windows XP; Google made a big mistake with the early Android versions: the browser on Android 2.x does not support SNI.

Android 2.x still had more than 30% of the Android market in August; that share is finally dropping pretty fast, at about 2% per month, so currently it is at 26%. If it continues at the same pace, it will take a year.

So I guess with the latest numbers it isn't as bad as it used to be.

Getting companies off Windows XP will probably take longer.

Judging by a somewhat larger site I'm running, IE is the most popular browser on Windows XP; it comes in fourth place overall, behind Chrome, IE, and Firefox on Windows 7.

So that will take time.


From: Lennie (Nov 11 2013, at 03:04)

After listening to the recording of the working group meeting, I think I'm with what Roberto Peon said: HTTP/2.0 with only authenticated TLS is the right solution for now. If HTTP/2.0 isn't enough incentive for people to use authenticated TLS, unauthenticated TLS could be added later.

Also, I wonder how a server can know whether the user is connected with an HTTP URL over unauthenticated TLS or with an HTTPS URL over authenticated TLS.

