Surveillance on the Internet is pervasive and well-funded; it constitutes a planetary-scale attack on people who need the Net. The IETF is grappling with the problem but the right path forward isn’t clear.

This story is being reported, but (near as I can tell) not by anyone who’s on the actual mailing lists, reading what’s being said. So, here’s what’s up. The story is long and unsimple, and therefore so is this ongoing fragment; sorry.

On a perfect Internet · Everyone would be confident that their traffic is private; only they and whoever they’re connecting to could ever see it. They wouldn’t have to worry about what needs to be private and what doesn’t because everything would be.

Then, in civilized parts of the world, if a law-enforcement professional wanted access to someone’s personal information, they would follow a standardized legal process to get it. (The nature of such processes is a matter for discussion in the arena of politics not engineering.)

It’s going to be hard to get there. The Internet is really big, some of the technical infrastructure is old and inflexible, and quite a few of the people who use it and administer it aren’t prepared to do much (if any) extra work for the sake of privacy. Also, not everyone agrees on what that perfect world is like. Also, the attackers are smart and well-funded.

On the other hand, pervasive surveillance — “Let’s vacuum up all of everyone’s traffic because we might use it later” — is only really cost-effective when accessing the data is free, or at least really cheap. So, anything we can do to drive up the cost of surveillance will improve privacy and safety for the citizens of the Net. So, let’s do those things, and not let our ongoing work towards a more perfect solution get in the way.

Raw data · Let’s start with the statement of principle, a work-in-progress by Stephen Farrell: Pervasive Monitoring is an Attack.

The IETF discussion is taking place mostly on two mailing lists: HTTP Working Group and Perpass, where the volume, intensity, emotion, and complexity all run high. I’ve been sacrificing some slices of my personal life, notably including sleep, in an effort to keep up.

Individuals have attempted to bring some order to the discussion; one such effort is Paul Hoffman’s Encryption Terminology. Another is my own Pervasive Privacy Pros and Cons.

Goalposts · In the near/medium-term, there are two things the IETF could do to frustrate the attackers. Both focus on the Web’s HTTP protocol, because that’s where most of the interesting traffic is. There’s other work on SMTP and XMPP and SIP and so on, but let’s stay with HTTP.

At the moment, when you hit a Web address that begins with “http:”, it’s fairly straightforward for an attacker to fool you into connecting with the wrong server, and also to intercept and read the data going back and forth. We call this “plaintext mode”.

But when it begins with “https:”, your connection should be authenticated — you can be sure that the server in the address bar is who you’re really connected to — and encrypted — it’s prohibitively expensive, in effect impossible, for someone capturing the data to decode it. [Important note: “https:” connections can be spoofed and spied on if they’re implemented incorrectly, or if the attacker has stolen the server’s secret keys, or has planted malware on your PC/mobile.]
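To make “authenticated and encrypted” concrete, here’s a minimal sketch, using nothing but Python’s standard ssl module, of what a browser is doing under the covers when you hit an “https:” address. The helper name and its behavior are mine, not from any IETF draft.

```python
import socket
import ssl

def describe_tls(host, port=443):
    """Connect to host over TLS, verify its certificate against the
    system trust store, and return the certificate's subject fields.
    If authentication fails (a spoofed server, say), the handshake
    raises an ssl.SSLError instead of silently connecting."""
    ctx = ssl.create_default_context()  # CERT_REQUIRED + hostname checking
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["subject"]

# The default context is what gives "https:" its two properties:
# the verify mode enforces authentication, and the handshake sets up
# encryption before a single HTTP byte is sent.
default_ctx = ssl.create_default_context()
```

Point describe_tls at any “https:” host and you get back who the CA system says it is; the “plaintext mode” of an “http:” URL skips all of this.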

The first thing the IETF could do is “opportunistic encryption”: This arranges that sometimes, when you connect to an “http:” address, you’d get encryption (probably not authentication) without asking for it, or maybe even knowing about it.
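Nothing is nailed down yet, so treat this as a hypothetical sketch (names and mechanics are mine, not from any draft): the essence of opportunistic encryption is “try TLS, skip the authentication, fall back to plaintext if it fails.” That frustrates a passive wiretap, but, because nothing is authenticated, an active man-in-the-middle can still strip or impersonate.

```python
import socket
import ssl

def opportunistic_connect(host, port=80):
    """Hypothetical sketch, not a real IETF mechanism: attempt an
    unauthenticated TLS handshake, and silently fall back to plaintext
    if the server can't (or won't) play. Defeats passive eavesdropping
    only; an active attacker can still intercept, because nothing
    is authenticated."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False       # no authentication:
    ctx.verify_mode = ssl.CERT_NONE  # accept any certificate at all
    try:
        sock = socket.create_connection((host, port), timeout=10)
        return ctx.wrap_socket(sock), "encrypted-unauthenticated"
    except ssl.SSLError:
        # The server answered with something that isn't TLS.
        return socket.create_connection((host, port), timeout=10), "plaintext"
```

That asymmetry (cheap against passive attackers, useless against active ones) is exactly the cost-raising trade-off described earlier.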

The second has to do with what’s being called “HTTP/2.0”. For many years, the Web has been running largely on HTTP/1.1. Recently, there has been intense work on HTTP/2.0 by people at Google and the other browser builders; an interim version called SPDY is already in production, at Google among other places. Privacy advocates are proposing that HTTP/2.0 not be available in plaintext mode; authentication and encryption would be required. The idea is that since HTTP/2.0 has many advantages, including being faster, this will drive adoption of good privacy practices even by those who don’t particularly care.
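For the curious, the mechanics of “which protocol are we speaking?” ride inside the TLS handshake, via the ALPN extension; the protocol tokens here (“h2”, “http/1.1”) are the ones the drafts converged on, so consider this a sketch rather than gospel. The point is structural: if the new protocol is only ever negotiated inside TLS, then adopting it for speed means adopting encryption too.

```python
import socket
import ssl

def negotiated_protocol(host, port=443):
    """Ask a server, via the TLS ALPN extension, whether it will speak
    HTTP/2 or only HTTP/1.1. Because ALPN happens inside the TLS
    handshake, an HTTP/2-only web is, by construction, an encrypted
    web. Helper name is mine."""
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2", "http/1.1"])
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.selected_alpn_protocol()
```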

The controversy · The IETF is full of loudmouths, and it is very rare that any new thing, or improvement to an existing thing, sails through without bitter argument. This is as it should be; the Internet has become critically important to human civilization and we should make changes only with extreme caution.

But even by IETF standards, the pervasive-surveillance-pushback debate has been unusually fierce and voluminous.

There are a surprising number of people who, for a surprising number of different reasons, are generally not on board with either of the short-term strategies the IETF is looking at.

It’s not so much that it’s reasonable to be suspicious of people’s motives, it’s that in this scenario, it’s silly not to be. In particular, those who are arguing against privacy measures are subject to a very specific suspicion: That they are on the attackers’ side, either for reasons of principle or because they want to sell surveillance technology. And in fact employees of both the spooks and the surveillance-tech vendors are active in the IETF.

The factions · Suspicion aside, and bearing in mind that in the IETF people are supposed to speak for themselves, not on behalf of organizations, and also that opinions are highly fragmented, there are some roughly-identifiable opinion clusters, not organized or anything; describing them may help people understand what’s going on.

The Privacy Partisans are aggressive about doing whatever’s possible by way of counter-attack, and doing it now. This notably includes engineers from Firefox and Chrome, who say that for HTTP/2.0, they’re just gonna run authenticated and encrypted all the time, whatever anyone says.

The Cynics are unconvinced about the usefulness of the counterattack measures on the table. They think that the technology isn’t good enough, or the secret-key infrastructure is corrupt, or that Google and Facebook and so on should be seen as attackers, or developers are just too lazy and incompetent to get the deployment right.

The Enterpriseys are people who think that surveillance is necessary because there are situations where law or policy require it. Examples include prisons, businesses that want to control their employees’ Net access, and devops folks who want to monitor for malware or do load-balancing.

The Unconvinced just don’t see the need for aggressive privacy protection; they think it’s foolish to apply it to public static brochure-ware, or that it’s unethical to impose encryption on people without asking them, or that it’s insane to try to encrypt the Internet of Things: Printers and toasters and so on.

Disclosure: I’m a privacy partisan. I think the current technology, while imperfect, is good enough to be useful; I don’t want to live on an Internet optimized for prisons and printers; I think it’s perfectly possible to comply with the law and still protect people’s privacy; and I think everything should be private because otherwise things that are private are suspicious. But I understand that there are smart people who disagree, and I respect some (not all) of the people and some (not all) of the arguments.

What’s next? · Beats me. I’m encouraged by the fact that a lot of browser engineers are pretty determined to push the privacy rock up the hill. Even more important, I think I sniff a change in the popular mood, where not every abusive intrusion into ordinary people’s lives can be justified by robotically chanting “9/11”.

But doing privacy right is really, really hard. It would be unsurprising if this effort gets derailed, either by politics or engineering issues. It would be disappointing if the community let a derailment or two put out the fire. I think it may take years of hard work to make any significant improvement. I have zero doubt that the work is worth doing.

If you care about this stuff, and you’re competent with these engineering issues, you really might want to consider joining a couple of these mailing lists or, even better, pitching in on open-source infrastructure.



From: Janne (Nov 24 2013, at 18:43)

Is it possible to spoof connections if you have the cooperation of the CA that signed your certificate?

One thing we currently miss, I think, is the easy ability to use self-signed certs; the dire warnings that browsers throw up make them unattractive to use, even though they'll get you most (or all, if the CA is compromised) of the security of "proper" certificates.

It's the difference between being identified or pseudonymous; usually, on the net, a persistent pseudonym is as good as having a proper real-live identity.

Same with web servers: I care less who or what the web server actually is, and care more simply that the server I talk to today is still the same one I talked to yesterday and a week ago. Self-signed certs give you that.


From: Jim Ancona (Nov 24 2013, at 19:28)

I too am appalled by the ubiquitous surveillance that's come to light in the last few months. But it seems to me that if you're going to attribute bad motives to some of the WG members, you really are obliged to name names and make your case. This kind of post is just poisoning the discussion.


From: Grahame Grieve (Nov 24 2013, at 21:53)

I'm fine with requiring TLS for public facing web sites, but http is about far more than that. If http/2 is not limited to public facing websites, then the protocol needs to support plain text, and use of TLS is a policy not a protocol issue.

I'm definitely a cynic when it comes to enforcing TLS using self-signed certificates. The only outcome of that is that we'll pay more tax to pay for the MITM attack that replaces simple eavesdropping on plain text.


From: Leandro (Nov 25 2013, at 04:43)

Privacy Partisans are trying to solve a political/social issue using technology; it doesn't work that way. We can encrypt all traffic between the client and server, making it "impossible" to break, and still have privacy issues because the server sends all the data to surveillance entities.


From: Rich Salz (Nov 25 2013, at 07:55)

Am I being too sensitive (from years of reading your blog) about the use of enterprisey to mean those who would sell us out, as it were?


From: len (Nov 25 2013, at 08:20)

Chanting 9/11 seems silly. What about:

o The pedophiles using drop boxes. To its credit, Google is going after these folk. To their discredit, they will not face up to the consequences of ad-based piracy.

o Silk Road and its alleged connection to the inventor of Bitcoin, as well as the use of the web to hire contract killers. This fellow was caught because of surveillance.

You are making mistakes in always going to largest cases (surveillance for terrorism) and skipping the run of the mill police work. Also you are overvaluing the technology without regard to the consequences of its ubiquitous application.

I'm a privacy guy. Unfortunately the web culture is pervasively panglossian. I don't have an answer at this point except to say it will be a combination of policy/politics and technology. It will be good if those responsible for either end of that show each other sufficient respect as to the problems they have to solve. Both are difficult.

As the twig is bent.


From: Kevin Reid (Nov 25 2013, at 08:32)

The biggest downside of using HTTPS today is having to use (pay, and rely on the good behavior of) certificate authorities. I hope that always-encrypted-HTTP-2.0 will not mandate them.

Assuming sticking public keys in URLs is not going to happen, then it could at least use an SSH-style trust-on-first-use check as well as something like — both of these are ways to prevent MITMing without involving "trusted" third parties.


From: James Snell (Nov 25 2013, at 09:07)

Tim, I know that this is an issue you feel strongly about, but your passion is no excuse for blunder. The use of derogatory humor and "you must be a spy" type allegations to label and marginalize folks who may have a different opinion than you on this particular issue really isn't all that helpful.

I do believe that you omitted one particular faction: the group of us who feel that the use of TLS ought to be strongly advocated but remain optional, and that sprinkling magic TLS pixie dust everywhere isn't the solution that folks like you are making it out to be. There are deeper, more fundamental technical and policy problems that need to be dealt with. I call this particular faction "The realists who favor informed choice".

And before you question my motives any further than you already have, let me just say that I am, in no way, "on the side of the attackers" and I am not saying anything here on behalf of my employer or anyone else. I am quite certain that I value privacy just as much as you.


From: len (Nov 25 2013, at 13:32)

I wonder if it is worth making the distinctions of "private" and "secure" to make policy more precise and implementations less subject to legalisms? Is this a distinction that makes a difference that makes a difference? Just asking because this is not my area of expertise.

Snowden outed security practices, not systems designed or used to protect privacy. The problem comes down to maintaining reasonable privacy while ensuring reasonable security.

Disclosure: I think Snowden committed treason; some of you want your privacy but fully support a gross security violation. The irony or hubris of that should not be overlooked. So, without going into the good vs evil of that we find ourselves doing this because people use systems to detect security issues on systems where people expect, perhaps naively, that they have a right to privacy. These cannot be the same, thus SIPRNET, etc. They are conflated because of the ubiquity of the system at hand: the internet, itself a system not designed with security OR privacy in mind which was noted some decades ago at the emergence of the web and too lightly glossed over in the gold rush. Oopsie.

A category not listed in the blog is the "Off The Web" crew who might assert that one shouldn't be using the web to transmit information they expect to be secure OR private and their policy should reflect that.

Maybe some who spend time studying IETF protocols, encryption etc. and trying to apply them to privacy could look at policies designed to ensure security, a practice of classification and appropriate marking (see 5200.45). If we are to have some systems based on policy, appropriate marking and remedies based on punishments for violating appropriate marking could be useful.


From: ben (Nov 25 2013, at 14:39)

…What Kevin, Leandro, and len said, every bit.

Now that I'm done with the me-too bit…

I do want to defend the proprietor's refusal to name names, because [1] it's not as if the TF participants don't already know, [2] naming them unambiguously invites all sorts of unpleasantness from Anonymous et al. that can only poison the well for no gain at this stage, and [3] one wrong word maybe gets our proprietor an inconvenient degree of focussed attention from state organs of various kinds. (Off the top of my head I can think of eight who could feasibly take an interest, if I limit my stream of consciousness to the English-speaking states.)

Instead we get a self-evident statement along with the strong implication that these guys are considered stoolies by all the other participants. Works for me.


From: Jules (Dec 04 2013, at 06:04)

Few thoughts - I think you're off-the-mark with your statement that 'IETF people are supposed to speak for themselves not on behalf of organizations' - in that you're implying some people are speaking on behalf of the NSA, GCHQ, etc, while holding another personal opinion.

It strikes me as highly likely that employees of these organisations believe in what they are doing - and that it is for the greater good.

I also retain a scepticism towards the 'hard privacy' position, based largely on the historical problems of private banking, and the people who pursue it - some of whom are financially backing technological privacy options as they are absolutely fascinated by the idea of being able to further hide their finances from society. (Sorry, I mean 'the State').

There are bad actors on both sides.


November 20, 2013

By .

I am an employee of, but the opinions expressed here are my own, and no other party necessarily agrees with them.

A full disclosure of my professional interests is on the author page.