You should be able to exchange messages privately using the Internet. My profession should be working on making this easy for everyone, including non-geek civilians who shouldn’t need to understand cryptography.

I’ve been thinking about this a lot and even slinging little bits & pieces of code. This will probably turn into a series; the next piece is Where Is Your Data Safe?

It’d be helpful to define terms. So let’s start with a question: How private do you want to be? There are three obvious levels, which I’ll call Basic, Common, and Strong.

Basic Privacy · We can all agree that we want privacy from random strangers sniffing WiFi signals, from crooks looking for bank account numbers, and from agents of the Chinese government looking for dirt on dissidents. This is your Basic entry-level privacy.

[Geek note]: HTTPS-by-default with good endpoint implementations gets us most of the way on this one.
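
[Geek note, continued]: Here’s a little Python sketch, mine and purely illustrative, of the client half of a good endpoint implementation. The point is that the standard library’s default TLS context already verifies the certificate chain and the hostname, which is most of the machinery Basic Privacy needs.

    import socket
    import ssl

    # The default context verifies the certificate chain against the
    # system's trusted CAs and checks the hostname; both matter if the
    # threat is a stranger sniffing WiFi.
    ctx = ssl.create_default_context()

    with socket.create_connection(("example.com", 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
            print(tls.version())                 # e.g. "TLSv1.3"
            print(tls.getpeercert()["subject"])  # who the other end claims to be

A site that still serves plain HTTP, or a client that switches verification off, gives this layer away.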

[For civilians]: You should get Basic Privacy for free wherever you go on the Net. If you don’t, the places you’re going are being unprofessional.

Common Privacy · The label is chosen to suggest both Common law and “Common sense”.

Normally, you expect that when you shut your front door, what happens behind it is private. But if a government employee gets a judge to sign a warrant authorizing some door-busting, your privacy is over. And many people are OK with that, particularly those with the good fortune to live in civilized countries. Even granting that there are occasional bad-apple cops and rogue judges, the system can on balance work satisfactorily. In a well-run society, this Common level of privacy should meet most reasonable needs.

But at a time when governments run projects like PRISM and BULLRUN, whose goal is essentially to record everything about everybody, and when those programs may even be judged legal, it’s reasonable to wonder whether Common Privacy is good enough.

[For civilians]: Your email should be private unless someone with a warrant shows up at your email provider’s office. It shouldn’t be legal, let alone common, for intelligence agencies to vacuum up everything, but empirically, they’re trying hard to do just that.

[Geek note]: Message encryption doesn’t stop government employees if private keys are stored online, and online keys certainly make things more convenient (and convenience is very important).

Strong Privacy · This is the kind of privacy where nobody else, and I mean nobody, can read messages that are meant for you, and furthermore, you can be sure who sent them. Technology can’t break this lock.
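
[Geek note]: Mechanically, Strong Privacy is authenticated public-key encryption with the private keys generated and kept on the endpoints. Here’s a rough sketch using the PyNaCl library, just to show the primitive exists off the shelf; nothing about it is specific to any product I’m proposing.

    from nacl.public import PrivateKey, Box

    # Each party generates a key pair locally; the private halves stay on
    # their own machines and are never uploaded anywhere.
    alice_sk = PrivateKey.generate()
    bob_sk = PrivateKey.generate()

    # Alice encrypts to Bob's public key; because the Box is also built from
    # her private key, Bob gets confidentiality and sender authentication
    # in one operation.
    ciphertext = Box(alice_sk, bob_sk.public_key).encrypt(b"meet at noon")
    plaintext = Box(bob_sk, alice_sk.public_key).decrypt(ciphertext)
    assert plaintext == b"meet at noon"

The math isn’t the problem. Getting key pairs onto civilians’ devices, and knowing whose public key is whose, is.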

[Caveats:] Of course if the spooks really seriously think you’re really seriously bad, they’ve already planted a camera where it can see your screen and keyboard, so encryption is irrelevant. And while they can’t break the lock, they might persuade you to unfasten it, with thumbscrews or the threat of jail time.

On the downside, Strong Privacy makes it harder for the cops to track down the actual bad guys. Harder enough to significantly decrease public safety? The security establishment says so; but then they would, wouldn’t they?

On the upside, it seriously gets in the way of abusive officials. If you live in a place like China or Iran with an oppressive police state, Strong Privacy is a life-and-death (literally) issue. And if you’re in a reasonably civilized democracy but you’re worried it might swerve off the rails, or think parts of it already have, you might want to go with Strong Privacy, just in case.

It’s worth noting that if you have Strong Privacy, you also get Basic and Common Privacy as a side-effect.

[For civilians]: You can have Strong Privacy now, but it requires getting comfy with a few geek incantations and using not-quite-ready-for-prime-time software.

[Geek note]: The Web of Trust idea is stone cold dead, reasonable people are frightened of CA-based approaches, and Crypto Won’t Save You (mostly) anyhow. But I’m pretty convinced there’s a path open to building increasingly-civilian-usable Strong Privacy.
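
One candidate way around both corpses, sketched here hypothetically and not as a worked-out proposal, is out-of-band fingerprint comparison: each person’s software shows a short digest of their public key, and two humans compare the digests in person or over the phone. No CAs, no Web of Trust, just arithmetic plus a conversation.

    import hashlib
    import os

    def fingerprint(public_key_bytes: bytes) -> str:
        """A short, human-comparable digest of a public key (illustrative only)."""
        digest = hashlib.sha256(public_key_bytes).hexdigest()[:20]
        # Chunks of four are easy to read aloud over the phone.
        return " ".join(digest[i:i + 4] for i in range(0, len(digest), 4))

    # Stand-in for a real public key; in practice it would come from whatever
    # encryption library runs on each person's device.
    example_key = os.urandom(32)
    print(fingerprint(example_key))  # something like "3f9a 01bc 77de 45aa 90c1"

Whether civilians will actually do that comparison is, of course, exactly the usability problem.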

One take-away · It seems that if our spooks weren’t doing egregiously-intrusive things like PRISM and BULLRUN, it’d be perfectly reasonable for a law-abiding citizen to settle for Common Privacy. But since they are, it’s plausible that they’ll drive the populace into the arms of Strong Privacy, which may, on balance, decrease public safety. Oh well.

Open questions · There are lots: Is Strong Privacy even possible? If so, practical? If so, can it be made routinely usable by civilians? If so, what technology do we need to invent? And one that I care about a lot: Where is your data safe?



Contributions


From: Carl Forde (May 26 2014, at 22:33)

"Where is your data safe?" Uhm, what is "my data"? For sake of argument, how about all the bits on my computer(s): laptop, desktop, mobile and attached devices. So when do the bits on my computer become mine? Some vendors, *cough* NetFlix *cough*, might say never. Maybe we can compromise and say that anything saved to persistent storage is mine? If it can't be made mine, then it can't be written to persistent storage. Is that reasonable?

But that's incomplete, isn't it? What about my browsing and emailing records? DNS lookups? All that data is about my activities; it is about me. But is it mine? If I send someone an SMS message, or email, I have a copy and so does the recipient. Importantly, all the servers in between could also store copies. How much of that, if any, belongs to me? Even if no system stores the message text, my phone company and the recipient's phone company log the sender, receiver, date and time, at least for billing purposes. Above I implied that the data on their systems is theirs. I have the message, sender, receiver, date and time, so what need, or right, do I have to that data on the phone company's systems? Superficially, none. So do the service providers have the right to do anything they want with their data about me? No. Because the question is no longer about the safety of my data; the question is now about /my/ safety. Do the service providers have an obligation to maintain my safety? Or at least "Do no harm"?

What does it mean for my data to be "safe"? Safe from what? Unexpectedly disappearing? Corruption? Unexpectedly being duplicated? Focusing on the third one: This is really hard because copying bits is what computers do. Making bits available to be copied is what networks are designed to do. Safe from whom? Myself? Curious neighbours? Script kiddies? Well funded malicious attackers? Local, or remote?

As you suggest, sufficiently motivated attackers can simply go 'out-of-band' to gain access. So then data security becomes part and parcel of one's personal security. Considering data security from only computer and network points of view is fundamentally flawed. Yet there is a great divide between devices and people. I think bridging that gap is where the solution to keeping my data safe will be found.


From: Dave Walker (May 27 2014, at 02:15)

Interesting open questions; will have a further think on them.

Meantime, I'd recommend taking a look at Michelle Dennedy's new book, "The Privacy Engineer's Manifesto", if you haven't already. The publisher has the very enlightened attitude of making digital copies available free for personal use, too :-).


From: Julian (May 27 2014, at 07:33)

"Where is your data safe?" is a very good question, and I always like the answer "In a computer without internet", but even this is not safe, as you just demonstrated.

I guess the only safe place will be your memory (although not enough room for all data) :).


From: PJ (May 27 2014, at 09:00)

Actually, HTTPS *isn't* good enough, given that metadata is data:

1) do you know of an easy way to encrypt your DNS queries or their results?

2) even without watching DNS, watching the endpoint IPs of traffic can be instructive.

3) according to the NSA, the existence of encrypted traffic means there's something worth saving - so they do, by default, save encrypted traffic. I suspect mostly email, but saving every ssh session is certainly not beyond them.

The good news is geeks can get Strong Privacy and even some mild thumbscrew-cryptography-resistant-ness via truecrypt with deniability partitions... presuming their BIOS/UEFI/bootloader haven't been compromised.


From: Bob Haugen (May 27 2014, at 09:59)

Tim, you wrote: "the Web of Trust idea is stone cold dead".

I guess I have not been keeping up. When did that happen? Why?

(Not that I know anything different, either. Just ignorant.)


From: Paul Hoffman (May 27 2014, at 10:05)

An additional question for the geeks creating Strong Privacy solutions: can someone using your solution interact with someone using Basic Privacy or Common Privacy? That is, if I'm using a Strong Privacy solution, do I have to change messaging systems to talk to people outside that system?


From: David (May 27 2014, at 12:44)

"On the down­sid­e, Strong Pri­va­cy makes it hard­er for the cops to track down the ac­tu­al bad guys. Harder enough to sig­nif­i­cant­ly de­crease pub­lic safe­ty?"

No. Not just no, but emphatically no. If the authorities, whoever they may be in a given instance, need information on someone they have reason to believe is doing/about to do bad things, they can get it. Every time. Please don't kid yourself about that.

Strong Privacy will probably limit casual surveillance by government agencies, and may make it harder for non-governmental agencies to obtain large amounts of information about you without your knowledge or permission. Surely neither of these is a bad thing?


From: Eric A. Meyer (May 28 2014, at 11:46)

“But since they are, it’s plausible that they’ll drive the populace into the arms of Strong Privacy, which may, on balance, decrease public safety. Oh well.”

That seems maybe a tad too glib for what is a very serious question. Is Strong Privacy worth more deaths, civilian or otherwise? How many more? How frequently? In what context(s)?

I’m not saying Strong Privacy isn’t worth decreased public safety. I’m saying that you just skated past a really important question there, and it feels dishonest (at a minimum) to do so.

Granted, if Strong Privacy isn’t technically feasible, the ethical question becomes moot. Historically it’s been a bad idea to figure out if you can do something before even asking whether or not you should do it, though.


From: Matěj Cepl (May 28 2014, at 23:30)

@Bob Haugen: Nothing traumatic happened. Actually, *nothing* happened. An encryption/authentication mechanism is alive (not dead) when I can use it. I can set sending S/MIME-signed (not encrypted) messages as the default and not much will happen, except that most civilized MUAs will happily grunt that the message is signed. Try to do this with PGP. When was the last time you sent a signed message to a random stranger whom you have never met in person before (in which case you could actually use even a symmetric cipher)?


From: Dewi Morgan (May 30 2014, at 13:20)

PJ writes: "The good news is geeks can get Strong Privacy and even some mild thumbscrew-cryptography-resistant-ness via truecrypt with deniability partitions... presuming their BIOS/UEFI/bootloader haven't been compromised."

Great timing, PJ. Great timing :D [The Truecrypt project shut down in interesting circumstances the very next day]

Carl Forde's point about the fuzzy boundary around what is "your" data is well made.

Encrypting *freaking everything* seems like a damn good start though. Encryption is indeed a Maginot line, but something people forget about the Maginot line is that it worked perfectly at preventing a direct attack. Instead, the Germans had to go around, invading Belgium in order to get to France.

Plaintext allows that effortless direct attack, and should have been dead a long, long time ago: that it is not is a shame on all of us.


From: B (May 31 2014, at 05:26)

Have you read Eben Moglen's very recent essay on what privacy is?

http://www.theguardian.com/technology/2014/may/27/-sp-privacy-under-attack-nsa-files-revealed-new-threats-democracy


From: Richard Stallman (Jun 02 2014, at 02:57)

Programs such as the Netflix client, which are designed to restrict what you can do with the data they operate on in your machine, are malware. See DefectiveByDesign.org.

Naturally, since they are programs that users don't have control over, you can't rationally trust them. See

http://gnu.org/philosophy/free-software-even-more-important.html.

