It would be useful if you could really trust code running in your browser. It’s not obvious that this is possible; but it’s not obvious that it isn’t, either.
There are two classes of reasons people don’t want to trust browser-based crypto: political problems and technical problems.
The political problem · This is when a government employee shows up at Google or Keybase with a legal document that says “You have to modify your code so that every time they type the password to unlock their secret key, you send us a copy.” There goes your privacy. Don’t think something like that could happen? It already has.
This is tough territory; in many apparently-civilized countries, law enforcers can get a warrant that not only requires someone to spill the beans but also requires them to keep the spilling a secret. Which is reasonable on the face of it: If I’m mounting an intercept on someone I think is smuggling slave laborers or AK-47s, I don’t want the bad guy to find out. It becomes unreasonable when the spooks want to know everyone’s secrets; recently, it seems that that’s exactly what they want.
Countermeasure: Canary doc · Some people have thought out loud about the notion of “Canary documents”. The first example I encountered was a public librarian somewhere in the US who posted a sign saying something along the lines of “So far, no FBI agents have requested information on the library use of any customers.” The idea is, when the sign comes down you should worry.
rsync.net, a secure-offline-backup provider, has a more sophisticated warrant canary: a document, signed with their key, asserting that no warrants have been served on them, containing headlines-of-the-day scraped from a well-known newspaper so you can verify the document was signed recently.
Which is actually pretty clever. Would it work? I doubt it; I am not a lawyer and have no idea whether, in any particular jurisdiction, law enforcers can get a warrant requiring you to lie. But I’m sure that in lots of jurisdictions it’s no problem, and that in even more, they can require rsync.net to turn over the private key, so they can go on publishing the statements whether rsync likes it or not.
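To make the mechanics concrete, here’s a minimal sketch of a warrant canary in Python. Everything in it is invented for illustration: rsync.net actually signs a text file with a PGP key, whereas this sketch uses an HMAC as a stand-in signature so it stays self-contained.

```python
import hashlib
import hmac
import json
from datetime import date

# Stand-in for the operator's real private signing key (hypothetical).
SIGNING_KEY = b"operator-private-key"

def publish_canary(headline: str) -> dict:
    """Sign a no-warrants statement plus a fresh headline proving recency."""
    body = json.dumps({
        "statement": "No warrants have ever been served on this provider.",
        "date": date.today().isoformat(),
        "headline": headline,  # scraped from today's newspaper
    }, sort_keys=True)
    sig = hmac.new(SIGNING_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig}

def verify_canary(canary: dict, key: bytes) -> bool:
    """Recompute the signature; any change to the body invalidates it."""
    expected = hmac.new(key, canary["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, canary["sig"])

canary = publish_canary("Markets rally on surprise rate cut")
print(verify_canary(canary, SIGNING_KEY))  # True while the canary is intact
canary["body"] = canary["body"].replace("No warrants", "Some warrants")
print(verify_canary(canary, SIGNING_KEY))  # False: tampering is detectable
```

The headline is what makes this better than the librarian’s sign: you can’t pre-sign a canary for a future date, so a stale or missing canary is itself the warning.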
Countermeasure: Safer hosting · This is the notion that you locate your business in a jurisdiction where you think it’s less likely that overenthusiastic public-safety officials will decide that their duties require you coughing up everyone’s secrets. Germany and Iceland spring to mind — the USA no longer does, sadly — but I’m no expert in this stuff. Note that it doesn’t really matter much where your servers are; it’s where you live and work and get paid.
The “get paid” part matters and is why this may not be a good solution; if you are offering privacy services as part of a business, it’s hard to make a living while ignoring the United States market. And if officials there don’t like your privacy policies, they can probably just shut your biz down there.
Countermeasure: Verifiable transparency · Take a few minutes and read Keybase’s Server Security document. The cryptographic details are hard to understand (for me too) but the goal is obvious: They’re trying to make it possible to use Keybase without actually trusting it.
Keybase has always done this, to some extent: They offer assertions that the holder of some private key also controls a particular Twitter account and/or GitHub account and/or domain name. You don’t have to believe their assertions because the workings of public-key cryptography mean you can double-check for yourself.
But suppose, once again, that they get a threat (criminal or legal) that forces them to compromise their code with the effect of misleading people or spilling secrets? The Keybase server offers a signature chain that allows an external site to verify whether or not any of the assertions have been changed.
Of course, for this to work, the external site has to download and remember the existing signatures in the chain. If even a small number of external sites started remembering Keybase signature chains as a service, my feeling is that this would make subverting the service more or less completely impractical.
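A toy version of that idea, under my own simplified assumptions: each assertion links to the hash of the one before it, so an external mirror only has to remember the latest link hash to detect any rewriting of history. (Keybase’s real chain also carries per-link signatures and more structure; this sketch keeps just the hash-linking.)

```python
import hashlib

def link_hash(prev_hash: str, payload: str) -> str:
    """Each link's hash covers its payload and the previous link's hash."""
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(chain: list, payload: str) -> None:
    prev = chain[-1]["hash"] if chain else ""
    chain.append({"payload": payload, "hash": link_hash(prev, payload)})

def verify(chain: list, remembered_tip: str) -> bool:
    """What a mirror does: replay the chain, compare against the tip it saved."""
    prev = ""
    for link in chain:
        if link["hash"] != link_hash(prev, link["payload"]):
            return False
        prev = link["hash"]
    return prev == remembered_tip

chain = []
append(chain, "key K controls twitter:@alice")   # invented assertions
append(chain, "key K controls github:alice")
tip = chain[-1]["hash"]   # an external mirror remembers only this

print(verify(chain, tip))  # True
chain[0]["payload"] = "key EVIL controls twitter:@alice"
print(verify(chain, tip))  # False: the rewrite breaks the hash linkage
```

The point is that the mirror’s job is cheap: a few dozen bytes per chain is enough to make silently rewriting history detectable.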
So yeah, any such in-browser implementation would have to worry a lot about issues of secure delivery and context-driven behavior and runtime malleability and entropy sources and so on. It might not even be possible without some browser modifications, and Matasano makes hay over this one, talking about the crushing weight of people running 2008-vintage browsers.
But even in that worst case, a whole lot of people are running modern browsers that keep themselves up to date reasonably well; and if they’re the only ones that can have in-browser crypto, that’s way better than nothing.
So, for the sake of this argument, let’s assume Matasano is wrong and good strong in-the-browser crypto is technically possible.
By the way, if (and quite likely when) there is news relevant to whether or not browser crypto is technically feasible, I’ll update this post.
But anyhow, it doesn’t help much without a remediation for the political problem: Someone points a gun at Google’s head and says “Wonderful that you made that end-to-end code safe and reliable. Now make it unsafe and give us the goodies, or it’s jail time for Larry and Sergey.”
So I looked around to see if anyone was doing this. Well, sort of. I include the first thing I discovered mostly for amusement value: Signed Scripts in Mozilla, prominently labeled as probably not updated since 1998, and with references to “Netscape Object Signing” and something called SignTool. All it proves is that at some previous point in history, somebody thought this was a good idea.
People still do, in some contexts: Over at Firefox Marketplace there’s a writeup on Packaged apps, where all the resources come out of a zipfile. It says: “privileged and certified apps are digitally signed to enable the use of privileged and certified APIs. Privileged apps are signed as part of the Marketplace review process, while certified apps are signed by device manufacturers or operators.”
I wondered if there were something similar on the Chrome side. I dug around in the docs and, sure enough, a Chrome extension is signed with the developer’s private key. (I couldn’t find anything similar for Chrome apps as opposed to extensions, but maybe I was just looking in the wrong place).
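Here’s a rough sketch of how that kind of package signing can work: hash every file in the package, sign the manifest of hashes, and have the client re-derive everything at install time. Chrome and Firefox use real public-key signatures over real zipfiles; the in-memory files and HMAC signature below are self-contained stand-ins I made up for illustration.

```python
import hashlib
import hmac
import json

# Stand-in for the marketplace's signing key (hypothetical).
VENDOR_KEY = b"marketplace-signing-key"

def file_manifest(files: dict) -> str:
    """Canonical manifest: file name -> SHA-256 of its contents."""
    return json.dumps(
        {name: hashlib.sha256(data).hexdigest() for name, data in files.items()},
        sort_keys=True)

def package(files: dict) -> dict:
    manifest = file_manifest(files)
    sig = hmac.new(VENDOR_KEY, manifest.encode(), hashlib.sha256).hexdigest()
    return {"files": files, "manifest": manifest, "sig": sig}

def verify_package(pkg: dict, key: bytes) -> bool:
    """Check the signature, then check the files still match the manifest."""
    good_sig = hmac.compare_digest(
        hmac.new(key, pkg["manifest"].encode(), hashlib.sha256).hexdigest(),
        pkg["sig"])
    return good_sig and file_manifest(pkg["files"]) == pkg["manifest"]

pkg = package({"app.js": b"console.log('hi')", "index.html": b"<html></html>"})
print(verify_package(pkg, VENDOR_KEY))   # True
pkg["files"]["app.js"] = b"evil()"
print(verify_package(pkg, VENDOR_KEY))   # False: one altered file fails it
```

Signing the manifest rather than the raw zip bytes is the usual design choice; it means any single file can be checked independently without re-hashing the whole archive.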
Versioning · Of course, software doesn’t stand still, if only because we keep finding bugs in it we have to fix. So I think that if we were going to dive into signed-JS, we’d probably want to use a Merkle tree or Hash list or something so that when someone releases version 1.4 of a library, we only need to review the diffs to maintain our confidence.
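The hash-list half of that idea fits in a few lines. This is a sketch under my own assumptions (file names and contents are invented): keep a per-release map of file-to-digest, and diff the maps so reviewers only re-audit files that actually changed.

```python
import hashlib

def hash_list(files: dict) -> dict:
    """Per-release hash list: file name -> SHA-256 of its contents."""
    return {name: hashlib.sha256(data).hexdigest() for name, data in files.items()}

def changed_files(old: dict, new: dict) -> set:
    """Files added, removed, or modified between two releases."""
    return {name for name in old.keys() | new.keys()
            if old.get(name) != new.get(name)}

# Hypothetical releases of a two-file library.
v1_3 = hash_list({"crypto.js": b"function seal() {}",
                  "ui.js": b"render()"})
v1_4 = hash_list({"crypto.js": b"function seal() { /* bugfix */ }",
                  "ui.js": b"render()"})

print(changed_files(v1_3, v1_4))  # {'crypto.js'}: only this needs re-review
```

A Merkle tree is the same idea with the hashes arranged hierarchically, so you can also prove a single file’s membership in a release without shipping the whole list.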
I don’t see anything here that’s really technically impossible. I hope I’m not missing anything, because the task of making Strong Privacy available to everyone is going to be way easier with decent browser crypto.