When a person signs into an app, that’s a transaction, and value is exchanged. Who comes out ahead on the deal?
“Federated login has a clear benefit to the service provider (access to disaggregated user data, particularly that user’s social contacts), but only an ostensible benefit to end users (freedom from having to remember yet another password), so on that level it’s purely a swindle designed to obtain detailed information about a user in return for nothing.”
If that’s true, federated sign-in is a raw deal and nobody should ever want to do it. And all over the InterWebz, people who build apps are crying bitter tears at the prospect of never knowing much about the people who sign in.
What gets shared at sign-in? · I’m not talking about data flows in general, I’m talking about exactly what (to use Gary Royal’s phrase) “disaggregated user data” the app you’re signing into gets. It sorts neatly into baskets like so:
The bare minimum · Your “username” (which means email address everywhere but Twitter) and nothing else. Mozilla Persona wires in the assumption that this is all you’d ever share.
Friendly stuff · Mostly about your name and picture, really useful for apps that want to personalize: Work your name and face into their pages. For most of us, this is low-sensitivity stuff. And if your name and picture give away things you don’t want anyone knowing, faking them might be a better option than refusing them.
Marketing inputs · This is the kind of thing that is useful to people who are trying to sell stuff. Probably the key facts here are your age, gender, and neighborhood. At this point we’re getting into information with real cold-hard-cash value, and my feeling is that anyone who gets it when you sign in had better be offering something of real, solid value in return.
Social context · This is the most complicated one. The information clearly adds value to some apps — Facebook Connect (now called Facebook Login, by the way) is used by millions every day, and developers are comfy with it — but the decision as to whether it’s appropriate to share depends totally on what it’s going to get used for.
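For the concrete-minded: these baskets map loosely onto the scopes an app requests in an OpenID Connect sign-in flow. The basket-to-scope mapping below is my own sketch, not anything the providers publish, and the endpoint and client names are made up; only `openid`, `profile`, `email`, `address`, and `phone` are standard OIDC scope values.

```python
from urllib.parse import urlencode

# Rough mapping of the baskets above onto OAuth/OpenID Connect scopes.
# This grouping is an illustration, not a spec.
BASKETS = {
    "bare_minimum": ["openid", "email"],  # your "username" and nothing else
    "friendly": ["profile"],              # name and picture
    "marketing": ["address", "phone"],    # edging into cold-hard-cash territory
    # "Social context" has no standard OIDC scope; each provider
    # (Facebook, Google, etc.) defines its own contacts/friends scopes.
}

def auth_url(authorize_endpoint, client_id, redirect_uri, baskets):
    """Build an OIDC authorization request that asks only for the
    named baskets of information."""
    scopes = []
    for basket in baskets:
        scopes.extend(BASKETS[basket])
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return authorize_endpoint + "?" + urlencode(params)

# A bare-minimum sign-in request (hypothetical endpoint and client):
url = auth_url("https://idp.example.com/authorize", "my-app",
               "https://my-app.example/callback", ["bare_minimum"])
```

The point of the sketch: which basket the user ends up sharing is decided by one short string in the request, so an app that wants more than the bare minimum has to ask for it explicitly.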
This is the source of the pervasive anxiety created over the past few years by Facebook; the wrong thing being shared with the wrong people, even once, can ruin a trust relationship forever.
Personally, I’m pretty hard-line about this one. I’m currently refusing to update the Android app from my bank, CIBC, because it wants access to my contacts. You know what the right amount of “social” content is in my relationship with my bank? Zero, that’s what.
Cui bono? · Given all that, what about Gary Royal’s claim that this is a one-way benefit that apps get from their users “in return for nothing”? I think it’s at least oversimplified. Because sometimes what you’re giving up isn’t worth that much, and sometimes what you’re getting is.
I’m not saying there aren’t sleazebag apps out there that will vacuum up everything they get and then spam your friends, because there are. But let’s put on our app-builder hats for a moment, assume we’re not sleazebags, and ask: What’s the right thing to do?
Plan A: Full transparency · This is the simplest to understand: At sign-in time, the app puts up a shopping list of all the information it’d like, and the person looking at it gets to sign off on none, some, or all. Then the app gets to decide whether it can operate with what the person offered, and away you go (or not).
I used to think this was the right answer. And a lot of people would like to see this sort of future. The trouble is, there’s a much larger number of people who don’t care, won’t look, and just want to find the “OK” button as fast as possible so they can click it and get on with using the app. It’s worse than that: Data shows that the longer and more complicated the approval dialog, the fewer people read it.
Ethical quandary · So what are you going to do? There are apps out there that ask for way more information than they need because they know that the number of people who just don’t care is higher than the number who’ll blow them off.
And — I’m sorry — full transparency is thus not an ethically satisfying position. Because people deserve protection even if they’re oblivious to the issues involved.
I hear people arguing that the right answer is to never ask for any information until the moment that you need it. Maybe — but I’m nervous about the interrupted user experience, and I’d need to see research data with outcomes.
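For what it’s worth, the “ask at the moment of need” pattern already has plumbing at some providers. Here’s a sketch assuming Google-style incremental authorization, where `include_granted_scopes=true` merges a newly requested scope with what the user already granted; the endpoint and client names are hypothetical.

```python
from urllib.parse import urlencode

def incremental_auth_url(authorize_endpoint, client_id, redirect_uri,
                         new_scope):
    """Ask for one extra scope at the moment the feature needs it,
    instead of front-loading the whole shopping list at sign-in.

    include_granted_scopes is a Google-specific parameter; other
    providers have their own mechanisms for incremental consent."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": new_scope,
        "include_granted_scopes": "true",
    }
    return authorize_endpoint + "?" + urlencode(params)

# When (and only when) the user taps a "find my friends" feature,
# ask for read-only contacts access:
url = incremental_auth_url(
    "https://accounts.example.com/authorize", "my-app",
    "https://my-app.example/callback",
    "https://www.googleapis.com/auth/contacts.readonly")
```

The design trade-off is exactly the one flagged above: each just-in-time request is small and comprehensible, but it interrupts whatever the user was doing.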
I don’t know what the right answer is. But there are a few things I believe:
The Persona position — nobody ever needs to know more than your email — is overly restrictive. By a mile.
There are lots of apps which benefit from a social dimension. We’re going to have trouble having a clear-headed conversation about that until the world gets over the Facebook-Connect hangover. And I salute Facebook for turning off the social-by-default information sharing.
Don’t default to giving users detailed laundry lists of what you want. The longer it is, the less likely they are to read it.
Have intelligent, ethical defaults. For example, maybe the default “OK” button releases the Bare-minimum and Friendly-stuff baskets of information above; anything more than that defaults to “No”.
If you are the kind of app that scoops up valuable data “in return for nothing”, your future is limited. So don’t be.