It’s often said that technical people are bad at designing user interfaces. Ken Klingenstein’s presentation at the TERENA Networking Conference reported (and demonstrated) the results when user interface experts looked at the problem of explaining federated login to users. A striking early finding was that even the interfaces users regularly use to log in to services such as Google and Facebook leave them uncomfortable and uninformed about what information is actually disclosed and shared: “consent dialogs do not affect users’ understanding or actions”.
A better starting point may instead be to think about who users are and what their concerns are. Academic studies suggest that there are three attitudes to privacy – fundamentalists (about 25% of the population), pragmatists (~57%) and unconcerned (~18%) – and that their concerns can be categorised as excessive collection, secondary use, errors, improper access and invasion. Addressing those questions for those groups of users looks like a good way to explain what our systems are (and are not) doing.
Users can be further helped by providing tools that support their intuitions about privacy. This can be surprisingly simple and subtle: a button marked “continue” (with the alternative being “cancel”) is a very obvious invitation to click with no suggestion that there might be consequences. Just changing the labels to “release my data” and “don’t release” turns out to be much more effective in alerting the user that this is a decision they might want to think about. A fascinating paper from CMU’s Cylab discusses these issues, and explains why so many of the warning messages our computers show us are unhelpful or even encourage us to do the wrong thing.
Privacy-managing tools should illustrate the benefits of federated access management, in particular the move from disclosure of “identity” – which sounds like “who you are”, no matter how much we argue that its technical meaning is different – to disclosure of “attributes” – relevant things about you. But those tools must avoid being perceived, whether the user is a fundamentalist or unconcerned, as “getting in the way”. Smart use of defaults, and of visual rather than textual representations, helps a lot here. Text information for those who really want the details can be provided as popups or links. That way fine-grained control – both of which attributes are released and of how often the user wants to confirm or change their settings – can be satisfied through the same interface as the unconcerned users’ desire to “just get on with it”.
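To make the defaults-plus-overrides idea concrete, here is a minimal sketch in Python. It is not the Privacy Lens or uApprove implementation – the class and attribute names are invented for illustration – but it shows how one interface can serve both audiences: a default release set lets unconcerned users proceed without interruption, while per-service overrides and an “always ask” flag give fundamentalists fine-grained control.

```python
# Hypothetical sketch, not the actual Privacy Lens / uApprove code.
# Assumed names: AttributePolicy, DEFAULT_RELEASE are illustrative only.

DEFAULT_RELEASE = {"affiliation", "entitlement"}  # released unless the user opts out

class AttributePolicy:
    def __init__(self):
        self.overrides = {}      # service -> set of attributes the user chose to release
        self.always_ask = set()  # services where the user wants to confirm every time

    def set_override(self, service, attributes):
        """Record the user's fine-grained choice for one service."""
        self.overrides[service] = set(attributes)

    def needs_prompt(self, service):
        """Prompt only on first visit or where the user asked to confirm."""
        return service in self.always_ask or service not in self.overrides

    def released(self, service, user_attributes):
        """Filter the user's attributes down to the allowed set for this service."""
        allowed = self.overrides.get(service, DEFAULT_RELEASE)
        return {k: v for k, v in user_attributes.items() if k in allowed}

# Usage: a user who releases only "entitlement" to one journal site.
user = {"name": "A. User", "affiliation": "staff", "entitlement": "library-access"}
policy = AttributePolicy()
policy.set_override("journals.example.org", {"entitlement"})
print(policy.released("journals.example.org", user))  # {'entitlement': 'library-access'}
print(policy.needs_prompt("journals.example.org"))    # False: "just get on with it"
```

The design choice worth noting is that the same data structure backs both behaviours: silence for users who accept the defaults, and a prompt plus override for those who want to decide each time.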
These ideas have been implemented in a pilot “Privacy Lens” interface (based on uApprove); a Cylab demonstration can be downloaded from the conference website. To a technical person it may not look radically different – it’s managing the same information, after all – but tests with non-technical users suggest it should be perceived as significantly clearer and more trustworthy, which has to be a good thing. Future research will investigate how Privacy Lens is actually used, and whether tools such as trustmarks and reputation (“based on your choices for that site…”, perhaps) can build further confidence among users.