
Thinking about “Privacy in Context” and Access Management Federations

One of the big challenges in designing policies and architectures for federated access management is reconciling two competing demands: the system must be “privacy-respecting”, and it must “just work”. For an international access management system to “just work”, information about users must be passed to service providers, sometimes overseas. That information may be as little as ‘this user has authenticated’, but it will usually include an anonymous ‘handle’ so the service can recognise the same user on future visits, and may sometimes include the user’s real name and e-mail address. Since in some circumstances disclosing that information would clearly be perceived as not respecting privacy, the challenge for a federated access management system seems to be to work out where that line is, and not to cross it.
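To make the ‘handle’ idea concrete, here is a minimal sketch of how an identity provider might derive one. The function and salt names are illustrative assumptions rather than any federation’s actual code, though real SAML federations do compute persistent identifiers such as eduPersonTargetedID from similar salted hashes:

    import hashlib
    import hmac

    # Hypothetical secret salt, held only by the identity provider.
    FEDERATION_SALT = b"idp-secret-salt"

    def pairwise_handle(user_id: str, service_provider_id: str) -> str:
        """Derive an opaque handle that is stable per (user, service) pair.

        The same user visiting the same service always gets the same
        handle, so the service can recognise return visits; different
        services get different handles, so they cannot correlate the
        same user; and the handle reveals neither name nor e-mail.
        """
        message = f"{user_id}!{service_provider_id}".encode()
        return hmac.new(FEDERATION_SALT, message, hashlib.sha256).hexdigest()

    # The handle differs per service, limiting cross-service tracking:
    print(pairwise_handle("alice@example.ac.uk", "https://journal.example.com"))
    print(pairwise_handle("alice@example.ac.uk", "https://library.example.org"))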

In search of help, I’ve been reading Helen Nissenbaum’s book “Privacy in Context”. To greatly (over-)simplify her model, she proposes that our lives involve many different contexts, each of which has norms for the kinds of information flows that are expected. So the information flows within a family are different from those in a school, or a workplace or a doctor’s consulting room, but in each one we nonetheless have a sense that our privacy is being protected. If, on the other hand, information flows in a way that we don’t associate with that context (either unexpected information, or to unexpected people or in unexpected ways) then we feel that our privacy has been violated because, in terms of the model, “contextual integrity” has been broken.

This does seem to match my instincts and how I see others behave, and explains why my notes on privacy have long said “surprise makes it worse”. The book gives a number of examples where new technologies violate existing contextual norms and thereby cause unease or offence: smartcard systems for toll roads pass more information to more parties than paying cash does; Google Streetview (and, I think, paparazzi long lenses) violates the norm that information flow in a public place is symmetric – if you can see me, I can see you. When designing technologies for existing contexts, we need to be aware of any new data flows and be very sure that they support the purpose of the context.

However, it seems to me that the model doesn’t yet offer much help with predicting how people will perceive privacy in completely new contexts, or when a technological system covers multiple contexts. Thinking in terms of contexts and norms suggests four ways things could go wrong:

  • The user and service agree on the context and its norms, but one of them breaches the norms. This is the all-too-familiar situation where organisations suffer monetary penalties under the Data Protection Act;
  • The user and service agree on the context, but disagree about what the norms for that context are. This could easily happen because of cultural differences (for example, in Scandinavia individuals’ salaries and tax records are routinely public, but in the UK they are widely regarded as private). In the real world it takes a bit of effort to get to a different culture where the norms may differ, and we are generally aware when we do – hence “when in Rome…” – but on-line it seems to me there is a greater risk of tripping over this problem. I suspect these cases will show up as outraged users and services genuinely puzzled about what they did wrong;
  • The user and service disagree on which context applies. I think this is what the book is hinting at by pointing out that “friends” on a social network service may not be “friends” (i.e. covered by the norms of the “friendship” context) in real life. And I suspect this may also explain a lot of the unease and concern about secondary uses of information by on-line services: my norms for the “customer” context may not match the shop’s norms for its “business” context when it comes to gathering and using information about patterns of customer behaviour;
  • And finally there are new services that don’t obviously reproduce an existing real-world context. Here I suspect the book’s model implies that each party will start from whatever context they feel is most relevant, with huge potential for confusion and offence since different users and the service provider may choose differently. For example, I’m bewildered by Twitter, where I consciously adopt my “speaking at a conference” context (if I wouldn’t say it from a podium then @Janet_LegReg won’t tweet it) but others seem to use a “press release” context, a “water-cooler” context, a “domestic” context, or even all of those at once! What on earth are the norms here, and how could a service be designed to respect them?

I’ve a nasty feeling that federated access management may be one of these multi-context systems. Although we are currently developing and using it for research and education, there is interest in working with other types of service (for example with commercial suppliers offering student discounts). And even within research and education there are at least two contexts (teaching and research) and three different sets of information flows: student, teacher (asymmetric flows are common in the classroom, so the expected information disclosure by the two is different) and researcher. Researcher is a particular puzzle since it appears to require both deep collaboration and intense secrecy, at least before publication.

The book hints at two possible approaches to these kinds of hard question: either treat the system as a neutral infrastructure (like the telephone) and leave it to users to decide what their norms are, or else develop existing norms in a way that promotes the purpose of the context. Unfortunately the first of those seems to rule out active participation by the infrastructure in ensuring that things “just work”, while the latter involves trying to work out and, to some extent, codify into systems’ architectures and policies which real-world context(s) and norms users think or feel they are inhabiting.
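As a purely hypothetical illustration of that second approach, codifying norms might look something like a per-context attribute release policy. The context names and attribute sets below are invented for illustration, not drawn from any real federation’s rules:

    # Hypothetical sketch: per-context norms expressed as an attribute
    # release policy. All context and attribute names are illustrative.
    ALLOWED_ATTRIBUTES = {
        # A student reading licensed content: entitlement only, no identity.
        "student-access": {"affiliation", "pairwise_handle"},
        # Teaching: names are expected in a classroom, so release them.
        "teaching": {"affiliation", "pairwise_handle", "display_name"},
        # Research collaboration: identity plus contact details.
        "research-collaboration": {"affiliation", "pairwise_handle",
                                   "display_name", "email"},
    }

    def release(attributes, context):
        """Return only the attributes whose release the context's norms permit."""
        allowed = ALLOWED_ATTRIBUTES.get(context, set())  # unknown context: release nothing
        return {name: value for name, value in attributes.items()
                if name in allowed}

    user = {"affiliation": "student", "pairwise_handle": "a3f19c",
            "display_name": "Alice Example", "email": "alice@example.ac.uk"}
    print(release(user, "student-access"))           # no name or e-mail disclosed
    print(release(user, "research-collaboration"))   # fuller disclosure

Of course, the hard part the book points to is not writing such a table but discovering which context the user believes they are in.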

By Andrew Cormack

