Draft Identity and Privacy Principles from Government Data Service

The Government Data Service have published draft identity and privacy principles for federated access management (FAM) systems. It’s interesting to compare these with the approach that has been taken by Research and Education Federations to see whether we have identified the same issues and solutions.

The first thing that caught my eye was that the authors seem to share, even exceed, my doubts about whether Consent is the right legal basis for on-line services. Even when users explicitly agree: “We are very concerned that many Users do not know what permissions they have given nor do they read privacy policies of organisations based outside the EEA” (personally, I’d be very surprised if privacy policies inside Europe are any better read!). Since consent has to be informed and freely given, that suggests that a lot of the “consent” that services currently rely on isn’t actually valid in law.

The first principle, “User Control”, therefore avoids the word “consent” and says instead that users must “approve” any processing of their personal data. However the commentary confuses things by saying that this does actually mean “consent”. The legal commentary gives what I hope is actually the intention – that processing will be based either on consent or on the fact that processing is necessary for the purposes of a contract with the user (for necessary processing, information still has to be available, but it’s less critical that every user reads it). Given that the Principles are written in the context of government services, I’m surprised they don’t also mention the justification (provided by both the EU Data Protection Directive and the UK Data Protection Act) that processing is necessary to fulfil a legal obligation. Renewing my TV licence or driving licence and submitting a tax return – uses of current Government on-line identity systems – don’t feel much like contracts to me. Nor do they feel like the sort of Exceptional Circumstances covered by Principle 9, which is the only other place where justifications for processing are introduced.

The second principle, Transparency, is a clear legal requirement, but the commentary makes the good point that it is also an important factor in “engendering trust” among users of on-line systems.

The Principle of Multiplicity isn’t a legal requirement, but can be seen as another aspect of trust-building. The Principle requires that users have a free choice of Identity Provider and can choose multiple Identity Providers if they wish. Service Providers are allowed to insist that the chosen Identity Provider must offer a sufficient Level of Assurance for the particular service, but cannot insist on a particular Identity Provider. This seems to be intended to protect users against inappropriate compulsion by Service or Identity Providers to disclose more information than is necessary (Principle 6 on Portability in fact prohibits anyone from compelling disclosure) and also to prevent Service or Identity Providers from collating information about an individual’s use of different services. Research and Education federations have looked at the same problems, but addressed them by assuming that the Identity Provider (typically the user’s university, college or school) is “on their side” and will use technical measures such as unique per-service opaque identifiers to prevent linking by Service Providers and to minimise the information disclosed.

The idea of Multiplicity also seems to break down where, as is normal in Research and Education, the Identity Provider additionally provides authoritative attributes about the user: for example that they are a member of the organisation that operates the Identity Provider. For these attributes there is only a single authoritative source – only my university can assert that I am covered by its site licence for on-line content, only the Engineering Council can assert my professional status – so the Principle may need modification for them. I suspect the final clause of the Principle also says more than it intends: “A Service Provider does not know the identity of the Identity Assurance Provider used by a Service-User to verify an identity in relation to a specific service”. If this actually means what it says – that a Service Provider must not know who it is relying on for an Identity Assertion – then the required technology and legal processes are going to be very complex. I suspect the intention is actually that one Service Provider must not be able to find out which Identity Provider I used for other services.
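As an aside, the per-service opaque identifiers mentioned above are often implemented as a keyed hash over the user’s internal identifier and the Service Provider’s identifier (SAML’s pairwise identifiers work along these lines). A minimal sketch, in which the secret, the user ID and the entity IDs are all illustrative assumptions rather than anything taken from the Principles:

```python
import hashlib
import hmac

# Illustrative sketch only: the Identity Provider keeps a private secret and
# derives a different, stable identifier for each (user, service) pair, so
# Service Providers cannot link the same user across services.
FEDERATION_SECRET = b"idp-private-salt"  # hypothetical; known only to the IdP

def pairwise_id(user_id: str, sp_entity_id: str) -> str:
    """Return an opaque identifier unique to this user at this service."""
    msg = f"{user_id}!{sp_entity_id}".encode()
    return hmac.new(FEDERATION_SECRET, msg, hashlib.sha256).hexdigest()

# The same user appears under different, unlinkable identifiers at two SPs...
a = pairwise_id("alice", "https://sp-one.example.org")
b = pairwise_id("alice", "https://sp-two.example.org")
assert a != b
# ...but gets a stable identifier on every visit to the same SP.
assert a == pairwise_id("alice", "https://sp-one.example.org")
```

The design point is that linkage is prevented by the Identity Provider’s key, not by the user juggling multiple providers, which is exactly the “on their side” assumption described above.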

Data Minimisation is another Principle derived directly from law. The rationale also contains another hint that the authors really are thinking of a distinction between processing on the grounds of necessity and processing on the grounds of consent, since it allows a user to “request [an Identity] Provider to hold information beyond the minimum necessary”: in other words to process some information because of necessity and other information because of consent.

Data Quality (Principle 5) looks like a reflection of the legal requirement, but the wording of the Principle seems to allow a user to do nothing and deliberately leave their information out of date. At least for those Identity Providers who are committed to providing accurate information, I would expect there to be a requirement for the Identity Provider to check accuracy periodically and to warn relying Service Providers where information may be too stale to be relied on for a particular use. Since I commit a criminal offence if I do not update my driving licence details when I move house, I would expect the DVLA at least to want reassurance that address information it received from an Identity Provider had been checked recently.
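One way the staleness warning suggested above might work is for each attribute to carry a last-verified timestamp, with the relying Service Provider declaring the maximum age it will accept for its particular use. A hypothetical sketch (the function and parameter names are mine, not from the Principles):

```python
from datetime import datetime, timedelta, timezone
from typing import Optional

# Illustrative only: an Identity Provider records when each attribute was
# last verified; a Service Provider states its own freshness requirement.
def is_fresh(last_verified: datetime, max_age_days: int,
             now: Optional[datetime] = None) -> bool:
    """True if the attribute was verified within the SP's freshness window."""
    now = now or datetime.now(timezone.utc)
    return now - last_verified <= timedelta(days=max_age_days)

now = datetime(2014, 1, 1, tzinfo=timezone.utc)
checked = datetime(2013, 10, 1, tzinfo=timezone.utc)    # verified ~3 months ago
assert is_fresh(checked, max_age_days=180, now=now)     # acceptable for a loose check
assert not is_fresh(checked, max_age_days=30, now=now)  # too stale for a strict one
```

The same address could then satisfy one Service Provider but trigger a re-verification request from another, which matches the “particular use” qualification in the paragraph above.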

The portability part of the Access and Portability Principle (Principle 6) implements a proposal made for Service Providers in the proposed Data Protection Regulation, where it has been noted that portability will require a new way of working, and perhaps technology changes, from them. The Principle also applies it to Identity Providers, and apparently to all information they hold, which may involve further technical, process and legal challenges. For example if I decide to transfer my identity information from one provider to another, does the second provider have to rely on the identity verification done by the first one? And if I transfer all an Identity Provider’s records of my activity (which appears to be envisaged by the commentary) then what will be the position if the recipient Identity Provider is required to present them as evidence of something that happened before the transfer? In discussion of lifelong identifiers, Research and Education federations have identified the point of transfer between Identity Providers as an opportunity for loss of identity or masquerading. Since we haven’t yet worked out a robust solution to this problem, it will be interesting to learn if the Government sector have.

The Governance/Certification Principle sets a high standard: all Service and Identity Providers must be certified, including independent audits of their design and processes. While there has been some discussion of audits in Research and Education federations, the conclusion has been that, other than for services with particularly sensitive or high-value information, the cost of external audits is not justified. Again, this may reflect the fact that our users will normally have a deeper and significantly stronger relationship with their Identity Provider. We have tended to assume that if the organisation’s systems and processes are good enough for the more intense and more sensitive information processing involved in the employee/employer or college/student relationships then they are likely to be more than sufficient for the same organisation acting as Identity and Attribute Provider.

The Problem Resolution Principle reflects a concern that as federated identity systems get more complex, it may be hard for the user to work out who they need to contact to resolve a problem. In the Article 29 Working Party’s Opinion 1/2010 on the Concepts of Controller and Processor their solution appeared to be to identify key decision making organisations and place particular responsibility on them (see, in particular, Example 15). The GDS Principles envisage an even more distributed system where there are no such key points of control/responsibility, so instead propose an Ombudsman (or Ombudsmen!) who can require participants to deal with problems. Research and Education systems tend, again, to rely on the close relationship between the user and their “home” organisation and the shared interest they are presumed to have in resolving problems.

The final Principle covers “Exceptional Circumstances”, where processing may take place that is not in accordance with the Principles. This will only be permitted if the processing is authorised by legislation (since the commentary mentions “Parliamentary Scrutiny” I’m not sure whether the intention is to limit this to primary legislation), is linked to one of the justifications for privacy invasion contained in Article 8(2) of the European Convention on Human Rights, and is subject to a Privacy Impact Assessment by all relevant Data Controllers (it’s not clear what will happen if those data controllers do not agree that the Impact is proportionate!). The authors note that law enforcement powers are likely to involve Exceptional Circumstances; another area where problems seem likely is where current powers to disclose information are created by common law, rather than legislation (e.g. Norwich Pharmacal and other production orders). A recent European case has ruled that a Directive requiring information to be kept for law enforcement purposes does not stop that information subsequently being accessed for different purposes under different laws.

Summarising, I don’t think the GDS Principles highlight any new issues that we haven’t considered in designing and linking Research and Education federations. There are some differences between their solutions and ours, but these all seem to arise from the stronger relationship between user and Identity Provider in our case, and the fact that our Identity Providers may also procure services and be authoritative sources of attributes on behalf of their users. Rather than contracts or legal duties arising directly between the user and the service provider, situations such as site licences and professional qualifications mean that service providers often have stronger relationships with the organisation than with the individual. In turn, our users have a stronger relationship, and more common interest, with their Identity Providers than the GDS can assume. That gives us alternative ways to protect users’ privacy (one of the main benefits of Federated Access Management is that service providers no longer need to manage accounts and personal details for individual users). However because there may well be no direct contract or legal obligation between the user and service, we have to use a different legal provision (“necessary for the legitimate interests” of the IdP and SP) – which itself contains additional protection of the user’s rights – to justify the personal information that we do process and disclose. Interestingly the new draft Regulation contains a hint of a “contract for the benefit of the individual” (Art 44(1)(c)) which might one day provide a common framework for both types of federated access management system.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
