A panel on “Building Trust in a Digital Identity” at the UK IGF may have raised more questions than answers, but it at least highlighted why building that trust is taking so long. Since the terminology can be confusing: what was being discussed was how to prove facts about your real-world self to an online service – for example, to claim a furlough payment, or to gain access to an age-restricted site.
Why do we want to?
Those two examples immediately throw up the first challenge: why would we want to do this anyway? Digital identities are already used effectively to pay TV licences or tax a car; somewhat less so to pay income tax. But these Government services are things we have to do, not things we choose to do. It’s hard to get excited about them, and a successful digital identity (eco)system needs customers to want to use it. Take-up of digital identity in services of choice – mostly in the private sector – has been much slower.
That may be partly a question of what “identity” is actually needed. Government services typically do need to know (at least indirectly) the name and address of “who” they are dealing with. Commercial services may be more interested in “what”: how old the person is or – perhaps an unconscious reference to a Jisc service – whether they have a degree. It’s possible to combine the two functions, but it’s not clear to me how much incentive there has been to do so. And, of course, the more data and functions a service offers, the harder it has to work to maintain trust that those will be protected against misuse. Although the painful experience of high-quality account linking may be acceptable for a handful of Government services, would someone volunteer to go through it for a dozen or more commercial services that they may only use occasionally? It would be nice to make the process easier, but that runs the risk of increased fraud.
Who (else) could do it?
Outside Government, the kinds of organisations that know enough about us to make a secure link between real world and online identities may not be the ones we (either as consumers or as service providers) want to rely on. Data brokers and others whose business model relies on data reuse probably aren’t the best foundation for a trusted identity service. Is there a risk that a socially-important function will be cross-subsidised by activities whose lawfulness is questionable; or, that those activities would be legitimised by the parallel use? We need to look very closely at business models and, at least, be prepared to pay enough (either directly, or via increased costs of services) for the identity functions to discourage providers from reusing data in the short term, and to remain in the market (since many service providers are going to rely on them) for the long term. The adtech industry was cited as a warning where both customers and services may be harmed by dependency on a “free” offering.
An opposite problem arises if we rely on device manufacturers to provide identity services. With control over everything from applications down to hardware, these companies can provide state-of-the-art privacy protection. Indeed, one of the complaints about device support for contact tracing apps is that its privacy protection is better than public interests might wish. Here the problem – for individuals, software developers, and nation states alike – is control. Who chooses what privacy-protecting identity services exist, and to whom they are made available?
Do we need a market, or a regulated infrastructure?
It’s possible to imagine a world where interoperable, standards-based identity services compete for business (on functionality, privacy and cost, among other things). But such a market definitely isn’t where we are at present, and it’s not clear that it would ever be stable anyway. Identity services are intermediaries in multi-sided markets, and those gatekeeper functions have a strong economic tendency to tip to a dominant provider (video-conferencing, through which the conference took place, is a rare counter-example!). Should a government be comfortable with a market-dominant identity service – which, if all goes well, will be a keystone of online society – that may decide to change or withdraw its offering for commercial or geopolitical reasons?
Or should nation states recognise that identity provision is an (inter-)national infrastructure and regulate it as such, for example to mandate access for SMEs? Although this might appear attractive, the history of regulating global tech companies suggests that only the largest states or groupings (e.g. the US or EU) have much chance of enforcing their will. And, while enforcement against economic monopolies has had some success, information monopolies seem much harder to control – not least because the desired outcomes are much harder to identify and agree on.
We must beware of mission creep and perverse incentives. COVID “passports” are a timely example. If employers insist that only proven-healthy workers can return to the office, does this create an incentive to intentionally catch the disease (currently the most likely way to demonstrate immunity)? When I was a child, it was normal for parents to try to expose children to mumps, measles and chickenpox, because the likely consequences of infection later in life were much more serious! Or, if a disease turns out to be more prevalent in some communities, does such a requirement create indirect discrimination or reinforce deprivation?
And finally, remember the “three Ds”: devices, documents and disadvantage. If we rely on a portable device to carry our identity, what happens to the 16% of students who do not have a smartphone? Where we need documents to establish even our real-world identity, what happens to people who have neither a driving licence nor a passport (24%, and higher among young people); what alternative proofs might they offer? And how do we include those who may be unable to use these systems, whether because of medical or social disadvantage, or for lack of the first two Ds?