Using and sharing information can create benefits, but it can also cause harm. Trust amplifies in both directions: it can increase benefit and it can increase harm. If your data, purposes and systems are trusted – by individuals, partners and society – then you are likely to be offered more data. By choosing and using that data effectively, you build further confidence that your innovations will be safe and beneficial to others. Those who hold data that adds further value are likely to see sharing with you as low risk; those who want access to your data and services may consider that access valuable enough to commit to enhanced standards of practice. But if you lose trust, then you’ll get less data, and any innovations – even uses that aren’t particularly innovative – are likely to be closely scrutinised. Spirals lead both upward and downward.
But we do need to step onto the spiral. Doing the absolute minimum with data might be “safe”, but it also provides the absolute minimum benefit. Gradually, those who do a little more will become more effective, provide more benefit and become more trusted. Those who do a lot more may do even better in the short term, of course, but if they go beyond what their trust or capability will bear, they will flip to the downward arm of the spiral and rapidly become the partner and service that no one wants to be involved with. Occasional mistakes may be tolerated: an effective response to them may even enhance trust in the medium and long term. Intentional deception, much less so.
Trust is an essential foundation: without it, data will only be shared when there’s a legal obligation. Those who share data must trust one another; they must also work to make sure their community is trusted by those who do not participate but may influence wider sentiment and attitudes. But, according to a study for the Open Data Institute on the Economic Impact of Trust, achieving optimal levels of sharing needs more than just trust. We need to be able to find those with whom trusted sharing would increase value, and to establish permanent infrastructures – in the broad sense, spanning everything from the technical and organisational to the legal and cultural – to enable that sharing.
But this move from ad hoc to systematic, even automated, sharing is tricky. Fundamentally, it’s likely to involve shifting the basis of what we do from inter-personal trust – which we can all understand but rarely rationalise – to inter-organisational trust, based on agreed standards: as rational as you need, but always likely to be less intuitive or instinctive. That change has benefits, notably when individuals with strong personal trust networks leave. But unless we take care to show how the new system is even more trustworthy than the old, the transition can be a place where doubts arise.
Externally enforced rules and visible sanctions for breaking them can help here. In an interaction sometimes summarised as “you can drive faster if you have good brakes”, strong regulation can actually increase confidence and trust in the regulated. If individuals and groups can see innovation being effectively scrutinised by an independent regulator that champions their rights, they may be more willing to trust the innovators. If they can rely on a regulator correcting – or, if necessary, closing down – an unsafe processing ecosystem, individuals are less likely to worry about, or withdraw, their personal participation. Such a regime benefits regulated organisations both by warning them if they approach the downward arm of the trust spiral and by supporting more stable public sentiment towards information sharing.
However, the UK’s proposals for post-Brexit Data Protection law reform have been criticised as involving both broader permissions for industry and weaker regulation. The analysis above explains the apparent paradox: such changes might reduce, rather than increase, the amount of data made available for innovation.