Layers of Trust in AI

This morning’s Westminster Forum event on the Future of Artificial Intelligence provided an interesting angle on “Trust in AI”. All speakers agreed that such trust is essential if AI is to achieve acceptance, and that (self-)regulatory frameworks can help to support it. However, AI doesn’t stand alone: it depends on technical and organisational foundations. And if those aren’t already trusted, it will be much harder – perhaps impossible – to create trust in any AI built on them. At the very least, a realistic assessment of how much trust we already have can inform how much of a “trust leap” the introduction of AI might involve.

The first layer is the context within which we work, or propose to act. Are organisations in that field generally trusted to behave responsibly, or are there concerns about hidden agendas and motivations? If you need to establish yourself as the only ethical actor in an unethical field, do that first, before introducing further technological complexity, which may well be perceived and portrayed as suspicious opacity.

Next, since most AI systems will consume data, are our existing practices for handling and using data trusted? If we are seen to behave responsibly when humans collect, process and use data, then carefully introducing AI for bulk data handling (thereby reducing the amount of access by human eyeballs) could even increase trust. Research by the Open University found that students generally trust educational institutions to use data appropriately and ethically, but there are also stories of students creating their own data segregation because they were not sufficiently confident of the university’s. We must be careful that introducing AI contributes to the former rather than the latter.

If this analysis has identified a “trust gap”, that needn’t mean avoiding AI entirely. But while working to strengthen the trust foundations, it will probably be best to stick to low-risk applications of AI, rather than overloading what trust we have. Interestingly, risk involves the same three factors – technology, data and context. Here, again, the context or purpose and the data we use may well be at least as important as the technology. Natural language processing involves some of the most complex AI algorithms, but many of its useful applications – for example chatbots and subtitling – involve little risk. But an algorithm as simple as a linear regression may be high-risk if it is used in a context where it influences life-changing decisions, or to process sensitive data.
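As a purely illustrative sketch of that weighting – the class, weights and thresholds below are my assumptions for illustration, not any standard risk methodology – one way to express it in Python is to let the context and data factors count for more than technical complexity:

```python
from dataclasses import dataclass

# Map qualitative ratings to scores (illustrative assumption).
LEVELS = {"low": 1, "medium": 2, "high": 3}

@dataclass
class AIUseCase:
    name: str
    context_impact: str        # how life-changing are the decisions it influences?
    data_sensitivity: str      # how sensitive is the data it processes?
    technical_complexity: str  # how complex is the underlying algorithm?

def risk_band(case: AIUseCase) -> str:
    """Coarse risk band: context and data are weighted twice as heavily as technology."""
    score = (2 * LEVELS[case.context_impact]
             + 2 * LEVELS[case.data_sensitivity]
             + LEVELS[case.technical_complexity])
    if score >= 11:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Complex technology, but low-stakes context and data: low risk overall.
chatbot = AIUseCase("FAQ chatbot (NLP)", "low", "low", "high")
# Simple technology, but life-changing context and sensitive data: high risk overall.
scoring = AIUseCase("Admissions scoring (linear regression)", "high", "high", "low")

print(risk_band(chatbot))  # -> low
print(risk_band(scoring))  # -> high
```

The point of the sketch is only the relative weighting: a complex NLP chatbot in a low-stakes setting comes out low-risk, while a simple regression influencing life-changing decisions on sensitive data comes out high-risk.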

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
