
AI: It’s not (just) what you use…

Allison Gardner’s keynote to the Socitm Share National conference last week highlighted how using AI responsibly is at least as much about how decisions are made as about the technology itself. Questions of “transparency” often focus on whether the AI is explainable, but how decisions were made – even how a particular problem was identified and chosen as appropriate for an AI solution – needs at least as much transparency.

Humans are involved in many decisions: before, during and after the technology is put to work. How the question is framed, which data are made available to answer it, how those data are processed and cleaned, which features are selected, what weightings are given, which algorithms are chosen, what metrics are used to set objectives, how systems are deployed and how their results are evaluated are all human choices that will affect the results. So we need to understand what those decisions were, and why they were made.
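One way to make those choices visible is simply to record them in a structured form alongside the system. The sketch below is purely illustrative – the field names and example values are my own invention, not any formal standard – but it shows the kind of record that would let a reviewer see what was decided, by whom, and why.

```python
# Illustrative only: a minimal, structured record of the human choices behind an
# AI system, so they can be reviewed alongside the system itself.
# Field names and example values are hypothetical, not a formal standard.
from dataclasses import dataclass
from typing import List

@dataclass
class DecisionRecord:
    problem_framing: str          # why this problem, and why AI was considered suitable
    data_sources: List[str]       # which data were made available to answer it
    cleaning_steps: List[str]     # how those data were processed and cleaned
    features_selected: List[str]  # which features (and weightings) were used
    algorithm: str                # which algorithm was chosen
    objective_metric: str         # what metric the system is optimised and judged by
    deployment_context: str       # how and where the system is deployed
    evaluation_plan: str          # how its results will be evaluated in use
    decided_by: str               # who made, or signed off, each of these choices

record = DecisionRecord(
    problem_framing="prioritise maintenance requests; manual triage was missing urgent faults",
    data_sources=["maintenance tickets 2018-2023"],
    cleaning_steps=["drop tickets with no recorded outcome"],
    features_selected=["building age", "report category"],
    algorithm="gradient-boosted trees",
    objective_metric="recall on urgent tickets",
    deployment_context="advisory ranking shown to facilities staff",
    evaluation_plan="quarterly audit of overridden recommendations",
    decided_by="service owner and data science team",
)
print(record.problem_framing)
```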

AI supply chains may be complex (though hardly unique in this), but the organisation that decides to deploy AI in a particular situation is responsible for what happens. It should be very wary of “black boxes”, which may conceal too much complexity or too little. A striking example of the latter is an “image analysis” tool that recommended treatment for hip fractures: on investigation it turned out that the patient’s age and whether the image was taken on a portable scanner carried much more weight than any feature of the X-ray. If data analysis finds that those are the key factors, fine: just tell humans that and avoid the expensive tech.

Beware of explanations that simply assert “fair” or “accurate”. Both have many different mathematical definitions, and these are often mutually incompatible: a system that is “fair” by one measure is very likely to be “unfair” by another. “Accuracy” figures can easily be inflated simply by predicting the most common outcome every time. Make sure your system implements the definition you actually need.
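To see how easily a headline “accuracy” figure can mislead, here is a minimal sketch (with made-up numbers) of a “model” that simply predicts the most common outcome every time: it scores 95% accuracy while catching none of the cases that actually matter.

```python
# A minimal sketch of the "accuracy" trap: on imbalanced data, always predicting
# the most common outcome scores highly without learning anything.
# The numbers are invented for illustration.
true_outcomes = [0] * 95 + [1] * 5   # 95 routine cases, 5 that actually matter
predictions   = [0] * 100            # "model" that always predicts the common outcome

accuracy = sum(p == t for p, t in zip(predictions, true_outcomes)) / len(true_outcomes)
recall   = sum(p == 1 and t == 1 for p, t in zip(predictions, true_outcomes)) / true_outcomes.count(1)

print(f"accuracy: {accuracy:.0%}")   # 95% - looks impressive
print(f"recall:   {recall:.0%}")     # 0% - misses every case that matters
```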

Interfaces are critical: they need to be clear, but should avoid encouraging operators to over-rely on them. Supporting operators and supervisors in exercising their professional judgement means walking a very narrow line between causing them to doubt that judgement on one side and having them dismiss the machine as useless on the other. Interestingly, this is an issue that goes back at least 500 years: the first printed books were quoted as “truth”, even though they were obviously internally inconsistent (for example using the same woodcut illustration for two different medicinal herbs to save money). Interfaces that seem to offer certainty can easily result in a “human in the loophole” situation, where decisions appear to be made by humans but in practice always follow the computer. Black boxes that don’t reveal the limits of their own skills are particularly dangerous.

Be realistic about bias. Automated systems will be biased, because they learn from a world that is biased. So make sure you have good processes to detect and correct bias when (not if) it emerges. More broadly, and more excitingly: humans, and human-created systems, are biased too, but machines can at least be required to explain why they made particular decisions. So if we hold machines to higher standards of fairness, might we be able to use their explanations to learn about our own human biases?
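As an illustration of what a routine detection process might look like, the sketch below compares how often an automated decision favours each group and flags large gaps for human review. The 0.8 threshold (a common “four-fifths” heuristic) and the group labels are illustrative choices on my part, not a recommendation.

```python
# A minimal sketch of one routine bias check: compare how often an automated
# decision favours each group, and flag large gaps for human review.
# The 0.8 threshold and the data are illustrative only.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, favourable: bool) pairs."""
    totals, favourable = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        favourable[group] += outcome
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparity(decisions, threshold=0.8):
    """Return groups whose favourable-outcome rate falls well below the best group's."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best and r / best < threshold}

decisions = [("A", True)] * 60 + [("A", False)] * 40 + [("B", True)] * 30 + [("B", False)] * 70
print(flag_disparity(decisions))   # {'B': 0.3} - group B favoured at half the rate of group A
```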

Finally, we need to think about, and explain, what effects our introduction of AI may have on individuals and across the workforce. If decision-makers come to regard AI as the “expert”, does that limit the incentive, or the opportunity, to develop their own expertise? Or, perhaps even worse, could encoding current knowledge into an AI limit the discovery of new knowledge across our domain? We need to find ways to get humans and machines working together – complementing each other – both at the level of individual decisions and in advancing our combined abilities.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
