
Algorithms: Explanations, Blame and Trust

“Algorithms” haven’t had the best press recently. So it’s been fascinating to hear from the ReEnTrust project, which actually started back in 2018, on Rebuilding and Enabling Trust in Algorithms. Their recent presentations have looked at explanations, but (mostly) not the mathematical ones that are often the focus. Rather than trying to reverse-engineer a neural network, they have been exploring the benefits of clear and coherent messaging about why a task was chosen for automation, and what the business models and data flows are.

That chimes with an idea I’ve had for a while about “shared interests”. If the organisation using the algorithm shares my discomfort (or worse) when something goes wrong, then I’m much happier to rely on its judgement. If the relationship is adversarial, where the organisation benefits from things that make me uncomfortable, then I’m much more likely to demand detailed explanations, or simply use opt-outs and technology to obstruct data collection or reduce data quality. Sometimes that undoubtedly makes me a free-rider – receiving benefits of data processing without contributing to it – but that’s the fault of the organisation that failed to explain how its limits of acceptability aligned with mine. If you want me to be altruistic, you have to continually earn it.

And that idea of alignment leads on to another idea about how we relate to our own algorithms. If we want others to behave as if we have made a good choice, then we must behave that way ourselves. And that applies even when things go wrong. If our algorithm responds badly to unforeseen circumstances or exposes unpalatable facts, then we, who chose it, must own its behaviour and accept the blame.

Or maybe we can do better? Many years ago, when I started working in what’s now called “cyber-security”, it was really hard to get organisations to talk about their incidents. It was assumed that security should be perfect and that any breach was evidence of failure. The first breach notification laws were explicitly intended to “name-and-shame”. Now we’re a bit more mature: recognising that occasional breaches will happen, and that what really matters is rapid detection and effective response. Claims that an organisation or product is unbreachable now cause me to lose trust: even if they have been lucky so far, it suggests that they will be unprepared when something does go wrong. What really builds trust in an organisation’s cyber-security is clear public explanations of what went wrong, what has been done to stop it happening again, and recognition that they aren’t the only (or even main) victim. That might be an interesting model for those working with algorithms, too.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
