Heard in a recent AI conversation: “I’m worried about black boxes”. But observation suggests that’s not a hard and fast rule: we’re often entirely happy to stake our lives, and those of others, on systems we don’t understand; and we may worry even about those whose workings are fully public. So what’s going on?
Outside our houses, motor cars are probably the most dangerous thing that most of us interact with on a regular basis. But, unless you’re a specialist engineer, how a modern car works is almost certainly completely opaque. Indeed, the tone of articles when security vulnerabilities are discovered suggests we’d like it to stay that way: too much visibility inside the box may be more alarming than too little.
We may be reassured by the presence of expert examiners, with legal powers. Any car more than three years old shouldn’t be on the UK’s public roads if it hasn’t had an annual inspection. Systemic interference with that process – even on an issue not directly related to safety – did cause major public concern. But even those inspectors don’t routinely carry out “white box” inspections on our behalf: much of what they do is still limited to examining inputs and outputs, not how those are linked inside the system.
And full “white box” transparency probably isn’t satisfactory, either. It may even create what Edwards and Veale refer to as the “transparency fallacy”: overloading individuals with information without giving them any meaningful ability to act on it. The notorious trolley problem suggests that even if we could be told, in advance, exactly how a self-driving car would respond in every possible scenario, even non-engineers would want to know why those particular trade-offs and choices were made – questions like “what’s your business model?”, “what training data, limitations and monitoring have you designed for?” (a recent ACM article pointed out that training on European and American roads may be poor preparation for those in the rest of the world), even “why replace humans at all?”.
So it seems that the transparency, or otherwise, of the box may be less important than the transparency of the decision-making. Neither “black box” nor “white box” should be a way to escape accountability. Those who choose either model to develop or deploy should expect to have to explain and justify their choices.