
Is “AI bias” an excuse?

Something made me uneasy when a colleague recently referred to “AI bias”. I think that’s because the term doesn’t mention the actual source of such bias: humans! AI may amplify and expose that bias, but it can’t do so unless we give it the seed. That’s rarely deliberate: we may simply treat the bias as a result of “how the world is”. But maybe we should be using “AI bias” less as an unconscious excuse and more as a sign that something about that world is wrong, and needs fixing?

Some alternative terms I find useful when thinking about that:

  • Learned bias: even if we could provide AI with a perfect view of the human world, that world contains biases. And the way we choose what view to give the AI may introduce more. A good example is the “gendered pronoun” issue: an AI given a corpus of human text to read will encounter the phrases “she is a CEO”/“he is a nurse” less often than their gender-flipped counterparts. So when translating from one of the many languages that don’t have gendered pronouns – such as Turkish, Finnish or Persian – an AI is likely to pick the more common version (the first sketch after this list illustrates the mechanism). Heroic efforts are being made to detect and fix this issue in translation sources and algorithms, but it would be good to work on the underlying issue, too.
  • Blind-spot bias: AI feeds on the data we give it, but what about the data we can’t give it, because we don’t have it? Are the white spaces on our maps of city air quality there because there is no pollution, or because fewer smartphone owners live there to report it? We need to learn from Abraham Wald’s “returning bombers” paradox – he argued that armour belonged where returning aircraft showed no damage, because planes hit in those places never made it back – and investigate our blind-spots at least as intensely as the data-rich areas. The blind-spot may, itself, be telling us something important (the second sketch after this list shows one way to keep “no data” distinct from “no pollution”).
  • Design bias: sometimes bias doesn’t come from the data; it comes from oversights or assumptions made during the design of the AI or its surrounding processes. We need to try to detect and question these. Do all students actually have a smartphone? Are skin-tones generally lighter than the background? As one of the roughly 1 in 20 people who don’t have “normal” colour vision, I find that particular assumption easy to spot: those who do can try to keep us in mind, but making your design team as diverse as possible reduces the amount of “others’ shoes” effort that’s needed.
  • Incentive bias: it’s all too easy for a “goal” to slip into a “bias”. Social media analysts noticed long ago that the goal of “increase attention” is closely linked to a bias towards whatever will “increase outrage”, so algorithms that curate for the former are likely to promote extreme expressions and views (the third sketch after this list shows the effect in miniature). It’s not yet clear whether a different definition of “attention” can fix this problem, or whether the AI will always (re-)discover the human weakness for strong emotions. Perhaps in these stressful times there might be a market for a social media platform that delivered attention through calming, mindful moments? I’d certainly be willing to give it a try.
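
To make the learned-bias mechanism concrete, here is a minimal Python sketch. The corpus, counts and occupations are all invented for illustration, and real translation models are far more complex, but the frequency-following logic is the same in spirit:

```python
from collections import Counter

# Toy stand-in for the web-scale text a translation model learns from.
# The sentences and their frequencies are invented for illustration only.
corpus = (
    ["he is a CEO"] * 90 + ["she is a CEO"] * 10 +
    ["she is a nurse"] * 85 + ["he is a nurse"] * 15
)

def pronoun_counts(occupation):
    """Count how often each gendered pronoun co-occurs with an occupation."""
    counts = Counter()
    for sentence in corpus:
        if occupation in sentence:
            counts[sentence.split()[0]] += 1
    return counts

def translate_genderless_pronoun(occupation):
    """Mimic a frequency-driven translator: when the source language offers
    no gender cue (e.g. Turkish 'o'), pick whichever pronoun the training
    text pairs with this occupation most often."""
    return pronoun_counts(occupation).most_common(1)[0][0]

for occupation in ["CEO", "nurse"]:
    print(occupation, "->", translate_genderless_pronoun(occupation))
# Prints "CEO -> he" and "nurse -> she": the skew in the corpus becomes
# the translator's default choice.
```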
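
For the blind-spot problem, here is a sketch of a hypothetical crowd-sourced air-quality grid (the cell names and numbers are made up) showing why “no readings” has to stay distinct from “a reading of zero”:

```python
import math

# Hypothetical crowd-sourced air-quality grid: each cell holds the readings
# reported by smartphone users there. An empty list is NOT a zero reading.
grid = {
    "city-centre": [42.0, 38.5, 51.2],
    "suburb": [12.3, 9.8],
    "under-served-district": [],  # no reports: a blind-spot, not clean air
}

def cell_summary(readings):
    """Average the readings, but surface 'no data' as its own answer rather
    than silently treating an empty cell as zero pollution."""
    if not readings:
        return float("nan")  # NaN marks a blind-spot to investigate
    return sum(readings) / len(readings)

for cell, readings in grid.items():
    average = cell_summary(readings)
    if math.isnan(average):
        print(f"{cell}: NO DATA - ask why nobody is reporting here")
    else:
        print(f"{cell}: mean pollution {average:.1f}")
```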
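
And for incentive bias, a toy feed ranker (the posts and their engagement/outrage scores are invented) shows how optimising purely for attention floats the outrage-heavy item to the top, even though outrage was never an explicit objective:

```python
# Invented posts: each has a predicted-engagement score and an outrage score.
posts = [
    {"title": "calm gardening tips", "engagement": 0.2, "outrage": 0.1},
    {"title": "furious political row", "engagement": 0.9, "outrage": 0.8},
    {"title": "local charity update", "engagement": 0.3, "outrage": 0.1},
]

# Curating for "increase attention" alone promotes the outrage-heavy post.
by_attention = sorted(posts, key=lambda p: p["engagement"], reverse=True)
print([p["title"] for p in by_attention])

# One hypothetical redefinition of the goal: penalise outrage directly.
# Whether any such tweak survives contact with real users is exactly the
# open question raised above.
by_adjusted = sorted(posts, key=lambda p: p["engagement"] - p["outrage"],
                     reverse=True)
print([p["title"] for p in by_adjusted])
```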

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
