AI: thinking about definitions…

To ensure a lively discussion at a recent round-table on AI Ethics, participants were asked, provocatively, “Was the A Level algorithm fair?”. OK, I can be provoked…

It depends on what you mean by “fair”…

As has been widely discussed, the main objective set for those who designed the algorithm seems to have been to reproduce the pattern of results that each school obtained in past years. In other words, to be “fair” to previous years’ students, who can’t now be compared to a 2020 cohort whose different form of assessment might have resulted in a different pattern of marks.

What does not seem to have been a priority is “fairness” to this year’s students. They were mathematically unable to score better than their predecessors, even if their work might have indicated that they should. The range and distribution of marks within each school had to be the same as before, even if the level of achievement was different.
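The constraint described above can be illustrated with a small sketch. This is not the actual Ofqual algorithm, just a hypothetical simplification of the core idea: students are ranked within their school, then assigned the school's historical grade pattern by rank, so no cohort can collectively outperform its predecessors.

```python
# Hypothetical sketch (NOT the real algorithm): assign a school's
# historical grade pattern to a new cohort purely by rank order.

def match_historical_distribution(ranked_students, historical_grades):
    """Give each student the grade at their rank position in the
    school's historical grade list, best grade to best rank."""
    # Letter grades sort best-first alphabetically (A before B before C).
    pattern = sorted(historical_grades)
    n = len(ranked_students)
    graded = {}
    for rank, student in enumerate(ranked_students):
        # Scale this cohort's rank into the historical cohort's size.
        idx = rank * len(pattern) // n
        graded[student] = pattern[idx]
    return graded

# A school whose past cohort scored A, B, B, C awards exactly that
# pattern again, however strong every individual in 2020 might be.
students = ["s1", "s2", "s3", "s4"]   # ranked best to worst
history = ["A", "B", "B", "C"]
print(match_historical_distribution(students, history))
# → {'s1': 'A', 's2': 'B', 's3': 'B', 's4': 'C'}
```

Even if all four students produced A-grade work, only one of them can receive an A under this scheme: the historical distribution, not individual achievement, fixes the outcome.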

So, clearly, the definition of “fair” is something we need to discuss and decide on, long before we choose an algorithm, training data, etc.

A related question is whether technology is the “best” way to achieve a purpose. Here, again, thinking about what we mean by “best” can be very informative.

For example, are we trying to do something humans could do, but not at that scale? Or something that humans could do, but at greater cost? Or something that humans can’t do? Or something that humans could do, but not as consistently? As with “fair”, all may be valid choices, but they are likely to have different outcomes. Being clear about that from the start of a development should greatly reduce the risk of misunderstandings, mistakes and miscommunications later on: during development, deployment, operation, and review.

You may have spotted that I said “not as consistently”, rather than “not as fairly”. Creating, and sustaining, an AI that is more “fair” than the context within which it operates is really hard. There are just too many ways that existing unfairness can creep in: even the way we pose the question to be answered may contain implicit assumptions; training sets that do not contain a balanced representation of the population are a well-known issue; and what about unfairness in people’s ability to access the system, or to act on its recommendations? That definitely doesn’t mean we should give up on trying to make our systems “fair”: doing so may actually be one of the best ways to highlight those real-world unfairnesses that society needs to address.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
