
Where is “AI ethics”?

One of the trickiest questions I’m being asked at the moment is about “the ethics of Artificial Intelligence”. Not, I think, because it is necessarily a hard question, but because it’s so ill-defined. Indeed, a couple of discussions at Digifest yesterday made me wonder whether it’s simply the wrong question to start with.

First, on “chatbots”. These use AI – in the form of natural language processing – to provide an additional interface between students and digital data sources. Those may be static Frequently Asked Questions (“when is the library open?”), transactions (“renew my library book”) or complex queries across linked data sources (“where is my next lecture?”). Here the role of the AI is to work out what the student wants to do, translate that into a call to the relevant back-end function, and translate the result of that function back into natural language. In these sessions, ethics hardly featured: an interesting point was made that a chatbot should not replace skills – such as navigating and selecting from academic literature – that the student should be learning for themselves; and there was a question of whether the right answer to a student trying to work at 3am should actually be “get some sleep”.
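As a minimal sketch of that intent-to-function-and-back pattern, here is a toy chatbot in Python. Everything in it is hypothetical: the keyword matching stands in for a real natural-language model, and the back-end functions are mocked rather than calls to any real library or timetable system.

```python
# Toy sketch of the chatbot pattern described above: work out the
# student's intent, dispatch to a back-end function, and phrase the
# result as natural language. All names and data are invented.

from datetime import datetime

def classify_intent(utterance: str) -> str:
    """Stand-in for the NLP step: crude keyword matching, not a model."""
    text = utterance.lower()
    if "library" in text and "open" in text:
        return "library_hours"
    if "renew" in text:
        return "renew_book"
    if "lecture" in text:
        return "next_lecture"
    return "unknown"

# Mocked back-end functions the bot translates to and from.
def library_hours() -> str:
    return "09:00-21:00"

def renew_book() -> str:
    return "renewed until 14 April"

def next_lecture() -> str:
    return "Room B12 at 10:00"

HANDLERS = {
    "library_hours": lambda: f"The library is open {library_hours()} today.",
    "renew_book": lambda: f"Done - your book is {renew_book()}.",
    "next_lecture": lambda: f"Your next lecture is in {next_lecture()}.",
}

def reply(utterance: str) -> str:
    intent = classify_intent(utterance)
    handler = HANDLERS.get(intent)
    if handler is None:
        return "Sorry, I didn't understand that."
    # The 3am point from the session: the ethical choice sits in the
    # behaviour we design, not in the AI itself.
    if datetime.now().hour < 6:
        return handler() + " (It's late - maybe get some sleep?)"
    return handler()

if __name__ == "__main__":
    print(reply("When is the library open?"))
    print(reply("Please renew my library book"))
    print(reply("Where is my next lecture?"))
```

Note that the ethics-adjacent decision – whether to nudge a 3am user towards sleep – lives in an ordinary conditional, not in the “AI” part at all.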

Second, on the use of student data to provide support and guidance. Here the conversation was almost entirely about ethics: are our recommendations biased? when do predictions become pre-judgements? when do personalised nudges become unethical? if a student has chosen the wrong institution, is it ethical to try to keep them on our register, or should we help them find a better option?

What struck me is that none of these ethical questions changes significantly if the actions are done by humans rather than AI. Discrimination is unethical, no matter who/what does it. So maybe they aren’t about “ethical AI” at all, but “ethical behaviour”? It may be that some of the behaviours aren’t actually possible without the use of computers to crunch statistics, so here we’re looking at “AI-enabled ethical questions”. Conversely, if we make our AI explainable – which will almost always be a practical necessity in education, where we need to understand predictions if we are going to help students beat them – then AI may actually give us a better understanding of human bias: “AI-illuminated ethical questions”, perhaps.
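To make the “AI-illuminated” idea concrete, here is a hypothetical illustration: if the model is a simple, explainable one, its learned weights are directly readable, and a model trained on biased historical human decisions will expose that bias in its coefficients. The features, the simulated decisions, and the bias itself are all invented for the sketch.

```python
# Hypothetical sketch: an explainable model surfacing bias that was
# baked into past human decisions. All data here is simulated.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Invented features: prior attainment, engagement, and a demographic flag.
attainment = rng.normal(0, 1, n)
engagement = rng.normal(0, 1, n)
group = rng.integers(0, 2, n)

# Simulated *human* past decisions that unfairly penalise one group.
decision = (attainment + 0.5 * engagement - 0.8 * group
            + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([attainment, engagement, group])
model = LogisticRegression().fit(X, decision)

# A linear model's coefficients are directly inspectable: the large
# negative weight on `group` exposes the bias the model learned from
# the humans, rather than hiding it inside a black box.
for name, coef in zip(["attainment", "engagement", "group"], model.coef_[0]):
    print(f"{name:>10}: {coef:+.2f}")
```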

My talk (“Towards Ethical AI”) on Thursday will sketch a map containing three different kinds of purpose: those that are ethical no matter who/what does them; those that are unethical no matter who/what does them; and those where the human/computer choice actually makes an ethical difference. True “ethics of AI” may only arise in that last group, and it’s much the smallest.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
