
Towards Ethical AI

My Digifest talk yesterday developed a couple of ideas on how we might move Towards Ethical AI, at least as that is defined by the EU High-Level Expert Group.

The first is that three of the HLEG’s four Principles, and at least five of their seven Requirements, look strikingly similar to the requirements for processing personal data under the GDPR. That shouldn’t be a surprise: GDPR has long been recognised as more than just a “privacy law”. But it does suggest that applying GDPR concepts and guidance – Accountability/Transparency, Purpose, Fairness/Accuracy/Impact, Minimisation/Security – even when we aren’t processing personal data may help us to be perceived as behaving ethically.

That leaves three areas:

  • Respect for Autonomy: which the HLEG explain as making sure that AI augments, complements, and empowers humans. In Q&A I borrowed from Priya Lakhani’s keynote to explain that as recognising that “Artificial Intelligence” and “Human Intelligence” aren’t, and shouldn’t try to be, the same. Autonomy, then, is about finding the proper roles for AI and HI, such that they can complement each other;
  • Diversity/non-Discrimination: in its impact on groups and society (GDPR should already have taken care of the impact on individuals) and
  • Societal/Environmental Wellbeing. As I realised earlier in the week, much of these two is simply “ethics”: it isn’t specific to AI. Discriminating against groups (or individuals) is likely to be unethical, whether it’s done by a human, an AI or, come to that, a Mechanical Turk.

So before getting into questions of “AI ethics” we should probably start by working out whether the actions we are considering should be done at all. Here I suggested another quick tool: four questions to help explore new ideas for both feasibility and stakeholder reaction.

Since I make no claim to be an ethicist, I think this is about as far as I can take my journey “towards” Ethical AI. There remain two important questions:

  • If we have concluded that a thing should be done, is there an ethical reason to distinguish between it being done by humans or being done by AI? Here there does seem to be a general feeling that high-stakes decisions should be taken by humans: perhaps with AI assistance, but not by machines alone. I was also reminded of an instance where students wanted gentle “you need to do more work” reminders to come from computers and not their tutors, concerned that if tutors knew how much prompting they had needed, they might consciously or sub-consciously reflect this in their marks; and
  • What do we do if two ethical principles conflict? It’s easy to imagine (see many of Isaac Asimov’s Laws of Robotics stories!) scenarios where Respect for Autonomy might conflict with Prevention of Harm. And I suspect the same applies to most, or all, of the other pairs.

For those I think you do need an ethicist. Or, at the very least, a representative and thoughtful group of stakeholders.

The good news is that I think we can wait for that. If either of those questions does arise, treating the answer as a “no” – at least for now – doesn’t seem to limit the potential for using AI in education very much. There are still lots of applications waiting to be discovered, developed or delivered.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
