Care with “Ethics”

I was invited to be a “catalyst” or “provocateur” for a discussion on Data Ethics, hosted by the Institute for the Ethics of AI in Education. Here goes…


This has definitely been my “summer of Ethics”: I’ve read, listened, discussed and learned a lot. Mostly good, but here are four tendencies that concern me.

Don’t ignore the law. Not just (or even mostly) because it is the law, and already restricts many of the activities we criticise as “unethical”. But because it has lots of tools to help build ethical systems. The GDPR concepts of purpose, minimisation, risk (to individuals), transparency and accountability – before, during and after operation – are a really useful framework, even if you don’t believe you are processing personal data. Add in the UN Universal Declaration of Human Rights on the purpose of education – “development of the human personality … strengthening respect for human rights and fundamental freedoms … promote understanding, tolerance and friendship among all nations, racial or religious groups” – and you’ve already covered the content of many “ethics” codes.

Don’t duck your responsibilities. We tend to spend a lot of time discussing recompense, rights of correction and review, personal control, sometimes even “consent” (though rarely in a form the law would recognise). Those are all good, but using them should be the exception, not the norm. In education, we should use the “right of (retrospective) explanation” to help students and teachers beat the prediction, not to submit to it. Rather than relying on disempowered individuals to make hard choices or call out errors, those who have power should focus on not creating problems in the first place (GDPR calls this “accountability”). If it takes a hundred thousand distressed A-Level candidates to discover you chose the wrong definition of fairness, then maybe your process is flawed.

Don’t set the objective of “adopting AI”. Look at the problems and missed opportunities you face, discuss with students, teachers and other stakeholders what their priorities are, how they might be addressed, and what trade-offs are involved. By the time you get to the question “would AI help?” you’ll already understand the problem, the challenges and the opportunities a lot better. Close scrutiny of AI options may even identify sources of bias and unfairness in your existing systems. And you stand a much better chance of the community feeling ownership of the solution – if you get it right, how you address problems should be part of why people want to study with you.

Finally. Don’t presume national. The law is national, or even supra-national. But if you think of ethics as the process by which a community decides which lawful things not to do, then it’s less obvious that ethics should be national. The creation of an institute for “Ethical AI in Education” implies that “Ethical AI in Business” might be different. I think that’s right, which means we need to look closely when sectors interact: whose ethics are we (implicitly) adopting? And maybe geography matters too. Certainly the three education systems I’ve experienced – in Scotland, England and Wales – have very different historical understandings of what education is for. Those, and perhaps other regional and cultural differences, might well reflect into different ethical priorities and choices. Encoding the “ethics of AI” in a single set of rules seems likely to hide those legitimate differences: we also need to consider which are negotiable and within what boundaries, how we negotiate, and how we document that negotiation.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
