AI in Education: is it different?

Reflecting on the scope chosen by Blackboard for our working group – “Ethical use of AI in Education” – it’s worth considering what, if anything, makes education different as a venue for artificial intelligence. Education is, I think, different from commercial businesses because our measure of success should be what pupils/students achieve. Educational institutions should have the same goal as those they teach, unlike commercial settings where success is often a zero-sum game. We should be using AI to achieve value for those who use our services, not from them. Similarly, we should be looking to AI as a way to help tutors do their jobs to the best of their ability. AI is good at large-scale and repetitive tasks – it doesn’t get tired, bored, or grumpy. Well-used AI should help both learners and teachers to concentrate on the things that humans do best.

Clearly there are also risks in using AI in education – there would be little for an ethics working group to discuss if there weren't! The technology could be deployed for inappropriate purposes or in ways that are unfair to students, tutors, or both. The current stress on using AI only to "prevent failure" feels uncomfortably close to these risks: if instead we can use AI to help all students and tutors improve, then they won't presume that any notification from the system is bad news. Getting this right is mostly about purposes and processes. However, there's also a risk of AI too closely mimicking human behaviour: poorly-chosen training sets can result in algorithms that reproduce existing human and systemic preconceptions; too great a reliance on student feedback could result in algorithms delivering what gives students an easy life, rather than what will help them achieve their potential. An AI that never produces unexpected results is probably worth close examination to see whether it has fallen into these traps.

Computers work best when presented with clear binary rules: this course of action is acceptable, that one isn't. However, the rules provided by the legal system rarely take that form. Laws are often vague about where lines are drawn, with legislators happy to leave to courts the question of how to apply them to particular situations. As Kroll et al. point out, when laws are implemented in AI systems, those decisions on interpretation will instead be made by programmers – something that we should probably be less comfortable about (p61). Conversely, laws may demand rules that are incomprehensible to an AI system: for example, European discrimination law prohibits an AI from setting different insurance premiums for men and women, even if that is what the input data demand. Finally, and particularly in education, we may well be asking AI systems to make decisions where society has not yet decided what actions are acceptable: how should we handle data from a student that tells us about their tutor or parent? When is it OK for charities to target donors based on their likely size of donation? When should a college recommend an easier course to a borderline student?

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
