
Ethical use of AI in HE

Last week I was invited to a fascinating discussion on ethical use of artificial intelligence in higher education, hosted by Blackboard. Obviously that’s a huge topic, so I’ve been trying to come up with a way to divide it into smaller ones without too many overlaps. So far, it seems a division into three may be possible:

  • Ethical purposes. Here there seems to be the best chance of saying that some purposes are ethical and some are not. The much-cited example of a college using data to discourage students who were likely to reduce its position in ranking tables would seem unethical no matter how the decisions were reached. But there are also some purposes where we seem to accept humans doing them yet feel uneasy about them being carried out solely by algorithms: for example when drones are used for targeted killing, or algorithms are used to recommend whether or not to send a criminal to jail. So “Why are we doing this?” may divide into three categories: OK, not OK, and decision-by-human.
  • Ethical implementations. It’s straightforward to think of ways of using the results of AI unethically: an automated email saying “YOU’RE GOING TO FAIL!!!” is not going to pass many ethics assessments. However, here too there are situations where a pure-technology solution might actually tick more ethical boxes than a human-mediated one: a student group in the Netherlands concluded that they would prefer gentle reminders – “do a bit more work” – to come solely from a computer and, provided they took notice, never to come to the attention of their tutors at all. We seem to have varying instincts about computers processing our data: in some instances it’s less privacy-invasive than a human doing the same, but where computerisation allows processing that would have been impossible for humans, there is a risk that it will be perceived as more privacy-invasive, not less. “How are we doing this?” is definitely worth reviewing from an ethical perspective.
  • Ethical use of data. Ethical questions arise both from which data we use to reach decisions and from the characteristics of the algorithms used to make them. However, both questions depend heavily on the purpose and the situation. Using location data to help a student find the right lecture theatre or library shelf seems to raise few ethical questions: using the same data to judge whether they are in an appropriate place for Friday-evening study would be much more troubling. The way the data were obtained also needs to be considered: a student volunteering the information that they are in the library may be ethically different to the same information being extracted from CCTV or wifi logs. I’ll cover algorithms in the next post, but the situation will also affect the desired characteristics of the algorithm: do we expect to know how it works in advance, or to be able to explain it retrospectively, or do we not care so long as it gives the right result? So “What are we using for this?” seems to raise its own group of ethical issues.

Reviewing those three categories, it strikes me that they are somewhat similar to European law’s requirements that processing of personal data be “legitimate”, “fair” and “necessary”. That may be a good thing or, given the difficulty that legislators and regulators have in keeping those concepts separate, maybe not!

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
