
Explaining AI algorithms

One of the concerns commonly raised about Artificial Intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week’s Ethical AI in HE meeting revealed several different options:

  • When we are making decisions such as awarding bursaries to students, regulators may well want to know in advance that those decisions will always be made fairly, based on the data available to them. This kind of ex ante explainability seems likely to be the most demanding, probably restricting the choice of algorithm to those that use known, human-meaningful parameters to convert inputs to outputs;
  • Conversely, for decisions such as which course to recommend to a student, the focus is likely to be explaining to the individual affected which characteristics led to that decision being reached. Here it may be possible to use more complex models, so long as it’s possible to perform some sort of retrospective sensitivity analysis (for example using the LIME approach; a sketch follows this list) to discover which characteristics of the particular individual carried most weight in the recommendation they were given;
  • A variant of the previous type occurs where a student’s future performance has been predicted and they, and their teachers, want to know how to improve it. This is likely to require a combination of information from the algorithm with human knowledge about the individual and their progress;
  • Finally there are algorithms – for example deciding which applicants are shown social media adverts – where the only test of the algorithm is whether it delivers the planned results, and we don’t care how it achieves that.
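
To illustrate the second kind of explanation, here is a minimal sketch of a LIME-style retrospective sensitivity analysis, using the open-source lime and scikit-learn Python packages. The dataset, feature names and model are hypothetical stand-ins rather than anything from the meeting or an actual Jisc system; the point is simply that a locally-fitted surrogate model can report which of an individual's characteristics carried most weight in one particular prediction.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)

# Hypothetical student features (illustrative names only)
feature_names = ["prior_attainment", "attendance_pct", "engagement_score"]
X = rng.uniform(0, 100, size=(500, 3))
# Hypothetical outcome: whether a course recommendation was accepted
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(0, 10, 500) > 50).astype(int)

# A "complex" model whose internal workings are not directly human-readable
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# LIME fits a simple surrogate model around one individual's record,
# so we can report which characteristics mattered most for that person
explainer = LimeTabularExplainer(
    X,
    feature_names=feature_names,
    class_names=["not recommended", "recommended"],
    mode="classification",
)
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=3)

# Each line is a characteristic of this individual and its local weight
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")

The output is exactly the kind of per-individual account described above: not how the whole model works, but which of this student's characteristics pushed the recommendation one way or the other.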

Explainability won’t be the only factor in our choice of algorithms: speed and accuracy are obvious others. But it may well carry some weight in deciding the most appropriate techniques to use in particular applications.

Finally it’s interesting to compare these requirements of the educational context with the “right to explanation” contained in the General Data Protection Regulation and discussed on page 14 of the Article 29 Working Party’s draft Guidance. It seems that education’s requirements for explainability may be significantly wider and more complex.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
