Reading the Machine Learning literature, you could get the impression that the aim is to develop a perfect model of the real world. That may be true when you are trying to distinguish between dogs and muffins, but for a lot of applications in education, I suspect that a model that achieved perfection would be a sign of failure.
That’s because our models are often part of a process designed to change the real world. We use analytics to understand how to teach and learn better: students should be enabled and encouraged to beat the model. Even applications like the Graide feedback tool should, over time, result in students needing different feedback, by helping tutors explain hard concepts better the first time around. At the simplest level, that means we should update our models frequently and limit the age of the data we include, since that data will, by design, go out of date.
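The practice of limiting data age can be sketched very simply: before retraining, drop any records older than a chosen window. This is a minimal illustration, not a recipe from the original; the record structure, the one-year window, and the function name are all assumptions.

```python
from datetime import datetime, timedelta

def recent_records(records, now, max_age_days=365):
    """Keep only records newer than max_age_days, so retraining
    reflects current teaching practice rather than stale behaviour.
    (Illustrative helper; the window length is an assumption.)"""
    cutoff = now - timedelta(days=max_age_days)
    return [(ts, r) for ts, r in records if ts >= cutoff]

# Hypothetical training records: (timestamp, features) pairs.
records = [
    (datetime(2023, 1, 10), {"score": 0.7}),
    (datetime(2024, 5, 2), {"score": 0.9}),
    (datetime(2024, 11, 20), {"score": 0.8}),
]

fresh = recent_records(records, now=datetime(2025, 1, 1))
# Only the two records from the past year survive the cutoff.
```

The right window length is itself a judgement call: too short and the model is noisy, too long and it anchors students to how things used to be taught.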
But the safety-critical world may provide higher-level guidance. There, change is considered inevitable, and something that systems must plan for. An intriguing example from a paper on using safety-critical thinking to inform AI design is that operators will always find ways to use systems that differ from what their designers intended. That’s not a bug, it’s a feature. The operators’ approach may well work better in the real world, or the real world may itself have changed since the system was designed. The systems – technical, organizational, and human – within which Artificial Intelligence sits must detect those changes, ensure they cannot produce unsafe outcomes, and work out what can be learned from them. If a change makes the system better, we should ensure it is widely adopted; if it highlights a problem with the system (for example, that use as designed is inefficient or inconvenient), then the system needs to be improved.
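Detecting that operators are using a system differently need not be elaborate. One common approach (my sketch, not something prescribed by the source) is to compare a recent usage statistic against its historical baseline and flag a large shift; the function name, data, and two-standard-deviation threshold below are all illustrative assumptions.

```python
from statistics import mean, stdev

def usage_drift(historical, recent, threshold=2.0):
    """Flag when recent behaviour (e.g. the mean of some usage
    metric) has drifted from the historical baseline by more than
    `threshold` standard deviations. Purely illustrative."""
    baseline, spread = mean(historical), stdev(historical)
    if spread == 0:
        # Degenerate baseline: any change at all counts as drift.
        return mean(recent) != baseline
    z = abs(mean(recent) - baseline) / spread
    return z > threshold

# Hypothetical metric: fraction of sessions using a feature as designed.
historical = [0.50, 0.52, 0.48, 0.51, 0.49]
recent = [0.70, 0.72, 0.71]
drifted = usage_drift(historical, recent)
```

A flag like this is only the trigger: the human follow-up the text describes – asking whether the new pattern is an improvement to spread or a problem to fix – is the part that matters.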
Above all, the reality of change must be part of the culture around AI. A human who finds a different way to use a system must not be treated as a “problem” or held “at fault”. They are using human creativity to identify either an opportunity or a risk: both should be welcomed, even encouraged. A system that cannot cope with change is the real problem.