Change: A Feature, not a Bug

Reading the Machine Learning literature, you could get the impression that the aim is to develop a perfect model of the real world. That may be true when you are trying to distinguish between dogs and muffins, but for a lot of applications in education, I suspect that a model that achieved perfection would be a sign of failure.

That’s because our models are often part of a process designed to change the real world. We use analytics to understand how to teach and learn better: students should be enabled and encouraged to beat the model. Even applications like the Graide feedback tool should, over time, result in students needing different feedback, by helping tutors explain hard concepts better the first time around. At the simplest level, that means we should update our models frequently and limit the age of the data we include in them, since that data will, by design, go out of date.
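As a rough illustration of what "limiting the age of data" might look like, a retraining pipeline can simply refuse to learn from records older than some window. This is a minimal sketch, not anything prescribed in the post: the pandas dataframe, the recorded_at column, and the one-year window are all assumptions to make the idea concrete.

```python
from datetime import datetime, timedelta

import pandas as pd

# Assumed retention window; the right value depends on how fast
# teaching practice and student behaviour actually change.
MAX_AGE = timedelta(days=365)

def recent_training_data(df: pd.DataFrame, now: datetime | None = None) -> pd.DataFrame:
    """Drop records older than MAX_AGE so that, by design,
    out-of-date behaviour cannot dominate the retrained model."""
    cutoff = (now or datetime.now()) - MAX_AGE
    return df[df["recorded_at"] >= cutoff]
```

Run on a schedule (say, each term), this keeps the model tracking the world it is meant to change, rather than the world as it was.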

But the safety-critical world may provide higher-level guidance. Here change is considered inevitable, and something that systems must plan for. An intriguing example from a paper on using safety-critical thinking to inform AI design is that operators will always find ways to use systems that differ from what their designers intended. That’s not a bug; it’s a feature. The operators’ approach may well work better in the real world; the real world may itself have changed since the design. Systems (technical, organizational, and human) within which Artificial Intelligence sits must detect those changes, ensure that they cannot produce unsafe outcomes, and work out what can be learned from them. If a change makes the system better, we should ensure it is widely adopted; if it highlights a problem with the system (for example, that use as designed is inefficient or inconvenient), then the system needs to be improved.
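The post doesn’t say how a system should detect such changes, but one common, simple approach is to monitor a model’s inputs or scores for distribution drift. The sketch below uses a population stability index; the function, binning, and the 0.2 alert threshold are conventional rules of thumb, not anything taken from the article.

```python
import numpy as np

def population_stability_index(expected: np.ndarray, observed: np.ndarray,
                               bins: int = 10) -> float:
    """Compare a live sample against a reference sample (e.g. the data the
    model was trained on); larger values mean a bigger distribution shift."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_frac = np.histogram(observed, bins=edges)[0] / len(observed)
    # Clip so empty bins don't produce log(0) or division by zero.
    exp_frac = np.clip(exp_frac, 1e-6, None)
    obs_frac = np.clip(obs_frac, 1e-6, None)
    return float(np.sum((obs_frac - exp_frac) * np.log(obs_frac / exp_frac)))

# Rule of thumb: a PSI above ~0.2 is a signal to investigate, not to blame
# the operators: the world, or how people use the system, may have changed.
```

Crucially, in the spirit of the argument above, a drift alarm is a prompt to investigate and learn, not evidence that users are doing something wrong.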

Above all, the reality of change must be part of the culture around AI. A human finding a different way to use it must not be considered a “problem” or “at fault”. They are using human creativity to identify either an opportunity or a risk: both should be welcomed, even encouraged. A system that doesn’t cope with change is a problem.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
