An (organisational) framework for ethical AI

One striking aspect of the new Ethical Framework for AI in Education is how little of it is actually about AI technology. The Framework has nine objectives and 33 criteria: 18 of these apply to the ‘pre-procurement’ stage, and another five to ‘monitoring and evaluation’.

That’s a refreshing change from the usual technology-led discussions in this space: here it’s almost all about the organisation within which the AI will work. Do we understand our goal in choosing to use AI? Is there a sound educational basis for that? What changes will this involve for processes and skills? Do we understand the risks? How can we detect problems and change course if it doesn’t work out? And, equally important, does our supplier understand what we are trying to achieve, and commit to supporting our choice of goals and assessment of risks?

Even the seven ‘implementation’ criteria are about process: how can AI be used in assessment to demonstrate skills and support well-being; how can we create safe spaces outside continuous assessment; how can AI help us avoid unfavourable outcomes for individuals; how will we help all stakeholders (students and staff) work effectively and ethically with AI; and how will we manage the changes that introducing it should bring?

With this comprehensive understanding of the context we want AI to support and enhance, the actual technology choice should be much simpler. Some technologies (maybe even some applications) will be clearly unsuitable; others will be a good, or even a perfect, fit. Best of all, we’ll be able to provide the most important explanation for trustworthy AI: why we chose to use it.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
