Working with non-human intelligence

Today’s expert panel on Data Ethics took a fascinating turn: considering what a healthy relationship between humans and AI would look like. Although we tend to discuss the characteristics and affordances of technology, proper use of technology depends on the human side of the partnership, too.

When choosing or using any tool that uses AI, we must remember that it is a tool. A complex one, perhaps, but not significantly more so than a motor car or a medium-sized organisation. To work safely and effectively with those, we need to understand their capacities – what they are and are not suitable for – and be able to detect when things are going wrong. At least that level of understanding should be possible with any AI: vendors who claim “black box” (or “magic”) status are damaging both their product and their customers. For some applications – for example, to understand what a student needs to do to improve their predicted performance – we may, of course, need products that let us understand more.

That understanding shouldn’t be limited to what information the AI takes as input; we also need to know what inferences it makes and how it uses them. Those may well be more sensitive for individuals than the raw data, and more damaging for groups and society. Even if an individual is happy to release their own data, we need to understand what impact that might allow the AI to have on others with similar characteristics or behaviour patterns. It occurs to me that this is yet another reason not to base such processing on consent: we can hardly expect individuals to consider the impact on others when making their own personal decisions about what to disclose. Privacy is a collective endeavour.

When building an understanding of AI, our human view of the world is unlikely to be a good starting point. Despite analogies such as “intelligence” or even near-human forms, we are dealing with computers. It occurs to me that science fiction is littered with puzzled robots: we need to remember those and expect our AIs to seem as weird to us as we seem to them. They don’t know what a student is, or an exam result. It’s all just numbers. They can – indeed probably should – let us know whether the situation in which we place them is one they were designed for. John Naughton calls for machines that know when they don’t know. But in education, maybe a machine that is certain should also raise human doubts: many of us in the working group had followed career paths that might well have been ruled out by a school AI.
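
To make the idea of a machine that knows when it doesn’t know a little more concrete, here is a minimal, purely illustrative sketch in Python: a wrapper that abstains and flags a case for human review whenever the underlying model’s confidence falls below a threshold. Everything in it (the class names, the scores() interface, the 0.8 threshold and the toy grade predictor) is a hypothetical assumption for illustration, not a description of any real product or of anything the panel endorsed.

```python
# Illustrative sketch only: a wrapper that abstains ("I don't know")
# when the underlying model's confidence is low, deferring to a human.
# All names, interfaces and the 0.8 threshold are hypothetical assumptions.

from dataclasses import dataclass
from typing import Optional


@dataclass
class Prediction:
    label: Optional[str]   # None means the machine declined to answer
    confidence: float
    deferred: bool         # True when a human should decide instead


class AbstainingModel:
    """Wraps any model exposing scores(x) -> dict mapping label -> probability."""

    def __init__(self, model, threshold: float = 0.8):
        self.model = model
        self.threshold = threshold  # below this confidence, defer to a human

    def predict_or_defer(self, x) -> Prediction:
        scores = self.model.scores(x)
        label, confidence = max(scores.items(), key=lambda kv: kv[1])
        if confidence < self.threshold:
            # The machine "knows it doesn't know": abstain and flag for review
            return Prediction(None, confidence, deferred=True)
        return Prediction(label, confidence, deferred=False)


# Toy usage with a stand-in model that is genuinely uncertain
class ToyGradePredictor:
    def scores(self, x):
        return {"pass": 0.55, "fail": 0.45}


result = AbstainingModel(ToyGradePredictor()).predict_or_defer({"coursework": 62})
print(result)  # Prediction(label=None, confidence=0.55, deferred=True)
```

In education, the second half of the point above would go further: even a high-confidence answer should be treated as a prompt for human judgement rather than a final decision.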

Achieving this seems to require improved awareness all round. If users of AI need more awareness of how far a system’s suggestions can be trusted in different circumstances, then vendors of such systems need to be aware of that need and provide documentation and tools to help. And perhaps those procuring AI systems need a deeper awareness of the questions to ask and the information to expect, since – as was pointed out – once you’ve paid for it, it’s hard not to use it.

Many thanks to all those involved – this “catalyst” was definitely changed by the experience.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
