Categories: Publications

A Pathway Towards AI Ethics

We can probably agree that “Ethical Artificial Intelligence” is a desirable goal. But getting there can involve daunting leaps over unfamiliar terrain. What do principles like “beneficence” and “non-maleficence” mean in practice? Indeed, what is, and is not, AI? Working with the British and Irish Law, Education and Technology Association (BILETA), Jisc’s National Centre for […]

Categories: Articles

Is “AI bias” an excuse?

Something made me uneasy when a colleague recently referred to “AI bias”. I think that’s because it doesn’t mention the actual source of such bias: humans! AI may expand and expose that bias, but it can’t do that unless we give it the seed. That’s rarely deliberate: we might treat it as a result of […]

Categories: Articles

AI, Consent and the Social Contract

“Consent” is a word with many meanings. In data protection it’s something like “a signal that an individual agrees to data being used”. But in political theory “consent to be governed” is something very different. A panel at the PrivSec Global conference suggested that the latter – also referred to as the “social contract” – […]

Categories: Articles

Ethics + Consultation + Regulation: a basis for trust

A fascinating discussion at today’s QMUL/SCL/World Bank event on AI Ethics and Regulations about how we should develop such ethics and regulations. There was general agreement that an ethical approach is essential if any new technology is to be trusted; also, probably, that researchers and developers should lead this by professionalising their practice. First steps are […]

Categories: Articles

Chatbots and Voicebots: legal similarities and differences

The EDPB’s new Guidance on Data Protection issues around Virtual Voice Assistants (Siri, Alexa and friends) makes interesting reading, though – as I predicted a while ago for cookies – they get themselves into legal tangles by assuming “If I need consent for X, might as well get it for Y”. We’ve been focusing more […]

Categories: Articles

Hints at ICO approach to AI

It’s interesting to see the (UK) ICO’s response to the (EU) consultation on an AI Act. The EU proposal won’t directly affect us, post-Brexit, but it seems reasonable to assume that where the ICO “supports the proposal”, we’ll see pretty similar policies here. Three of those seem directly relevant to education: That remote biometric identification […]

Categories: Articles

Layers of Trust in AI

This morning’s Westminster Forum event on the Future of Artificial Intelligence provided an interesting angle on “Trust in AI”. All speakers agreed that such trust is essential if AI is to achieve acceptance, and that (self-)regulatory frameworks can help to support it. However, AI doesn’t stand alone: it depends on technical and organisational foundations. And […]

Categories: Articles

Black boxes on wheels

Heard in a recent AI conversation: “I’m worried about black boxes”. But observation suggests that’s not a hard and fast rule: we’re often entirely happy to stake our lives, and those of others, on systems we don’t understand; and we may worry even about those whose workings are fully public. So what’s going on? Outside […]

Categories: Articles

That “AI” metaphor…

I’d been musing on a post on how “Artificial Intelligence” can be an unhelpful metaphor. But the European Parliament’s ThinkTank has written a far better one, so read theirs…

Categories: Articles

Algorithms: Explanations, Blame and Trust

“Algorithms” haven’t had the best press recently. So it’s been fascinating to hear from the ReEnTrust project, which actually started back in 2018, on Rebuilding and Enabling Trust in Algorithms. Their recent presentations have looked at explanations, but not (mostly) the mathematical ones that are often the focus. Rather than trying to reverse engineer a […]