Categories
Articles

AI, Consent and the Social Contract

“Consent” is a word with many meanings. In data protection it’s something like “a signal that an individual agrees to data being used”. But in political theory “consent to be governed” is something very different. A panel at the PrivSec Global conference suggested that the latter – also referred to as the “social contract” – […]

Ethics + Consultation + Regulation: a basis for trust

A fascinating discussion at today’s QMUL/SCL/WorldBank event on AI Ethics and Regulations on how we should develop such ethics and regulations. There was general agreement that an ethical approach is essential if any new technology is to be trusted; also, probably, that researchers and developers should lead this through professionalising their practice. First steps are […]

Chatbots and Voicebots: legal similarities and differences

The EDPB’s new Guidance on Data Protection issues around Virtual Voice Assistants (Siri, Alexa and friends) makes interesting reading, though – as I predicted a while ago for cookies – they get themselves into legal tangles by assuming “If I need consent for X, might as well get it for Y”. We’ve been focusing more […]

Hints at ICO approach to AI

It’s interesting to see the (UK) ICO’s response to the (EU) consultation on an AI Act. The EU proposal won’t directly affect us, post-Brexit, but it seems reasonable to assume that where the ICO “supports the proposal”, we’ll see pretty similar policies here. Three of those seem directly relevant to education: That remote biometric identification […]

Layers of Trust in AI

This morning’s Westminster Forum event on the Future of Artificial Intelligence provided an interesting angle on “Trust in AI”. All speakers agreed that such trust is essential if AI is to achieve acceptance, and that (self-)regulatory frameworks can help to support it. However, AI doesn’t stand alone: it depends on technical and organisational foundations. And […]

Black boxes on wheels

Heard in a recent AI conversation: “I’m worried about black boxes”. But observation suggests that’s not a hard and fast rule: we’re often entirely happy to stake our lives, and those of others, on systems we don’t understand; and we may worry even about those whose workings are fully public. So what’s going on? Outside […]

That “AI” metaphor…

I’d been musing on a post on how “Artificial Intelligence” can be an unhelpful metaphor. But the European Parliament’s ThinkTank has written a far better one, so read theirs…

Algorithms: Explanations, Blame and Trust

“Algorithms” haven’t had the best press recently. So it’s been fascinating to hear from the ReEnTrust project, which actually started back in 2018, on Rebuilding and Enabling Trust in Algorithms. Their recent presentations have looked at explanations, but not (mostly) the mathematical ones that are often the focus. Rather than trying to reverse engineer a […]

Bias Bounties

So many “AI ethics frameworks” are crossing my browser nowadays that I’m only really keeping an eye out for things I’ve not seen before. The Government’s new “Ethics, Transparency and Accountability Framework for Automated Decision-Making” has one of those: actively seeking out ways that an AI decision-making system can go wrong. The terminology makes […]

Draft AI Regulation: thinking about risks

The European Commission has just published its draft Regulation on Artificial Intelligence (AI). While there’s no obligation for UK law to follow suit, the Regulation provides a helpful guide to risk from different applications of AI, and the sort of controls that might be required. What “AI” is covered? According to Article 3(1) [with sub-clauses […]