Category: Articles

AI, Consent and the Social Contract

“Consent” is a word with many meanings. In data protection it’s something like “a signal that an individual agrees to data being used”. But in political theory “consent to be governed” is something very different. A panel at the PrivSec Global conference suggested that the latter – also referred to as the “social contract” – […]

Category: Articles

Ethics + Consultation + Regulation: a basis for trust

A fascinating discussion at today’s QMUL/SCL/WorldBank event on AI Ethics and Regulations about how we should develop such ethics and regulations. There was general agreement that an ethical approach is essential if any new technology is to be trusted; also, probably, that researchers and developers should lead this through professionalising their practice. First steps are […]

Category: Articles

Assessment – many ways to do it

Jisc’s 2020 Future of Assessment report identifies five desirable features that assessors should design their assessments to deliver: authentic, accessible, appropriately automated, continuous and secure. Those can sometimes seem to conflict: for example, if you decide that “secure” assessment requires the student to be online throughout their exam, then you have an “accessibility” problem for […]

Category: Articles

Layers of Trust in AI

This morning’s Westminster Forum event on the Future of Artificial Intelligence provided an interesting angle on “Trust in AI”. All speakers agreed that such trust is essential if AI is to achieve acceptance, and that (self-)regulatory frameworks can help to support it. However, AI doesn’t stand alone: it depends on technical and organisational foundations. And […]

Category: Articles

That “AI” metaphor…

I’d been musing on a post on how “Artificial Intelligence” can be an unhelpful metaphor. But the European Parliament’s ThinkTank has written a far better one, so read theirs…

Category: Articles

Bias Bounties

So many “AI ethics frameworks” are crossing my browser nowadays that I’m only really keeping an eye out for things that I’ve not seen before. The Government’s new “Ethics, Transparency and Accountability Framework for Automated Decision-Making” has one of those: actively seeking out ways that an AI decision-making system can go wrong. The terminology makes […]

Category: Articles

Draft AI Regulation: thinking about risks

The European Commission has just published its draft Regulation on Artificial Intelligence (AI). While there’s no obligation for UK law to follow suit, the Regulation provides a helpful guide to risk from different applications of AI, and the sort of controls that might be required. What “AI” is covered? According to Article 3(1) [with sub-clauses […]

Category: Articles

An (organisational) framework for ethical AI

One striking aspect of the new Ethical Framework for AI in Education is how little of it is actually about AI technology. The Framework has nine objectives and 33 criteria: 18 of these apply to the ‘pre-procurement’ stage, and another five to ‘monitoring and evaluation’. That’s a refreshing change from the usual technology-led discussions in […]

Category: Presentations

Towards Ethical AI

My Digifest talk yesterday developed a couple of ideas on how we might move Towards Ethical AI, at least as that is defined by the EU High-Level Experts Group. First is that three of the HLEG’s four Principles, and at least five of their seven Requirements, look strikingly similar to the requirements when processing personal […]

Category: Articles

Where is “AI ethics”?

One of the trickiest questions I’m being asked at the moment is about “the ethics of Artificial Intelligence”. Not, I think, because it is necessarily a hard question, but because it’s so ill-defined. Indeed, a couple of discussions at Digifest yesterday made me wonder whether it’s simply the wrong question to start with. First, […]