Categories: Articles

What Happens in VR…?

A colleague spotted an article suggesting, among other things, that Virtual Reality could provide a safe space for students to practise their soft skills. This can, of course, be done through classroom roleplay, but the possibility of making mistakes that fellow students will remember could well increase stress. This certainly chimes with feedback I received […]

Categories: Articles

Srry, you woke me…

Recently I was in a video-conference where Apple’s “smart” assistant kept popping up on the presenter’s shared screen. Another delegate realised this happened whenever the word “theory” was spoken. It’s close… These events – which I refer to as “false-wakes” – are a privacy risk: maybe small, but that depends very much on the nature of […]

Categories: Publications

A Pathway Towards AI Ethics

We can probably agree that “Ethical Artificial Intelligence” is a desirable goal. But getting there can involve daunting leaps over unfamiliar terrain. What do principles like “beneficence” and “non-maleficence” mean in practice? Indeed, what is, and is not, AI? Working with the British and Irish Law, Education and Technology Association (BILETA), Jisc’s National Centre for […]

Categories: Articles

Is “AI bias” an excuse?

Something made me uneasy when a colleague recently referred to “AI bias”. I think that’s because it doesn’t mention the actual source of such bias: humans! AI may expand and expose that bias, but it can’t do that unless we give it the seed. That’s rarely deliberate: we might treat it as a result of […]

Categories: Articles

AI, Consent and the Social Contract

“Consent” is a word with many meanings. In data protection it’s something like “a signal that an individual agrees to data being used”. But in political theory “consent to be governed” is something very different. A panel at the PrivSec Global conference suggested that the latter – also referred to as the “social contract” – […]

Categories: Articles

Ethics + Consultation + Regulation: a basis for trust

A fascinating discussion at today’s QMUL/SCL/WorldBank event on AI Ethics and Regulations considered how we should develop such ethics and regulations. There was general agreement that an ethical approach is essential if any new technology is to be trusted; also, probably, that researchers and developers should lead this through professionalising their practice. First steps are […]

Categories: Articles

Chatbots and Voicebots: legal similarities and differences

The EDPB’s new Guidance on Data Protection issues around Virtual Voice Assistants (Siri, Alexa and friends) makes interesting reading, though – as I predicted a while ago for cookies – they get themselves into legal tangles by assuming “If I need consent for X, might as well get it for Y”. We’ve been focusing more […]

Categories: Articles

Hints at ICO approach to AI

It’s interesting to see the (UK) ICO’s response to the (EU) consultation on an AI Act. The EU proposal won’t directly affect us, post-Brexit, but it seems reasonable to assume that where the ICO “supports the proposal”, we’ll see pretty similar policies here. Three of those seem directly relevant to education: That remote biometric identification […]

Categories: Articles

Layers of Trust in AI

This morning’s Westminster Forum event on the Future of Artificial Intelligence provided an interesting angle on “Trust in AI”. All speakers agreed that such trust is essential if AI is to achieve acceptance, and that (self-)regulatory frameworks can help to support it. However, AI doesn’t stand alone: it depends on technical and organisational foundations. And […]

Categories: Articles

Black boxes on wheels

Heard in a recent AI conversation: “I’m worried about black boxes”. But observation suggests that’s not a hard and fast rule: we’re often entirely happy to stake our lives, and those of others, on systems we don’t understand; and we may worry even about those whose workings are fully public. So what’s going on? Outside […]