
Swaddling AI

I’ve been reading a fascinating paper on “System Safety and Artificial Intelligence”, applying ways of thinking about safety-critical software to Artificial Intelligence (AI). What follows is very much my interpretation: I hope it’s accurate, but do read the paper as there’s lots more to think about. AI is a world of probabilities, statistics and data. That […]


Visualising the Draft EU AI Act

I’m hoping to use the EU’s draft AI Act as a way to think about how we can safely use Artificial Intelligence. The Commission’s draft sets a number of obligations on both providers and users of AI; formally these only apply when AI is used in “high-risk” contexts, but they seem like a useful “have […]


Voice Processing: opportunities and controls

We’ve been talking to computers for a surprisingly long time. Can you even remember when a phone menu first misunderstood your accent? Obviously there have been visible (and audible) advances in technology since then: voice assistants are increasingly embedded parts of our lives. A talk by Joseph Turow to the Privacy and Identity Lab (a […]


Automating Digital Infrastructures

Most of our digital infrastructures rely on automation to function smoothly. Cloud services adjust automatically to changes in demand; firewalls detect when networks are under attack and automatically try to pick out good traffic from bad. Automation adjusts faster and on a broader scale than humans. That has advantages: when Jisc’s CSIRT responded manually to […]


What Happens in VR…?

A colleague spotted an article suggesting, among other things, that Virtual Reality could provide a safe space for students to practise their soft skills. This can, of course, be done by classroom roleplay, but the possibility of making mistakes that fellow students will remember could well increase stress. This certainly chimes with feedback I received […]


Sorry, you woke me…

Recently I was in a video-conference where Apple’s “smart” assistant kept popping up on the presenter’s shared screen. Another delegate realised this happened whenever the word “theory” was spoken. It’s close… These events – which I refer to as “false-wakes” – are a privacy risk: maybe small, but that depends very much on the nature of […]


A Pathway Towards AI Ethics

We can probably agree that “Ethical Artificial Intelligence” is a desirable goal. But getting there can involve daunting leaps over unfamiliar terrain. What do principles like “beneficence” and “non-maleficence” mean in practice? Indeed, what is, and is not, AI? Working with the British and Irish Law, Education and Technology Association (BILETA), Jisc’s National Centre for […]


Is “AI bias” an excuse?

Something made me uneasy when a colleague recently referred to “AI bias”. I think that’s because the phrase doesn’t mention the actual source of such bias: humans! AI may expand and expose that bias, but it can’t do that unless we give it the seed. That’s rarely deliberate: we might treat it as a result of […]


AI, Consent and the Social Contract

“Consent” is a word with many meanings. In data protection it’s something like “a signal that an individual agrees to data being used”. But in political theory “consent to be governed” is something very different. A panel at the PrivSec Global conference suggested that the latter – also referred to as the “social contract” – […]


Ethics + Consultation + Regulation: a basis for trust

A fascinating discussion at today’s QMUL/SCL/WorldBank event on AI Ethics and Regulations considered how we should develop such ethics and regulations. There was general agreement that an ethical approach is essential if any new technology is to be trusted; also, probably, that researchers and developers should lead this through professionalising their practice. First steps are […]