The ICO’s Age Appropriate Design Code (more familiarly the “Children’s Code”) may have been written before lockdown, but it could provide useful guidance to everyone designing or implementing systems for the post-COVID world. We’re all trying to work out what a “hybrid” world should look like, whether in schools, colleges, universities, workplaces or social spaces. […]
Assessment – many ways to do it
Jisc’s 2020 Future of Assessment report identifies five desirable features that assessors should design their assessments to deliver: authentic, accessible, appropriately automated, continuous and secure. Those can sometimes seem to conflict: for example, if you decide that “secure” assessment requires the student to be online throughout their exam, then you have an “accessibility” problem for […]
Onward from Learning Analytics
This morning’s “multiplier event” from the Onward from Learning Analytics (OfLA) project highlighted the importance of human and institutional aspects in a productive LA deployment. They begin at the end – what is the desired outcome of your LA deployment? The answer probably isn’t “a business intelligence report”, and almost certainly not “a dashboard”. Starting […]
Layers of Trust in AI
This morning’s Westminster Forum event on the Future of Artificial Intelligence provided an interesting angle on “Trust in AI”. All speakers agreed that such trust is essential if AI is to achieve acceptance, and that (self-)regulatory frameworks can help to support it. However, AI doesn’t stand alone: it depends on technical and organisational foundations. And […]
The Power of “No”
For the past twenty-five years I’ve tried to avoid saying “no”. Whether in website management, security or law, “have you thought of…?” seems much more fruitful. In the short term it lets us discuss alternatives; in the long term it encourages the questioner to come back – or at least doesn’t discourage them. So it […]
Black boxes on wheels
Heard in a recent AI conversation: “I’m worried about black boxes”. But observation suggests that’s not a hard and fast rule: we’re often entirely happy to stake our lives, and those of others, on systems we don’t understand; and we may worry even about those whose workings are fully public. So what’s going on? Outside […]
That “AI” metaphor…
I’d been musing on a post about how “Artificial Intelligence” can be an unhelpful metaphor. But the European Parliament’s ThinkTank has written a far better one, so read theirs…
“Algorithms” haven’t had the best press recently. So it’s been fascinating to hear from the ReEnTrust project, which actually started back in 2018, on Rebuilding and Enabling Trust in Algorithms. Their recent presentations have looked at explanations, though mostly not the mathematical ones that are often the focus. Rather than trying to reverse engineer a […]
Bias Bounties
So many “AI ethics frameworks” are crossing my browser nowadays that I’m only really keeping an eye out for things that I’ve not seen before. The Government’s new “Ethics, Transparency and Accountability Framework for Automated Decision-Making” has one of those: actively seeking out ways that an AI decision-making system can go wrong. The terminology makes […]
The European Commission has just published its draft Regulation on Artificial Intelligence (AI). While there’s no obligation for UK law to follow suit, the Regulation provides a helpful guide to the risks posed by different applications of AI, and the sort of controls that might be required. What “AI” is covered? According to Article 3(1) [with sub-clauses […]