Framing the Algorithm

A panel on Algorithms at the UK IGF asked whether the summer of 2020 was a catastrophe – “mutant algorithm” having entered political discourse – or an opportunity to work with a population that is now much more aware of the personal significance of the debate. “Transparency” is often cited as a remedy, but we now know that understanding how an algorithm works is far from sufficient: we need to know much more about how and why it came to be doing that job.

Viewing the “algorithm” space as multi-disciplinary, and building on existing work on (open) data, data (science) ethics, and governance/procurement, can take us a long way. Indeed, a lot of what is perceived as “AI Governance/Ethics” is actually data governance and, as I’ve previously written, is already addressed in tools such as Data Protection Impact Assessments.

On Data, responsible use is the backbone: without trust there will be suspicion; with it will come licence to innovate. But even perfect data wouldn’t guarantee a perfect outcome: we also need to look at behaviours, understandings and rules. Are we competent users of data, in the sense that we ask the right questions and apply emotional/social/political intelligence and domain knowledge? Are we working within trusted systems – institutions as well as algorithms, trusted as well as trustworthy – and does the organisational culture encourage critical opinions and diverse viewpoints? Our processes and tools must be sustainable, in the sense that they respond appropriately to unforeseen situations. It was suggested that the A-level issue was so significant because those exams are so critical to social mobility: a societal issue, not a technological or legal one.

On Ethics, version 3 of the Government’s Data Ethics Framework has just been released. This has three Principles – transparency, accountability and fairness – and five detailed Specific Actions – define and understand public benefit and user need, involve diverse expertise, comply with the law, review data quality and limitations, and consider wider policy implications. The framework is likely to be used initially as a gateway for individual projects, but is designed to promote organisational change by developing skills and providing feedback and user stories. But, as with data, we need to prepare for imperfection. Algorithms will inevitably reflect existing societal biases: we need to be humble, to accept that, and to plan to mitigate harms and fix flaws – and to be trusted to do so. Part of that is to ask whether data/apps are the right solution for a particular problem: might it be better just to talk?

On Governance, inviting help from the public and experts is essential to picking the right problems and solutions, and to building trust in what we do. Here Open Government approaches – deliberation, citizen assemblies, dialogues, participation and feedback – are worth considering. We must achieve shared goals and understanding: applying data or technology to a problem that isn’t agreed or understood is likely to just amplify those disagreements. Definitions are critical: OFQUAL’s algorithm may have been “fair” across cohorts, but amplified social unfairness to individuals. Two practical tools were mentioned: AI Now’s Algorithmic Impact Assessment and AI Global’s AI Design Assistant. Governance must provide effective oversight of the whole life-cycle of a system: information gathering, monitoring/response, review/improvement. When procuring a system or service, ensure that your requirements for values and transparency match what the supplier is offering. Check that their governance frameworks are compatible with yours. And be clear about the respective responsibilities of supplier and procurer/user.

Finally, a couple of thoughts on Explanation. There’s a useful distinction between “interpretable” systems, where a domain expert can understand what the system is doing, and “explainable” ones, where an individual can ask why it reached that conclusion for them [it occurs to me that I was trying to get at this distinction, without having the terminology, a couple of years ago]. Counterfactuals may be useful for the latter: what would need to change for the result to change? But such a “contrastive” explanation must also be “selective” (covering only a limited number of factors) and allow for a “social” dialogue asking what-if questions. And explanation can have a much deeper role: by holding algorithms to a higher standard than human decision-makers, we may learn about our own biases, too.
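To make the counterfactual idea concrete, here is a minimal sketch in Python. The grading rule, threshold and mark scale are my own hypothetical illustration – not OFQUAL’s algorithm or any real system – but it shows the shape of a “contrastive” answer: how much would this input have to change for the decision to change?

```python
# A minimal sketch of a counterfactual ("contrastive") explanation.
# The decision rule, threshold and mark scale below are hypothetical
# illustrations, not any real grading algorithm.

def award_grade(mark: float, threshold: float = 70.0) -> str:
    """Toy decision rule: the higher grade is awarded at or above a fixed threshold."""
    return "A" if mark >= threshold else "B"


def counterfactual_change(mark: float, threshold: float = 70.0) -> float:
    """Smallest change to the mark (in whole marks) that would flip the decision."""
    if award_grade(mark, threshold) == "A":
        # How far the mark could fall before the grade drops.
        return (threshold - mark) - 1.0
    # How far the mark must rise for the grade to improve.
    return threshold - mark


if __name__ == "__main__":
    mark = 64.0
    print("Grade awarded:", award_grade(mark))          # -> B
    delta = counterfactual_change(mark)
    print(f"Counterfactual: a change of {delta:+.0f} marks would change the grade")
```

A real system would have many interacting inputs rather than one mark, which is exactly why a useful explanation must be selective – picking out the few factors the individual could meaningfully discuss or contest – rather than enumerating every possible change.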

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
