Whether you refer to your technology as “data-driven”, “machine learning” or “artificial intelligence”, questions about “algorithmic transparency” are likely to come up. The finest example is perhaps the ICO’s heroic analysis of different statistical techniques. But it seems to me that there’s a more fruitful aspect of transparency earlier in the adoption process: why was a particular mix of technology, theory and human skill chosen, and what contribution does each of these make to a successful process? Thinking about that might help both those deploying technology, and those it is intended to serve, to find better approaches.
Where a process draws insights from existing data, there’s also a question about why that particular aspect of the past was considered informative. This doesn’t have to be as fundamental as concerns over ChatGPT’s selection of source material, but it can be a helpful reminder of likely limits. If a target measure of student engagement was derived from text-based courses, it’s worth checking whether that measure is also appropriate for more practical activities. Does it still reflect the desired balance of participation and autonomous learning? Or, if our aim is to improve a process, does it still make sense to use data from an older, pre-improvement version of that process to inform our activities?
This sort of transparency seems to add value to another popular idea: “AI registers”. A public explanation of why an organisation decided to use automation in its delivery of services would help me – even as a lapsed mathematician – much more than a statement that it uses “random forest” algorithms. And I’d hope that writing that explanation would help the organisation build confidence in its choices, too.