Transparency: about Choices, not just Algorithms

Whether you refer to your technology as “data-driven”, “machine learning” or “artificial intelligence”, questions about “algorithmic transparency” are likely to come up. Perhaps the finest example is the ICO’s heroic analysis of different statistical techniques. But it seems to me that there’s a more fruitful aspect of transparency earlier in the adoption process: why was a particular mix of technology, theory and human skill chosen, and what contribution does each make to a successful process? Thinking about that might help both the deployers of technology and its intended beneficiaries find better approaches.

Where a process draws insights from existing data, there’s also a question about why that particular aspect of the past was considered informative. This doesn’t have to be as fundamental as concerns over ChatGPT’s selection of source material, but it can be a helpful reminder of likely limits. If a target measure of student engagement was derived from text-based courses, it’s worth checking whether that measure is also appropriate for more practical activities. Does it still reflect the desired balance of participation and autonomous learning? Or, if our aim is to improve a process, does it make sense to keep using data from an older, pre-improvement version of that process to inform our activities?

This sort of transparency seems to add value to another popular idea: “AI registers”. A public explanation of why an organisation decided to use automation in its delivery of services would help me – even as a lapsed mathematician – much more than a statement that it uses “random forest” algorithms. And I’d hope that writing that explanation would help the organisation build confidence in its choices, too.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
