
AI Regulation – not so new?

Looking at discussions of regulating Artificial Intelligence, it struck me that a lot isn't new, and a lot isn't specific to AI. Jisc already has a slightly formal Pathway document to help you identify issues with activities that might involve AI. But here are some topics that often come up in those discussions. Thinking about these, or realizing you have already thought about them, might reassure you that just because something carries the marketing label "AI", it may not be as new, or as uncertain, as you thought.

Context. Rather than the technology itself, think about the situation and process in which you are proposing to use it. Is it a situation where human empathy is critical, or is it more important that actions and decisions reflect what the data and statistics tell us? Make sure systems and processes bring the human and automated components together in a way that suits that situation.

Bias. If a situation does involve data, do you understand the characteristics of what you have, and the effects of how you might use it? Biased data and processes may be most obvious when they result in discrimination, but data quality and meaning can also be affected by different learning or teaching styles, or by uneven access to systems and equipment. That may not be a bad thing (focused actions may be what we want), so long as we understand what those effects are and can justify and account for them. But if data or actions exclude certain groups or situations, this should be deliberate, not accidental.

Where data relate to individuals, the Information Commissioner has already published comprehensive guidance on issues likely to arise with “AI” tools and approaches.

The term "Artificial Intelligence" creates a high risk of different kinds of (self-)deception. Just because something can communicate in natural language doesn't mean it is human, has any other human attributes, or understands the sequences of letters it produces; just because something looks like a photograph or video doesn't mean it actually happened. Think about whether the context around your technology is likely to encourage this kind of misunderstanding: most sets of AI Principles require that technology declare itself, but that doesn't always seem to be effective.

Finally, an area that does need new thinking is where technology replaces a human who has a particular legal role, with its associated presumptions and responsibilities. Non-human "drivers", "authors", "performers", etc. leave gaps in existing legal frameworks that could produce a nasty surprise. Rather than grand "AI laws", however, these gaps typically need specific solutions, perhaps in the form of interpretive guidance ("authors' legal rights pass to X") rather than new legislation. The EU's proposal on AI liability is an interesting approach: essentially suggesting a starting point for discussions of where displaced liabilities might land.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
