Draft AI Regulation: thinking about risks

The European Commission has just published its draft Regulation on Artificial Intelligence (AI). While there’s no obligation for UK law to follow suit, the Regulation provides a helpful guide to risk from different applications of AI, and the sort of controls that might be required.

What “AI” is covered?

According to Article 3(1) [with sub-clauses split out and interpolating Annex I], it’s…

  • software…
  • developed with one or more of:
    • machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;
    • logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;
    • statistical approaches, Bayesian estimation, search and optimization methods…
  • that can, for a given set of human objectives…
  • generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.

That’s a huge improvement in precision over ‘definitions’ such as “a standard industry term for a range of technologies”. This one does seem like a reasonable basis for regulation: in particular you can imagine different people reaching the same conclusion on whether a given system was, or was not, “AI”. But it’s pretty broad.
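
To see just how broad, here is a deliberately trivial sketch (the scenario, data and names are my own illustration, not anything from the Regulation): a few lines of logistic regression, which is both a "machine learning" and a "statistical" approach under Annex I, generating predictions for a human-defined objective, and so would appear to fall within the Article 3(1) definition.

```python
# A deliberately trivial "spam filter". It uses a statistical / machine
# learning approach (Annex I), pursues a human-defined objective, and
# generates predictions that influence how messages are handled, which
# seems to tick every box in the Article 3(1) definition.
from sklearn.linear_model import LogisticRegression

# Toy training data (entirely made up): [links, exclamation marks] per message.
features = [[0, 0], [1, 0], [5, 7], [8, 3]]
labels = [0, 0, 1, 1]  # 0 = legitimate, 1 = spam

model = LogisticRegression().fit(features, labels)

# The output "influencing the environment": the message is routed accordingly.
print(model.predict([[6, 4]]))  # e.g. [1], so the message would be quarantined
```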

So is it all alike?

No. The draft identifies four kinds of purpose that AI technology might be used for. Some purposes have minimal risk, and are not mentioned further. Some have low risk: here the main requirement is to make humans aware when they are being used. Some have high risk, and carry significant obligations for both suppliers and users. Some are unacceptable, and prohibited.

Low risk AI

According to Article 52, there is some risk whenever AI interacts with a human (e.g. chatbots); is used to recognise emotion or to assign categories such as age, hair colour, sex, or ethnic origin; or is used to produce images, audio or video that might appear authentic or truthful (“deep fakes”). Humans must be informed when any of these are used. The introductory text says this lets people “make meaningful choices or step back”: although there’s no right to “step back” in this regulation, that may well arise from the rules for processing personal or special category data under the GDPR.
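
To make that concrete, here is a minimal, hypothetical sketch of what the disclosure duty might look like for a chatbot. The wording and function names are mine, not taken from the Regulation.

```python
# Hypothetical wrapper that prefixes every AI-generated reply with a
# disclosure, so the human knows they are talking to a machine (Art.52).
def generate_answer(question: str) -> str:
    # Stand-in for whatever model actually produces the reply.
    return "Our opening hours are 9am to 5pm."

def chatbot_reply(question: str) -> str:
    disclosure = "You are chatting with an automated assistant, not a person."
    return disclosure + "\n" + generate_answer(question)

print(chatbot_reply("When are you open?"))
```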

High-risk AI

These purposes are listed in Annex III, together with any use of AI as part of a safety mechanism in regulated products. Specific to education, paragraph 3 lists:

  • AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions; [and]
  • AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.

These uses are considered high-risk even if AI supports a human decision-maker, thus representing a considerable extension of the GDPR Article 22 provisions on Automated Decision Making.

Other high-risk uses that may be relevant include remote biometric identification and categorisation of natural persons (para 1); and recruitment or selection of employment candidates, promotion, termination, task allocation and monitoring of employees (para 4).

Using AI for these purposes may be permitted – subject to prior registration (Art.16) and conformity checks (Art.19) – but there are significant and continuing obligations on both suppliers and users.

Suppliers must, for example:

  • continually manage risk from both normal operation and foreseeable misuse (Art.9);
  • comply with requirements on training data (Art.10), technical documentation (Art.11), and provision of logging facilities (Art.12);
  • ensure accuracy, robustness and security, including against feedback loops, data poisoning and adversarial examples (Art.15);
  • inform users (i.e. organisations) how to interpret the system’s output and use it appropriately; of situations that may lead to risks to health, safety or fundamental rights; of groups of people who the system is, and is not, designed to be used on; and of the expected lifetime of the system and ongoing maintenance measures (Art.13).

Suppliers and users must work together to ensure effective human oversight, understand the system’s capacities and limitations, and monitor for anomalies, dysfunction and unexpected performance (Art.14). Users must keep logs and monitor performance; they must stop using the system and inform the supplier if there is any serious incident or malfunction, or if operation presents an unexpected risk (Art.29).
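
As a rough illustration of the record-keeping side of this (logging under Art.12, monitoring under Art.29), the sketch below writes one audit record per decision, capturing inputs, output and model version. The format and field names are my own invention; the Regulation doesn’t prescribe one.

```python
# Hypothetical per-decision audit log: one JSON line per output, capturing
# when the system ran, which version it was, what went in and what came out.
import json
import time

def log_decision(logfile, model_version, inputs, output):
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    logfile.write(json.dumps(record) + "\n")

with open("ai_decision_log.jsonl", "a") as log:
    log_decision(
        log,
        model_version="1.2.0",
        inputs={"applicant_id": "A123", "features": [0.4, 0.9]},
        output={"recommendation": "shortlist"},
    )
```

A record like this is also the sort of thing a user would need when reporting a serious incident or malfunction back to the supplier.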

Prohibited AI

Some uses of AI are considered unacceptable (Art.5) because they contravene EU values, such as fundamental rights. These include subliminal manipulation, and the exploitation of individual vulnerabilities, used to distort a person’s behaviour in a way that causes harm to them or others; and AI-based “social scoring” by public authorities. Real-time remote biometric identification in spaces accessible to the public (which is high-risk in any case) is unacceptable for law enforcement purposes except in certain defined circumstances.

A third way?

Some commentators have seen the draft Regulation as a “third way” to regulate AI: between a free market and complete control. Others have focussed on the bureaucracy required for high-risk applications, or the exemptions for law enforcement. To me, the most interesting thing is how it works together with the GDPR, in particular the Accountability principle. Both require organisations to think carefully about risks to individuals before implementing new uses of data and technology. The AI Regulation actually provides more detailed guidance on those risks. Having heard, just last week, that “most ‘AI ethics’ questions turn out to be ‘data ethics’ ones”, drawing those two strands of thinking closer together can only be helpful.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
