Visualising the Draft EU AI Act

I’m hoping to use the EU’s draft AI Act as a way to think about how we can safely use Artificial Intelligence. The Commission’s draft places a number of obligations on both providers and users of AI; formally these only apply when AI is used in “high-risk” contexts, but they seem like a useful “have I thought about…?” checklist in any case.

The text can be found below, but I’ve been using this visualisation to explain to myself what’s going on. Article numbers are placed at what I think are the relevant points on the diagram. Comments and suggestions very welcome!

[11/4/22: Added arrows to show that Training and Application have links from/to the outside world]

What the draft Act says (first sentence of each of the requirement Articles):

  • Article 9: A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems.
  • Article 10: High-risk AI systems which make use of techniques involving the training of models with data shall be developed on the basis of training, validation and testing data sets that meet the quality criteria… (a sketch of what this might look like in code follows this list)
  • Article 11: The technical documentation of a high-risk AI system shall be drawn up before that system is placed on the market or put into service and shall be kept up to date (further detail in Article 18).
  • Article 12: High-risk AI systems shall be designed and developed with capabilities enabling the automatic recording of events (‘logs’) while the high-risk AI system is operating. Those logging capabilities shall conform to recognised standards or common specifications (obligations on retention in Article 20; a second sketch follows the list).
  • Article 13: High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable users to interpret the system’s output and use it appropriately.
  • Article 14: High-risk AI systems shall be designed and developed in such a way, including with appropriate human-machine interface tools, that they can be effectively overseen by natural persons during the period in which the AI system is in use.
  • Article 15: High-risk AI systems shall be designed and developed in such a way that they achieve, in the light of their intended purpose, an appropriate level of accuracy, robustness and cybersecurity, and perform consistently in those respects throughout their lifecycle.
  • Article 17: Providers of high-risk AI systems shall put a quality management system in place that ensures compliance with this Regulation.
  • Article 19: Providers of high-risk AI systems shall ensure that their systems undergo the relevant conformity assessment procedure in accordance with Article 43, prior to their placing on the market or putting into service.
  • Article 29: Users of high-risk AI systems shall use such systems in accordance with the instructions of use accompanying the systems.
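
Article 10 is the one I find easiest to picture in code. Below is a minimal sketch, assuming a pandas/scikit-learn workflow; the file name, the 5% threshold and the two quality checks are illustrative assumptions of mine, not anything the draft Act prescribes (its quality criteria are far broader, covering relevance, representativeness and examination for possible biases):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

def check_quality(df: pd.DataFrame, name: str) -> None:
    # Two placeholder checks: completeness and duplicates. Article 10's
    # actual quality criteria are much wider than this.
    worst_missing = df.isna().mean().max()  # worst per-column missing rate
    if worst_missing > 0.05:  # the 5% threshold is an arbitrary assumption
        raise ValueError(f"{name}: a column has more than 5% missing values")
    if df.duplicated().any():
        raise ValueError(f"{name}: duplicate rows found")

# Article 10 assumes separate training, validation and testing data sets.
data = pd.read_csv("dataset.csv")  # hypothetical input file
train, rest = train_test_split(data, test_size=0.4, random_state=0)
validation, test = train_test_split(rest, test_size=0.5, random_state=0)

for name, split in [("training", train), ("validation", validation), ("testing", test)]:
    check_quality(split, name)
```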

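Article 12’s logging duty can be sketched the same way. Again these are my own assumptions (Python’s standard logging module, one JSON record per prediction); the Act itself only requires that the logging capabilities conform to recognised standards or common specifications, with retention covered by Article 20:

```python
import json
import logging
from datetime import datetime, timezone

# Automatic recording of events while the system is operating (Article 12).
# The log destination and record format are illustrative assumptions.
logging.basicConfig(filename="ai_system_audit.log", level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def predict_and_log(model, features: dict):
    # Run a prediction and automatically record the event alongside it;
    # 'model' stands in for any hypothetical object with a predict() method.
    output = model.predict(features)
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "input": features,
        "output": str(output),
        "model_version": getattr(model, "version", "unknown"),
    }))
    return output
```

Routing every prediction through one function like this is what makes the recording “automatic”: the log entry is produced whenever the system operates, rather than depending on anyone remembering to write it.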
