
EU AI Act: scope of “education”

The European legislative process on Artificial Intelligence has moved on a step, with the Council of Ministers (representatives of national governments) agreeing their response to the text proposed by the European Commission last year. The main focus of the proposed law is on makers of products that use “AI”: where these are designed for a specified list of “high-risk” purposes, the products must be designed and documented according to set rules. Those rules – covering things like risk and quality management, transparency, human oversight and interpretation, logging, accuracy, robustness and security – seem valuable for any AI: the question is when they should be formal, rather than informal, requirements.

The Commission identified education as a field that might contain high-risk applications. Their proposed scope has typically been summarised as “high-stakes assessment”, though the formal specification (para 3 of Annex III) is a bit longer:

Education and vocational training:
(a) AI systems intended to be used for the purpose of determining access or assigning natural persons to educational and vocational training institutions;
(b) AI systems intended to be used for the purpose of assessing students in educational and vocational training institutions and for assessing participants in tests commonly required for admission to educational institutions.

The Council’s text is pretty similar on point (a), but seems significantly different on point (b):

Education and vocational training:
(a) AI systems intended to be used to determine access, admission or to assign natural persons to educational and vocational training institutions or programmes at all levels;
(b) AI systems intended to be used to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions or programmes at all levels.

Here “assessing students” has been replaced by “evaluate learning outcomes”, with an illustrative example of “steer[ing] the learning process of natural persons”. This feels a lot more like something that would take place during a course, not just at its start or end. Many examples of personalised learning seem quite close to this definition: consider, for example, an online language course that identifies a student as having difficulty with the past tense and “steers” their revision exercises to focus on that (sketched in code below).
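To make that example concrete, here is a minimal sketch of the kind of steering logic such a tutor might use. Everything in it – the function names, topics and thresholds – is a hypothetical illustration, not anything specified by the Act; the point is simply that evaluating per-topic outcomes and reweighting exercises accordingly is an ordinary piece of adaptive-learning code.

```python
# Hypothetical sketch of "steering the learning process": evaluate
# learning outcomes per topic, then weight revision exercises towards
# the weakest areas. All names and thresholds are illustrative.
from collections import Counter
import random

def error_rates(attempts):
    """attempts: list of (topic, correct) pairs from recent exercises."""
    totals, errors = Counter(), Counter()
    for topic, correct in attempts:
        totals[topic] += 1
        if not correct:
            errors[topic] += 1
    return {t: errors[t] / totals[t] for t in totals}

def steer_revision(attempts, n_exercises=10, focus_threshold=0.3):
    """Return a revision plan weighted towards high-error topics."""
    rates = error_rates(attempts)
    # Weight each topic by its error rate, with a small floor so that
    # well-understood topics still appear occasionally.
    weights = {t: max(r, 0.05) for t, r in rates.items()}
    topics = list(weights)
    plan = random.choices(topics,
                          weights=[weights[t] for t in topics],
                          k=n_exercises)
    focus = [t for t, r in rates.items() if r >= focus_threshold]
    return {"focus_topics": focus, "exercise_plan": plan}

# Example: a student who keeps getting the past tense wrong.
history = [("past tense", False), ("past tense", False),
           ("past tense", True), ("present tense", True),
           ("present tense", True), ("vocabulary", True)]
print(steer_revision(history))
```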

Under the Council’s proposal, fitting the Annex III definition isn’t the sole determinant of whether an application must demonstrate formal compliance: they have added a final per-application test “of the significance of the output of the AI system in respect of the relevant action or a decision to be taken”. My language tutor might be ruled “purely accessory in respect of the relevant action or decision to be taken and is not therefore likely to lead to a significant risk to the health, safety or fundamental rights” (Article 6(3)). But if the Council’s broadening of scope is intended the way I’m reading it, it would be interesting to consider which processes and decisions within a course might create such risks.

The European Parliament is expected to produce its version of the text in the first quarter of 2023; the three bodies must then agree a final version, a process that can take months or years. The resulting law won’t apply directly in the UK, but if AI products we use are also designed for the European market, we may see the results of the required design processes and documentation.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
