AI and Ethics: GDPR and beyond

The EU High-Level Expert Group’s (HLEG) Ethics Guidelines for Trustworthy AI contain four principles and, derived from them, seven requirements for AI systems. The Guidelines do not discuss the need for AI to be lawful, but the expansion of Data Protection law beyond just privacy into areas formerly considered part of Ethics means that much useful guidance can, in fact, already be obtained from legal and regulatory sources. In particular, the General Data Protection Regulation (GDPR) principles of Accountability and Fairness require steps to be taken to identify and address potential harms before any system is developed or deployed, rather than relying on remedies after the event. Such an approach is helpful even for systems that do not process personal data.

Indeed, it appears that any AI that addresses the requirements of the EU GDPR will already have made significant progress towards achieving the HLEG’s principles and requirements. This analysis considers each of those principles and requirements, identifies relevant GDPR provisions and guidance and highlights, in italics, areas where ethics requires going significantly beyond the GDPR. For each section we also note any issues, or guidance, specific to the use of AI in education. These paragraphs, in particular, will be updated as I discover new, or better, sources.

The major outstanding questions appear to be in the areas of Respect for Human Autonomy and Societal and Environmental Wellbeing, where questions such as “should we do this at all?” and “should we use machines to do this?” are largely outside the scope of Data Protection law. In the areas of Human Agency and Transparency, an ethical approach may provide a better indicator of social and individual risk than the GDPR’s “automated decisions with legal consequences”, which can be criticised as being both too wide and too narrow.

[UPDATE 8/1/21: I’ve been using four questions – Will it help? Will it work? Will it comfort? Will it fly? – to explore some of these “should we do this?” issues. Another post has a brief introduction to how this might work: for more detail, with practical examples, the Journal of Law, Technology and Trust published a short paper “Between the Devil and the Deep Blue Sea (of Data)”]

HLEG Principles

The HLEG derives its four principles for trustworthy AI from fundamental rights set out in human rights law, the EU Treaties and the EU Charter. For the education context, we should therefore add the right to education set out in those documents and, in particular, the broad purpose of education set out in the UN Declaration of Human Rights: “the full development of the human personality”. UK law also expects educational institutions to support the rights to free speech and free assembly, both of which could be impacted by inappropriate uses of AI.

As was suggested at a recent Westminster e-Forum event, organisations that can show that their activities are governed according to such principles are likely to be granted more trust, and greater permission to innovate.

Respect for Human Autonomy

HLEG: “Humans interacting with AI systems must be able to keep full and effective self-determination over themselves, and be able to partake in the democratic process. AI systems should not unjustifiably subordinate, coerce, deceive, manipulate, condition or herd humans. Instead, they should be designed to augment, complement and empower human cognitive, social and cultural skills”.

This is largely a question of the uses to which AI may be put: the prohibitions appear to define unethical behaviour whether it is implemented using artificial or human intelligence. The use of AI may, however, have an amplifying effect, either by making these unethical effects on individuals more intense, or by affecting more individuals. One specific type of “deception” (also considered by the HLEG as a transparency issue) is addressed by the GDPR: that individuals should always be aware that they are dealing with an AI rather than a human and, in most cases, should have the option of reverting to a human decision-maker.

For education, the UN goal of developing the human personality appears a useful test that both supports the HLEG’s desired effects of AI and warns against the prohibited effects, whether intended or consequential. The All-Party Parliamentary Group on Data Analytics recommends that “data should be used thoughtfully to improve higher education”, and cites the Slade/Prinsloo principle “Students as agents: institutions should ‘engage students as collaborators and not as mere recipients of interventions and services'”.

Prevention of Harm

HLEG: “AI systems should neither cause nor exacerbate harm or otherwise adversely affect human beings”.

Assessment and management of harms to individuals is a key part of the GDPR’s balancing test and accountability principle. The ethical viewpoint requires this to be broadened to collective and intangible harms, such as those to “social, cultural and political environments”. The Information Commissioner’s Guidance on AI and Data Protection suggests that consultation with external stakeholders as part of the Data Protection Impact Assessment (DPIA) may provide some check against these harms.

Fairness

HLEG: “fairness has both a substantive and a procedural dimension. The substantive dimension implies a commitment to: ensuring equal and just distribution of both benefits and costs, and ensuring that individuals and groups are free from unfair bias, discrimination and stigmatisation … The procedural dimension of fairness entails the ability to contest and seek effective redress against decisions made by AI systems and by the humans operating them”.

Bias and discrimination against individuals are also concerns of the GDPR, in particular through its invocation of other laws on discrimination, etc. The ICO finds a requirement to consider impact on groups as part of the GDPR fairness principle. Fairness between groups is likely to be an issue requiring a wider ethical perspective. The ability to challenge and obtain redress for significant automated decisions based on personal data is a requirement of Article 22 of the GDPR.

From an education perspective, it is notable that the HLEG specifically mentions that “Equal opportunity in terms of access to education, goods, services and technology should also be fostered”. However, the All-Party Group on Data Analytics warns against a “one-size-fits-all” approach: different institutions and different contexts are likely to benefit from AI in different ways. The European Parliament warns that “the deployment of new AI systems in schools should not lead to a wider digital gap being created in society”.

Explicability

HLEG: “processes need to be transparent, the capabilities and purpose of AI systems openly communicated, and decisions – to the extent possible – explainable to those directly and indirectly affected”.

Although there is still debate as to the extent of the “right to an explanation” that is contained within GDPR Article 13(2)(f) etc., the ICO’s guidance on Explaining AI Decisions appears to provide a comprehensive exploration of all the issues involved in both legal and ethical terms.
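
To make the idea concrete, here is a minimal sketch (in Python, with entirely hypothetical feature names, weights and threshold) of how a simple linear scoring model might surface the main factors behind an individual decision. It illustrates one possible explainability technique, not anything mandated by the GDPR or the ICO guidance.

```python
# A minimal sketch of a per-decision explanation for a linear scoring model.
# All feature names, weights and values are hypothetical examples,
# not taken from any real system.

import math

# Hypothetical model: weights learned elsewhere, one per input feature.
WEIGHTS = {
    "attendance_rate": 2.1,       # higher attendance -> higher score
    "assignments_submitted": 1.4,
    "vle_logins_per_week": 0.6,
    "late_submissions": -1.8,     # more late work -> lower score
}
BIAS = -1.5

def predict_with_explanation(features: dict) -> dict:
    """Score one individual and report each feature's contribution."""
    contributions = {
        name: WEIGHTS[name] * value for name, value in features.items()
    }
    score = BIAS + sum(contributions.values())
    probability = 1 / (1 + math.exp(-score))  # logistic link
    # Rank features by absolute contribution, most influential first.
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return {
        "probability": round(probability, 3),
        "main_factors": ranked[:3],  # the explanation shown to the individual
    }

if __name__ == "__main__":
    student = {
        "attendance_rate": 0.6,
        "assignments_submitted": 0.8,
        "vle_logins_per_week": 2.0,
        "late_submissions": 3.0,
    }
    print(predict_with_explanation(student))
```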

HLEG Requirements

To support its four principles, the HLEG identifies a non-exhaustive list of requirements for trustworthy AI. Any system that does not meet these requirements is unlikely to be trustworthy; however, even meeting all the requirements may not be sufficient for a system to be trusted.

Human Agency and Oversight

HLEG: “Including fundamental rights, human agency and human oversight”.

Where legitimate interests (Article 6(1)(f)) are used as a lawful basis for processing personal data, there is already a legal requirement to consider the effect on fundamental rights, which appears to meet the HLEG’s objectives. On human (subject) agency, the HLEG cites, as key, the GDPR’s provisions on fully-automated decisions in Article 22. On human (operator) oversight of AI it recognises that different levels of involvement are appropriate for different circumstances. As a general rule: as humans become more distant from individual decisions, “more extensive testing and stricter governance are required”.
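
The idea of matching the level of human involvement to the circumstances can be expressed directly in system design. The following sketch (hypothetical threshold and names throughout) routes low-confidence model outputs to a human decision-maker instead of acting on them automatically – one possible pattern, not a prescribed one.

```python
# Sketch of a "human-in-the-loop" routing rule: automated output is only
# acted on when the model is confident; borderline cases go to a person.
# The threshold and queue are illustrative assumptions, not a standard.

from dataclasses import dataclass

CONFIDENCE_THRESHOLD = 0.9  # hypothetical; would be set per context and risk

@dataclass
class Decision:
    subject_id: str
    outcome: str
    confidence: float
    decided_by: str  # "model" or "human"

human_review_queue: list[Decision] = []

def route(subject_id: str, outcome: str, confidence: float) -> Decision:
    """Apply the model's outcome only above the confidence threshold."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return Decision(subject_id, outcome, confidence, decided_by="model")
    # Below threshold: defer to a human decision-maker instead.
    pending = Decision(subject_id, outcome, confidence, decided_by="human")
    human_review_queue.append(pending)
    return pending

if __name__ == "__main__":
    print(route("s-001", "offer extra tutoring", confidence=0.97))
    print(route("s-002", "offer extra tutoring", confidence=0.55))
    print(f"{len(human_review_queue)} case(s) awaiting human review")
```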

Given the uncertainty over the scope of the GDPR automated decision provisions – which appear not to cover some circumstances (such as nudges) that concern the HLEG, but also to include decisions on contracts (such as bicycle rental) that fall well below the threshold of ethical concern – an ethical approach may provide a more appropriate perspective on the circumstances and tools through which human agency may need to be supported.

In education and the wider public sector, where public interest may be chosen to replace legitimate interest, the fundamental rights check may not be a legal requirement, but should be considered good practice and may contribute to institutions meeting their duties under the Human Rights Act 1998. The All-Party Group on Data Analytics notes that AI in education is likely to be used to assist human decision-makers, thus providing the deepest (“human-in-the-loop”) level of human involvement in individual decisions. The European Parliament notes that “AI personalised learning systems should not replace educational relationships involving teachers” and that this will require training and support for teachers, as well as pupils.

Technical Robustness and Safety

HLEG: “Including resilience to attack and security, fall back plan and general safety, accuracy, reliability and reproducibility”.

As well as incidents and attacks affecting AI systems and software, which should already be covered by GDPR’s “integrity and confidentiality” principle, the HLEG notes that attacks against training data may be effective. Security measures must both make this type of attack less likely to succeed, and be able to detect and respond to those that do. For some systems the appropriate response to an attack may be to revert to a rule-based, rather than statistical, approach to decision-making. Accuracy and reliability of predictions should be delivered by the testing and monitoring processes required by the second half of GDPR Recital 71 and by the ICO’s auditing framework (under “What do we need to do about Statistical Accuracy?”); reproducibility of behaviour is not an explicit GDPR requirement, and may prove challenging for systems using undirected learning. However, the ICO’s risk-based approach to explainability and the HLEG’s comment that the requirement for explicability “is highly context-dependent” suggest that this may only be necessary for high-risk applications and contexts.
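
Two of these measures – pinning random seeds so that a training run can be repeated, and alerting when live accuracy drifts below the level measured at approval – lend themselves to simple engineering sketches. The figures and names below are illustrative assumptions, not thresholds from the ICO framework.

```python
# Sketch of two robustness measures: (1) fixing random seeds so training
# runs can be repeated, and (2) flagging when live accuracy drifts below
# the level measured when the model was approved. Figures are illustrative.

import random

SEED = 42  # recorded alongside the model so the run can be reproduced

def reproducible_shuffle(items: list) -> list:
    """Shuffle deterministically: the same seed gives the same order."""
    rng = random.Random(SEED)
    shuffled = list(items)
    rng.shuffle(shuffled)
    return shuffled

BASELINE_ACCURACY = 0.88   # measured on held-out data at deployment
ALERT_MARGIN = 0.05        # hypothetical tolerance before alerting

def check_accuracy(predictions: list, actuals: list) -> bool:
    """Return True (and alert) if live accuracy has drifted too far."""
    correct = sum(p == a for p, a in zip(predictions, actuals))
    accuracy = correct / len(actuals)
    drifted = accuracy < BASELINE_ACCURACY - ALERT_MARGIN
    if drifted:
        print(f"ALERT: accuracy {accuracy:.2f} below baseline "
              f"{BASELINE_ACCURACY:.2f} - investigate data or retrain")
    return drifted

if __name__ == "__main__":
    assert reproducible_shuffle([1, 2, 3]) == reproducible_shuffle([1, 2, 3])
    check_accuracy(["pass", "fail", "pass", "pass"],
                   ["pass", "pass", "fail", "pass"])
```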

Accuracy of data and predictions may be a particular challenge for education systems where there is significant exchange of data across educational transitions and between institutions. When accepting information from another organisation, a school, college or university should be aware of the context in which it was collected: what individuals were told and the purposes for which the original collection and data validation processes were designed. When using such information for a different purpose, particular care is needed that this does not go beyond the limits of those processes: see Annex B of Jisc’s Wellbeing Analytics Code of Practice for further discussion of data re-purposing.
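
One practical way of keeping that context attached to the data is to pass a provenance record along with it. The structure below is a purely hypothetical illustration of the kind of metadata a receiving institution might check before re-purposing data; it is not a format defined by Jisc’s Code of Practice.

```python
# Hypothetical provenance record travelling with transferred student data,
# so the receiving institution can check the original collection context
# before re-using it. Field names are illustrative, not a defined standard.

from dataclasses import dataclass, field

@dataclass
class ProvenanceRecord:
    source_organisation: str
    collected_for: str            # purpose stated when data was collected
    notice_given: str             # what individuals were told
    validated_for: list[str] = field(default_factory=list)
    permitted_uses: list[str] = field(default_factory=list)

def compatible(record: ProvenanceRecord, proposed_use: str) -> bool:
    """Crude check: is the proposed re-use within the recorded limits?"""
    return proposed_use in record.permitted_uses

if __name__ == "__main__":
    record = ProvenanceRecord(
        source_organisation="Example College",
        collected_for="admissions",
        notice_given="privacy notice v3, 2020",
        validated_for=["identity", "prior attainment"],
        permitted_uses=["admissions", "enrolment"],
    )
    print(compatible(record, "learning analytics"))  # False: needs review
```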

Privacy and Data Governance

HLEG: “Including respect for privacy, quality and integrity of data, and access to data”.

This appears to match the existing GDPR requirements. The ICO’s Guidance on AI contains an extended discussion of privacy, quality and integrity of data, and access to data.

For education, Jisc’s Codes of Practice on Learning Analytics, Wellbeing Analytics and Intelligent Campus provide more detailed guidance on all these issues.

Transparency

HLEG: “Including traceability, explainability and communication”.

With the broad scope of GDPR explainability adopted by the ICO (see the Explicability principle above), the first two points should already be covered. As with Human Agency above, the ethical requirement to inform users of the presence of an AI may apply to a different scope from the GDPR’s “automated decision-making”. The GDPR’s focus on information for data subjects may omit the ethical requirement to communicate information about the system’s capabilities and limitations to its operators.

Specific to education, Jisc’s Learning Analytics Code of Practice discusses making relevant data labels available to individuals. The HLEG requirement that “AI systems should not represent themselves as humans to users” may require care when AI uses particularly human modes of communication, such as speech.

Diversity, Non-discrimination and Fairness

HLEG: “Including the avoidance of unfair bias, accessibility and universal design, and stakeholder participation”.

GDPR Recital 71 requires algorithm developers to avoid discrimination and detect signs of it emerging. The ICO’s Guidance on AI adds risks of bias in training data, and the risk of algorithms learning to (accurately) reproduce existing biases, whether deliberate or accidental, in the systems or processes being observed. Diversity of hiring is beyond the GDPR’s scope. The GDPR requires accessibility of communications, but not wider accessibility of digital systems. Stakeholder participation should be considered as part of the organisation’s Data Protection Impact Assessment (DPIA) and, for AI, should be normal practice.
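
As a concrete illustration of detecting bias emerging in outcomes, the sketch below compares rates of favourable outcomes across groups and flags large disparities. The 0.8 threshold echoes the US “four-fifths” rule of thumb and is an assumption here, not a GDPR or ICO requirement.

```python
# Sketch of a simple disparity check across groups: compare the rate of
# favourable outcomes per group against the best-treated group. The 0.8
# threshold echoes the US "four-fifths" rule of thumb and is an assumption,
# not a legal standard.

from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """outcomes: (group, favourable?) pairs -> favourable rate per group."""
    totals: dict[str, int] = defaultdict(int)
    favourable: dict[str, int] = defaultdict(int)
    for group, ok in outcomes:
        totals[group] += 1
        favourable[group] += ok
    return {g: favourable[g] / totals[g] for g in totals}

def flag_disparity(rates: dict[str, float], threshold: float = 0.8) -> list[str]:
    """Flag groups whose rate falls below threshold x the highest rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < threshold * best]

if __name__ == "__main__":
    data = [("A", True)] * 80 + [("A", False)] * 20 \
         + [("B", True)] * 55 + [("B", False)] * 45
    rates = selection_rates(data)
    print(rates)                  # {'A': 0.8, 'B': 0.55}
    print(flag_disparity(rates))  # ['B']: 0.55 < 0.8 * 0.8
```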

Most educational organisations will already be subject to additional accessibility obligations, both for websites and apps and, more widely, to make reasonable adjustments under the Equality Act 2010.

Societal and Environmental Well-Being

HLEG: “Including sustainability and environmental friendliness, social impact, society and democracy”.

This is much the broadest of the HLEG requirements, addressing impacts on society, the environment and democracy as well as the physical and mental wellbeing of individuals. These issues – which may be summarised as “should we do this at all?” and “should we use computers to do it?” – are almost entirely outside the remit of the GDPR.

It is notable that education is specifically mentioned as a field in which “ubiquitous exposure to social AI systems … has the potential to change our socio-cultural practices and the fabric of our social life”. As well as the UN Sustainable Development Goals (Goal 4 covers education, including tertiary), which the HLEG mentions as a reference point, guidance may be found in the UN Declaration of Human Rights’ definition of the purpose of education, and its requirements to protect both free speech and free assembly. Along similar lines, the All-Party Parliamentary Group recommends that AI should be used to enhance the learning experience, not just as an administrative tool. A helpful starting point may be that there is likely to be more shared interest between the learner and the provider than in most other public and private sector relationships.

Accountability

HLEG: “Including auditability, minimisation and reporting of negative impact, trade-offs and redress”.

There appears to be a significant overlap between the HLEG’s concept of Accountability and the principle of the same name in the GDPR. Whereas the GDPR’s primary focus is on Accountability (including redress) for how the system is designed and operated, the HLEG adds some new requirements for accountability for errors, such as protection for whistle-blowers, trade unions and others wishing to report problems. The Information Commissioner’s Guidance on AI requires organisations to consider impact and to document trade-offs (under Auditing and Governance); in particular all AI applications processing personal data are likely to require a Data Protection Impact Assessment (DPIA). The ICO guidance on explaining AI considers how to choose algorithms that provide an appropriate level of auditability for each particular context.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
