A couple of new documents provide ideas on how to think about ethics when we deploy Artificial Intelligence.
First is an article by Linda Thornton for EDUCAUSE, on Artificial Intelligence and Ethical Accountability. This looks at who should be thinking ethically, finding responsibilities for programmers, managers, marketers, salespeople and organisations that implement AI. Since this is an EDUCAUSE article, it focuses on Higher Education Institutions in their role as purchasers of AI, and proposes a five-step approach – with lots of references – to selecting and using AI ethically.
Second is the latest document from the European Commission’s High-Level Expert Group on Trustworthy AI (HLEG): the Assessment List for Trustworthy Artificial Intelligence. This has specific questions relating to each of the seven requirements set out in the HLEG’s Ethics Guidelines for Trustworthy AI: Human Agency and Oversight; Technical Robustness and Safety; Privacy and Data Governance; Transparency; Diversity, Non-discrimination and Fairness; Societal and Environmental Well-being; and Accountability.
The authors note that answering these questions should involve discussions among a multi-disciplinary team. Given that the questions range from whether the AI is likely to become addictive, and its effects on the environment and “other sentient beings”, to technical questions about the security of data and the stability of algorithms in the face of data attacks, those would be fascinating meetings to be involved with.
One oddity is that, whereas I’ve previously noted that GDPR compliance seemed a good (and, if using personal data, essential) starting point for the HLEG requirements, this Assessment List seems to take things the other way around, suggesting that the GDPR is a useful source when completing the assessment. “Protect[ing] personal data relating to individuals in line with GDPR” is mentioned, but only as one of the things that “might” be included in a prior Fundamental Rights Impact Assessment.
That seems to run the risk of missing both useful guidance and legally-required measures. For example, there’s no mention under Transparency of the information requirements in GDPR Articles 13 & 14; nor, under Accountability, of the overlapping GDPR principle of the same name. Even more fundamentally, there’s no mention of the need to define a legal basis (GDPR Articles 6 & 9) for processing personal data, nor to check purpose compatibility when reusing data. Those may be challenging for some AI systems – though perhaps not as challenging as is sometimes claimed – but that can’t be a reason to ignore them.