Deciding whether or not to use Artificial Intelligence (AI) should involve considering several factors, including the institution’s objectives, purpose and culture, its readiness, and issues relating to the particular application. Jisc’s Pathway Towards Responsible, Ethical AI is designed to help you with that detailed investigation and decision-making.
But I wondered whether there might be a check that can be done in a few minutes, to get an initial feel for whether a particular use of AI is likely to be a good institutional fit. So here’s a proposed scale of “AI Terrain Roughness”, using objective factors that should be easy to determine from the documentation of a candidate product or service. It only covers some of the relevant factors, but if a glance at this “Terrain” seems to fit your level of experience and comfort with data and AI, then it’s worth moving on to the more detailed investigation.
The scale tries to capture the complexity that’s likely to be involved in using AI for a particular purpose, at all levels from technical and legal to organisational consultation and communications. I’ve chosen a hiking metaphor: for even a simple exploration you should know the route and weather forecast; whereas a complex AI project will require good preparation, skills and equipment, take significant time and teamwork, probably involve setbacks and some (institutional) discomfort. On deeper investigation, you may conclude that the benefits of a particular application do justify that complexity, and experience may give you confidence that you can deal with any problems. But a three-mountain application probably shouldn’t be your first venture into AI.
The factors considered are:
- the kinds of data used, or generated, by the AI;
- the technology used, or purpose for which it is deployed;
- the degree of prior control over how the AI is configured for, or learns about, its environment; and
- the need to integrate it with other existing systems and data.
For each factor the scale is based on either legal (data & purpose) or technical (learning & integration) distinctions that have been identified as making AI more or less challenging. If you really want to reduce this to a single number, choose the highest of the four.
|**Data**|This factor records the most sensitive type of data that the system appears to use or generate. Definitions are taken from the General Data Protection Regulation (GDPR).|
|---|---|
|1|No use or creation of personal data.|
|2|Uses or creates personal data (“data relating to an identified or [directly or indirectly] identifiable natural person … in particular by reference to an identifier such as a name, identification number, location data, online identifier or one or more factors specific to … that natural person” – GDPR Art. 4(1)). Note that this is much broader than the US concept of “personally identifiable information”.|
|3|Uses or creates special category personal data (racial or ethnic origin, political opinions, religious or philosophical beliefs, trade union membership, biometric, health and sexual data – GDPR Art. 9).|
|**Purpose**|This factor reflects how regulators assess the risk of using AI with a particular technology or for a particular purpose. Sources are primarily the EU draft Regulation on AI, UK and EU case law, and statements by national data protection regulators, particularly where these relate to education.|
|---|---|
|1|Technology/purpose is regarded as low-risk (i.e. typically not mentioned) in law, regulation and cases.|
|2|Technology/purpose is regarded as high-risk in law, regulation and cases (including determining access to education or the course of someone’s life, e.g. assessing students’ performance; automated decision-making).|
|3|Technology has been banned for some purposes or contexts (e.g. behavioural manipulation or profiling likely to cause harm or unjustified disadvantage; face recognition and other forms of remote biometrics; potentially discriminatory data sources).|
|**Learning**|This factor looks at how the AI is programmed or “learns” about its local environment. As discussed in the AI Pathway, this includes the degree of control over the inputs from which the AI learns, the method by which it learns, and the range of outputs it can produce. Less control lets the system behave in unexpected ways, which may be desirable in some contexts but not in others. Managing the risk of unexpected behaviour adds complexity.|
|---|---|
|1|Learning/programming by well-understood methods to map from a known set of input data to a defined set of outputs (risk can be managed by design and comprehensive testing).|
|2|Learning where outputs are constrained (e.g. to a pre-defined set of decisions or categories), but either the input data or the method is unconstrained, e.g. supervised or goal-based learning (risk of undetected or inexplicable learning failures, including bias).|
|3|Real-time learning, feedback loops or uncontrolled outputs (risk of emergent inappropriate behaviour).|
|**Integration**|This factor considers whether the AI can be, or must be, integrated with other systems. Such integration increases the technical/effort requirement, but also creates risks that must be managed. AI that consumes data from other systems can amplify problems (such as data quality or understanding of process); AI outputs consumed by other systems may lose essential qualifiers or caveats. Either may produce unexpected or harmful results.|
|---|---|
|1|AI system is designed to operate standalone (e.g. the natural language processing in an informational chatbot).|
|2|AI system can operate standalone, or be integrated with other data and services (e.g. adding transactional ability to a chatbot).|
|3|AI system can only function if integrated with other data and services (e.g. a guided learning system that needs information from the virtual learning environment).|
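The “choose the highest of the four” rule is simple enough to sketch in code. The function below is purely illustrative (it is not part of any Jisc tool): it assumes each factor has been scored 1–3 against the tables above and returns the overall terrain roughness.

```python
# Illustrative sketch of the scale's scoring rule: the overall
# roughness is the highest of the four individual factor scores.
# Factor names follow the tables above; the function itself is
# a hypothetical helper, not an official implementation.

FACTORS = ("data", "purpose", "learning", "integration")

def terrain_roughness(scores: dict) -> int:
    """Return overall roughness (1-3): the highest factor score."""
    for factor in FACTORS:
        if not 1 <= scores[factor] <= 3:
            raise ValueError(f"{factor} score must be 1, 2 or 3")
    return max(scores[factor] for factor in FACTORS)

# Example: a chatbot that holds users' names (personal data: 2),
# serving a low-risk purpose (1), with constrained supervised
# learning (2) and optional integration (2).
print(terrain_roughness(
    {"data": 2, "purpose": 1, "learning": 2, "integration": 2}))
```

A single high score is enough to make the whole terrain rough: in the example, every factor could drop to 1 except data, and the result would still be 2.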