
AI: Regulation isn’t enough

Alan Shark’s SOCITM Share National keynote looked at why regulation alone is not sufficient to deal with emerging technologies, and at the complementary role that ethics needs to play.

Although privacy is not the only threat posed by such technologies, it does seem to be the one that has got people interested in the debate, whether over face recognition, tracking by apps, surveillance cameras, biometrics, smart “speakers” (actually, microphones) or deep fakes. It’s no longer just privacy activists who are worried about how much we give away to these and other applications, asking how long that data is kept, where it is stored, who has access, and who it might be shared with: who does, and who should, decide? The GDPR was highlighted as a leader in both its regulatory and ethical aspects.

Most technologies have the potential for both socially beneficial and socially harmful uses, so simple technology bans will have unintended effects: an Illinois ban on automated face-recognition makes it illegal to sell a robot dog that can recognise its owner. Whether this is an acceptable loss, or whether the law needs to be more nuanced, is an ethical question. Should we ban uses, rather than technologies? But then, how to define the uses: can regulation deal with the viral spread of conspiracy theories on social media and, if so, what levers should it apply? Again, a difficult, ethical question.

Although “AI” may be seen as the most urgent area to address, we need to break that term down. At least three categories can be identified:

  • Systems that analyse large datasets to derive (relatively) static general-purpose insights or capabilities: much of “big data” falls into this category along with things like speech and image recognition;
  • Systems that use individual data to inform human decision-makers. Perhaps “Augmented Intelligence” is a better term for these, since humans can – at least in principle – interrogate the system’s suggestion and ignore or override it;
  • Systems that make autonomous decisions, learn from what happens, and modify themselves. Here there may be no way either to predict or understand the system’s decisions or – depending on the kinds of decision assigned to it – to override or reverse their consequences.

In the first category, the main focus of ethics is probably on the choice of application and data (the answers may then lead to graduated regulation, as in the GDPR/ePrivacy Directive) and on how systems are trained and tested; in the second, on ensuring that humans are actually able to make appropriate decisions; in the third, ethics may identify domains and kinds of decision where we simply do not want to develop or deploy such tools.

And, though privacy is a good starting point, our ethical thinking needs to range more widely, covering both the technologies themselves and their consequences. What happens if inaccurate data are fed into new technologies? Are our assumptions reliable (e.g. that blockchain is immutable, even in the presence of market dominance)? Uses have been proposed – some already implemented – that could lead to bad decision-making; loss of life or destruction of property; loss of justice (even if “perfect” judges were possible, could society cope?); irreversible decisions; and distrust of Government and technology. We need to consider those risks as well as the benefits that the technologies might deliver.

For these discussions to take place, ethics needs to be recognised as a tool complementary to regulation. At the moment, technology ethics is increasingly included in higher education science curricula; making it part of policy and public administration curricula as well might help increase its visibility. Ethical review boards – organised by government, region or discipline – might examine emerging technologies and suggest an appropriate balance of ethical and regulatory tools for each. Standards, testing processes and certification of ethical engagement (throughout the development process, not just the operation of technology) might inform markets and, where appropriate, legislators.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
