
Automating Digital Infrastructures

Most of our digital infrastructures rely on automation to function smoothly. Cloud services adjust automatically to changes in demand; firewalls detect when networks are under attack and automatically try to pick out good traffic from bad. Automation adjusts faster, and on a broader scale, than humans can. That has advantages: when Jisc’s CSIRT responded manually to denial of service attacks it took us about thirty minutes – remarkably quick by industry standards – to mitigate the damage; now we often do it before the target site even notices the attack. But automation can also amplify mistakes: years ago I worked with one site whose anti-virus update deleted chunks of the PC operating system, and another whose firewall had decided that responses to DNS queries were hostile and discarded them. Automation has power, in both directions!
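To make that double edge concrete, here is a minimal Python sketch of rate-based mitigation with a deliberately crude guardrail. Everything in it (the thresholds, the block() and alert_human() hooks, the 20% cap) is my own illustrative invention, not Jisc's actual CSIRT tooling: the point is just that the automation acts at machine speed, but is designed to stop and ask a person before it can amplify its own mistake.

```python
from collections import defaultdict

# Hypothetical sketch of rate-based attack mitigation with a guardrail.
# Thresholds and the block()/alert_human() hooks are illustrative
# inventions, not real mitigation tooling.

REQUEST_THRESHOLD = 10_000   # requests per window before a source is suspect
MAX_BLOCKED_FRACTION = 0.2   # guardrail: never auto-block >20% of sources

counts = defaultdict(int)
blocked = set()

def block(ip):
    print(f"[auto] blocking {ip}")    # stand-in for pushing a firewall rule

def alert_human(msg):
    print(f"[escalate] {msg}")        # stand-in for paging an operator

def record_request(source_ip, total_sources):
    counts[source_ip] += 1
    if counts[source_ip] <= REQUEST_THRESHOLD or source_ip in blocked:
        return
    # Guardrail: if "mitigation" would swallow a large slice of the
    # network, the model of bad traffic is probably wrong (think of the
    # firewall discarding DNS replies), so pause and ask a human rather
    # than amplifying the mistake at machine speed.
    if (len(blocked) + 1) / max(total_sources, 1) > MAX_BLOCKED_FRACTION:
        alert_human("auto-mitigation would block too much traffic; pausing")
        return
    blocked.add(source_ip)
    block(source_ip)
```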

So it’s interesting to see suggestions that a future European law on Artificial Intelligence might be further broadened to classify digital infrastructure operations as a “high risk” use for AI, on a par with industrial and public safety measures. [UPDATE: it’s been pointed out that page 48 of the UK’s Cyber Security Strategy also mentions “Use of AI to secure systems. Many AI algorithms are complex and opaque, with the potential to introduce new classes of vulnerability. Understanding and mitigating these emerging threats is a priority”]. A “high risk” classification definitely doesn’t mean AI is banned; it means AI’s power is expected to be wrapped in a lot of human thought. Leaving an AI to just get on with it isn’t appropriate. But nor should a human be approving every decision: that throws away much of automation’s potential benefit.

Instead the model (in Articles 9-20 & 29 of the original Commission draft) seems to require a close and continuing partnership between human and AI capabilities. Humans must design appropriate contexts and limits within which the AI can work, taking account of the risks of both too much automation and too little; inputs and training data must be appropriately chosen and updated; there must be documentation of how the system was intended to behave, and of how it actually did; live information, monitoring, authority and fallback plans must enable humans to take over quickly and effectively when the AI is no longer behaving appropriately. There are repeated reminders that systems may be working in actively hostile environments. As with my old DNS example, attackers may try to deceive or confuse the AI and turn its power to their advantage. Humans will definitely be needed to identify these possibilities, design precautions against them, and respond when the AI is recruited to the dark side. Providing excellent digital infrastructures will need both excellent AI and excellent people, working together.
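As a purely hypothetical sketch of that partnership, the Python below shows what a human-designed envelope around an AI decision-maker might look like. The action names, the confidence floor and the three-way outcome are my assumptions, not anything the draft specifies; what it tries to capture is the draft's pattern of logging, defined limits, escalation and fallback.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("infra-ai")

@dataclass
class Decision:
    action: str        # e.g. "scale_up", "rate_limit", "drop_all_dns"
    confidence: float  # the model's own confidence, 0..1

# The operating envelope is designed by humans in advance: which actions
# the AI may take alone, and when it must hand control back.
AUTONOMOUS_ACTIONS = {"scale_up", "rate_limit"}
CONFIDENCE_FLOOR = 0.9

def supervise(decision):
    # Every proposal is logged, so how the system was intended to behave
    # can later be compared with how it actually did.
    log.info("AI proposed %s (confidence %.2f)",
             decision.action, decision.confidence)

    if decision.confidence < CONFIDENCE_FLOOR:
        return "fallback"    # revert to a safe default and alert operators
    if decision.action in AUTONOMOUS_ACTIONS:
        return "execute"     # inside the envelope: act at machine speed
    return "escalate"        # novel or high-impact action: a human decides

if __name__ == "__main__":
    print(supervise(Decision("rate_limit", 0.97)))    # -> execute
    print(supervise(Decision("drop_all_dns", 0.95)))  # -> escalate
    print(supervise(Decision("rate_limit", 0.40)))    # -> fallback
```

Routine actions inside the envelope run at machine speed; anything novel waits for a person; low confidence triggers the fallback plan. That, roughly, is the division of labour the draft seems to envisage.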

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
