
Does the AI Act allow automated network defence?

In response to my posts about the relevance of the draft EU AI Act to automated network management, one concern was raised: would falling within the scope of this law slow down our response to attacks? From the text of the Act, I was pretty sure it wouldn't, so I'm grateful to Lilian Edwards for the light-bulb moment that not only won't it in practice, it can't in principle.

That's because the AI Act follows in a long tradition of European product liability and safety laws. These regulate how products are developed but say almost nothing about how they may be used after sale or supply. In the case of the AI Act, Articles 9 to 19 apply to the designers and providers of high-risk AI systems, but only Article 29 applies to users, and it simply requires them to use the system in accordance with the instructions.

So the AI Act does suggest how to think about AI during its development. As I've suggested in a previous post, that is exactly the kind of thinking we should be doing anyway, to reduce the risk of our automation going rogue (perhaps encouraged by a bad guy). By insisting, in Article 14, that the system provide all the human-machine tools the user might need for effective oversight and control of operations, the Act should even increase the flexibility and speed with which we can deploy and use automation. Might we want the possibility of fully automated defence? If so, Article 14 reminds us to think, during software development, about the tools we will need to do that safely. Do we need the option to step in, audit and debug the automat's behaviour? If so, then Article 12 reminds us to build in the controls and records those actions and processes will need.

Where a literal interpretation of law might be a problem is Article 22 of the GDPR, whose opaque wording (a source of confusion since at least 2001) now seems to be interpreted as a ban – rather than a right to request human review – on "decision[s] based solely on automated processing … which produce[] legal effects … or similarly significantly affect[] him or her". For a full discussion, see chapter 6 of my paper on incident detection. The problem is that, unlike the corresponding text in Recital 38 of the Law Enforcement Directive, the GDPR omits the word "adverse". So an over-literal reading could treat this as prohibiting automated processing with significant beneficial effects: say, protecting an individual's data and rights against spam and malware (approved by the Article 29 Working Party in 2006), distributed denial of service attacks or, the most recent application, ransomware.

Does the GDPR really compel us to wait for individual human approval of automated defensive measures that regulators have supported for fifteen years? Fortunately, the approach to interpreting European law is "purposive", so a legally valid response to such a suggestion would be "that's nuts".

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
