Thinking about automation

To help me think about automated systems in network and security management, I’ve put what seem to be the key points into a picture. In the middle is my automated network management or security robot; to the left are the systems the robot can observe and control; to the right, its human partner and the things they need.

Taking those in turn, to the left:

  • The robot has certain levers it can pull. In network management, those might block, throttle or redirect traffic flows; in spam detection, they might send a message to the inbox, to the spambox, or straight to the bin. The first thing to think about is how those powers could go wrong, now or in the future: in my examples, they could delete all mail or block all traffic. If that’s not acceptable, we need to think about additional measures to prevent it, or at least make it less likely (15); one such guardrail is sketched after this list.
  • Then there’s the question of what data the robot can see, to inform its decisions on how to manipulate the levers. Can it see content, or just traffic data? The former is probably needed for spam detection; the latter is probably sufficient for at least some network management. Does it need more, or less, information, such as historic data or information from other components of the digital system? If it needs training, where can we obtain that data, and how often does it need updating (10)?
  • Finally, we can’t assume that the world the robot is operating in is friendly, or even neutral. Could a malicious actor compromise the robot, or simply feed it real or fake data that makes it operate its levers in destructive ways? Why generate a huge DDoS flow if I can persuade an automated network defender to disconnect the organisation for me? Can the actor test how the robot responds to changes, and thereby discover non-public information about our operations? Ultimately, an attacker could use their own automation to probe ours (15).

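To make the first bullet concrete, here’s a minimal sketch of such a guardrail in Python. Everything in it is hypothetical, from the class names to the limits; the idea is simply that the robot’s most destructive lever (binning mail) is rate-capped, so a misbehaving classifier degrades to a reversible action rather than deleting everything.

    from collections import deque
    from enum import Enum
    import time

    class Action(Enum):
        INBOX = "inbox"
        SPAMBOX = "spambox"
        BIN = "bin"          # the irreversible lever

    class GuardedFilter:
        """Wrap a spam classifier with a cap on its most destructive lever.

        If the robot starts binning too much mail in a short window (the
        "delete all mail" failure mode), it downgrades to the reversible
        spambox action instead, so the damage is bounded by design.
        """

        def __init__(self, classifier, max_bins=50, window_seconds=3600):
            self.classifier = classifier    # any callable: message -> Action
            self.max_bins = max_bins        # bins allowed per window
            self.window = window_seconds
            self._bin_times = deque()

        def decide(self, message):
            action = self.classifier(message)
            if action is Action.BIN:
                now = time.monotonic()
                # forget bin events that have aged out of the sliding window
                while self._bin_times and now - self._bin_times[0] > self.window:
                    self._bin_times.popleft()
                if len(self._bin_times) >= self.max_bins:
                    return Action.SPAMBOX   # guardrail tripped: fail safe
                self._bin_times.append(now)
            return action

A real deployment would also alert the human partner when the cap trips, which is exactly the kind of signal the right-hand side of the picture is about.
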
And, having identified how things could go wrong, on the right-hand side:

  • What controls does the human partner need to be able to act effectively when unexpected things happen? Do they need to approve the robot’s suggestions before they are implemented (known as human-in-the-loop), or have the option to correct them soon afterwards? If the robot is approaching the limits of its capabilities, does the human need to take over, or to enable a simpler algorithm or more detailed logging so that the event can be debugged or reviewed later (14)?
  • And what signals does the human need to know when and how to operate those controls? These could include individual decisions, capacity warnings or metrics, alerts of unusual situations, and so on. What logs are needed for subsequent review and debugging (12, 13)? A sketch combining these two points follows this list.

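One way the controls and signals in those two bullets might fit together, assuming the robot can attach a confidence score to each suggestion (the threshold, names and queue here are all hypothetical):

    import json
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("robot")

    AUTO_THRESHOLD = 0.95   # below this confidence, a human must approve
    review_queue = []       # suggestions parked for the human partner

    def apply_action(action):
        """Placeholder for a real lever: block, throttle, redirect, ..."""
        log.info("applied: %s", action)

    def handle(decision_id, proposed_action, confidence, context):
        """Act autonomously when confident; otherwise queue for approval.

        Every decision is logged with its inputs, so that unexpected
        behaviour can be reviewed and debugged after the event.
        """
        record = {
            "id": decision_id,
            "action": proposed_action,
            "confidence": confidence,
            "context": context,
            "time": time.time(),
        }
        log.info("decision: %s", json.dumps(record))
        if confidence >= AUTO_THRESHOLD:
            apply_action(proposed_action)
        else:
            # human-in-the-loop: park the suggestion and alert the operator
            review_queue.append(record)
            log.warning("needs approval: %s", json.dumps(record))

The threshold is the interesting design choice: it sets where the partnership sits between fully automatic and fully human-approved, and could itself be one of the human’s controls.
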
Having applied these questions to the case of email filtering, I’ve found them a helpful guide to achieving the most effective machine/human partnership. Also, and encouragingly, answering them seems to address most of the topics required for high-risk Artificial Intelligence in the draft EU AI Act (the numbers in the bulleted lists are the relevant Articles; see also my visualisation). Whether or not these systems are formally covered by the final legislation, it’s always good when two completely different approaches arrive at similar answers.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
