Earlier in the year, Networkshop included a presentation on Juniper’s Mist AI system for managing wifi networks. I was going to look at it – as an application I don’t know – as a test for my model for thinking about network/security automation. That may still happen, but first it has taken me down an interesting diversion…
The product video shows two use cases: first, identifying when an access point needs a firmware upgrade and (once approved by a human) doing it; second, tracing from an intermittent client connectivity failure through to a possible root cause in DHCP pool management. In terms of the human/automat relationship, those seem to represent different reasons for automating. The first is, perhaps, the traditional application of automation in the physical world, where humans are doing a repetitive task that could be scripted. Not necessarily a simple task – one of the benefits of automation is that a long sequence of actions can be performed consistently – but a repeated one. The second seems to challenge human capability from the opposite direction – a one-off situation with so many possible resolutions that working out which are most likely to be right involves more data, more knowledge and, perhaps, more linkages than most humans can keep track of. Here automation helps by narrowing the whole solution space down to a small enough set of candidates that a human can examine each one, work out its consequences and at least plan the implementation of the one they choose.
Which takes me back to my picture, but at a higher level, of how automation works and how it might evolve.
The simple tasks spend most of their time in the left-hand (“automated”) loop, but we need an automat that can detect and alert us when something about the task has changed so the original automation “script” may no longer be appropriate. Then the human can step in, perhaps redefine the boundaries and/or update the script for the new circumstances.
The complicated tasks spend most of their time in the right-hand (“human”) loop, but an automat may be able to spot patterns – either in the technological context or in how humans respond to it – that help it make occasional suggestions. This could be anywhere between a traditional expert system approach (“if connection failures, look at DHCP”) and something more like data-driven machine learning (“which kinds of logfile entry are correlated with this kind of problem report?”). Some humans can do that unassisted – I remember an amazing presentation at a FIRST conference many years ago where a human analyst pivoted through a series of data sources, including image hashes, to link together apparently isolated malware incidents. But most of us could probably use the occasional suggestion from an automat that can look at far more datapoints and possible connections than we can. Ideally we’d involve both the human “I remember one of these…” and the machine “oooh, data…” approaches. And, by providing (cross-)feedback on the results of collaboration, maybe even improve both?
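To make the contrast concrete, here is a minimal sketch (entirely hypothetical – the rule text, log-entry names and data shapes are invented for illustration, not taken from Mist or any real product) of the two styles of suggestion an automat might offer: a hand-written expert-system rule, and a data-driven look at which log-entry types have co-occurred with past reports of the same symptom.

```python
from collections import Counter

# Expert-system style: hand-written rules mapping a symptom to a hint.
# (Rule text is invented for illustration.)
RULES = {
    "connection_failure": "check DHCP pool utilisation",
    "slow_roaming": "check AP channel overlap",
}

def rule_hint(symptom):
    """Return the canned hint for a symptom, if we have one."""
    return RULES.get(symptom)

# Data-driven style: which log-entry types turned up most often
# alongside earlier reports of the same symptom?
def correlated_log_types(symptom, incident_history, top_n=3):
    """incident_history: list of (symptom, [log_entry_types]) pairs."""
    counts = Counter()
    for past_symptom, log_types in incident_history:
        if past_symptom == symptom:
            counts.update(log_types)
    return [log_type for log_type, _ in counts.most_common(top_n)]

# Toy history: made-up incidents with made-up log-entry labels.
history = [
    ("connection_failure", ["DHCPNAK", "auth_ok"]),
    ("connection_failure", ["DHCPNAK", "radius_timeout"]),
    ("slow_roaming", ["channel_change"]),
]

print(rule_hint("connection_failure"))
print(correlated_log_types("connection_failure", history))
```

The point of the toy example is that the two approaches can cross-check each other: the rule encodes a human's "I remember one of these…", while the counting encodes the machine's "oooh, data…" – and feedback on which suggestions actually helped could refine both.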
For a final squeeze on the diagram, could we use it to explore moving some (sub-)problems from one side to the other? If part of a simple problem evolves so that its solutions now need human approval, can we define which part, and help the automat send the right situations for approval? Or, coming the other way, identify some parts of complex problems that we understand sufficiently well (at least, for now) that they can transition at least from human-advice to human-approval, and maybe even to fully automated?
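One way to picture that transition is as a routing decision. The sketch below is purely illustrative – the idea of a 0–1 "confidence" score and the threshold values are my own assumptions, not anything from the product – but it shows how sub-problems might move between the three relationships as our understanding of them grows or shrinks.

```python
# Hypothetical sketch: route a sub-problem to one of three human/automat
# relationships, based on an (assumed) 0-1 confidence that the automat's
# proposed fix is right. Thresholds are invented for illustration.

def route(confidence):
    """Map a confidence score onto an automation mode."""
    if confidence >= 0.95:
        return "fully automated"   # act, log, move on
    elif confidence >= 0.6:
        return "human approval"    # propose a fix, wait for sign-off
    else:
        return "human advice"      # present data and suggestions only

# As confidence in a sub-problem rises, it drifts leftwards in the diagram:
for score in (0.3, 0.7, 0.99):
    print(score, "->", route(score))
```

Moving a sub-problem "from one side to the other" then just means adjusting where it sits relative to those thresholds – and the interesting design question is who, or what, gets to adjust them.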