
Thinking about automation: Malware Detection

Sophos have recently released YaraML, a tool that uses Machine Learning to propose simple rules that can be used to identify malware. Its output has many potential uses, but here I’m considering it as an example of how automation might help end devices identify hostile files in storage (a use-case described by Sophos) and also in emails. As usual, I’m structuring my thoughts using my generic model of a security automat (Levers, Data, Malice, Controls, Signals), and hoping the results are applicable to a general class of automation applications, not just the one that happened to catch my eye…

Levers. In Sophos’ system, the Machine Learning component doesn’t have any levers: it just creates a list of rules. The levers belong to whatever software that ruleset is fed into. If that’s a scanner that examines files in storage, then presumably it will move any file that matches into a quarantine directory: fine if the match is correct, but potentially damaging, or even making the device unusable, if there’s a false match on, for example, a critical operating system or software component. Typical actions when scanning emails – such as marking or filing a message – are easier to remedy when they are mistakenly applied to legitimate content. The most extreme response might be to block a particular organisation, website or IP address that is the source of content considered malicious. A false positive here will be inconvenient, though usually remediable; that said, I have come across examples of automats blocking critical services such as DNS resolvers…
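
To make that remediability concrete, here is a minimal sketch of a reversible quarantine lever: it moves a matching file into a quarantine directory but records where it came from, so a false positive can be undone later. The paths and journal format are invented for illustration, not taken from any particular scanner.

```python
# A minimal sketch of a reversible quarantine "lever": move a suspect file
# into a quarantine directory, but record its original location so that a
# false positive can be undone later. Paths and journal format are invented.

import json
import os
import shutil

QUARANTINE_DIR = "/var/quarantine"
JOURNAL = os.path.join(QUARANTINE_DIR, "journal.json")

def _load_journal() -> dict:
    if os.path.exists(JOURNAL):
        with open(JOURNAL) as f:
            return json.load(f)
    return {}

def _save_journal(journal: dict) -> None:
    with open(JOURNAL, "w") as f:
        json.dump(journal, f)

def quarantine(path: str) -> str:
    """Move a suspect file into quarantine and remember where it came from."""
    os.makedirs(QUARANTINE_DIR, exist_ok=True)
    dest = os.path.join(QUARANTINE_DIR, os.path.basename(path))
    shutil.move(path, dest)
    journal = _load_journal()
    journal[dest] = path
    _save_journal(journal)
    return dest

def restore(dest: str) -> None:
    """Undo a quarantine action, e.g. once a false positive is confirmed."""
    journal = _load_journal()
    shutil.move(dest, journal.pop(dest))
    _save_journal(journal)
```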

Data. The machine learning component of the system takes as input two directories: one containing files considered malicious and one containing a similar number considered good. Based on these, Machine Learning identifies text fragments (“substring features” according to the documentation) that seem to be more common in good or in bad files; YaraML’s output is a list of these fragments with weights indicating how strong an indication of good or badness each represents. Even a low-power end device should be able to search a new file for these fragments, calculate the weighted sum, and check whether it exceeds a threshold. The quality of the rules clearly depends heavily on the quality of the input datasets; finding the necessary quantity of correctly classified samples might be a challenge, as the article suggests that 10,000 of each would be ideal. Statistical models can always misclassify: smaller training datasets might increase this probability, making Signals and Controls particularly important for detecting and remedying misclassifications when they happen.
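
As a rough illustration of how cheap that check could be, here is a sketch of the weighted-substring scoring described above. The feature strings, weights and threshold are invented, and as I understand it YaraML expresses its model as YARA rules rather than as a Python dictionary, so treat this only as an outline of the arithmetic.

```python
# Sketch of weighted-substring scoring. The features, weights and threshold
# below are invented; YaraML's real output takes the form of YARA rules.

FEATURES = {
    b"CreateRemoteThread": 2.7,       # hypothetical indicator of badness
    b"VirtualAllocEx": 1.9,           # hypothetical indicator of badness
    b"Copyright Example Corp": -1.2,  # hypothetical indicator of goodness
}
THRESHOLD = 3.0

def score_file(path: str) -> float:
    """Sum the weights of the substring features present in the file."""
    with open(path, "rb") as f:
        data = f.read()
    return sum(weight for substring, weight in FEATURES.items() if substring in data)

def looks_malicious(path: str) -> bool:
    return score_file(path) > THRESHOLD
```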

Malice. The obvious way for a malicious person to affect the process is by way of the training datasets. If I can insert enough examples of my malware into the “good” collection (or even swap a significant number from your “bad” to “good”) then the resulting rules might provide false reassurance to the end devices. This stresses the need for secure and reliable sourcing of training datasets, and for security during the training and deployment processes. An interesting aspect of making the software available as open source is that different organisations might use different training sets to generate rules for the same malware. At least this limits the scope of any tampering; subsequent (careful, to avoid cross-pollution!) comparison of rulesets might also help to detect this kind of interference.
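
One way to use that independence might be to compare rulesets generated by different organisations and flag features whose weights disagree sharply. The sketch below assumes each ruleset has been reduced to a simple substring-to-weight mapping; the representation and tolerance are mine, not anything YaraML provides.

```python
# Hypothetical cross-check of two independently trained rulesets, each reduced
# to a {substring: weight} mapping. Features whose weights differ sharply, or
# point in opposite directions, are flagged for human review as possible signs
# of a poisoned training set (or simply of very different training data).

def compare_rulesets(ours: dict, theirs: dict, tolerance: float = 1.0) -> list:
    suspicious = []
    for feature in set(ours) | set(theirs):
        w1, w2 = ours.get(feature, 0.0), theirs.get(feature, 0.0)
        if abs(w1 - w2) > tolerance or w1 * w2 < 0:
            suspicious.append((feature, w1, w2))
    return suspicious

# Example: a feature our ruleset treats as benign but theirs treats as
# malicious would appear in the output with weights of opposite sign.
```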

Controls. Thinking about how an organisation might respond if it discovered it had deployed a rogue ruleset – either through malice or accident – the obvious control is to be able to un-deploy it from all end devices. This depends on the facilities provided by the application: anti-virus software is typically designed to add new rulesets quickly in response to new threats, but I’d want to check it could also remove those that turned out to be significantly harmful, or at least change the levers available. Thinking specifically about the risk of quarantining an operating system component, it occurred to me that it would be good to have a list of files that should be treated with extra care. It turns out that someone involved in YARA development had an even better idea: “YARA-CI helps you to detect poorly designed rules by scanning a corpus of more than 1 million files extracted from the National Software Reference Library, a collection of well-known, traceable files maintained by the U.S. Department of Homeland Security”. So destructive false positives against those files should be detected even before the rules are deployed. Nice!
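
The same pre-deployment check can be sketched locally: scan a corpus of known-good files with a candidate ruleset and refuse to deploy it if anything matches. The sketch below uses the yara-python bindings; the file and directory paths are placeholders, and this is not how YARA-CI itself is implemented.

```python
# Local sketch of the YARA-CI idea: scan a corpus of known-good files with a
# candidate ruleset and treat any match as a likely false positive that should
# block deployment. Uses the yara-python bindings; paths are placeholders.

import os
import yara

def false_positive_check(ruleset_path: str, known_good_dir: str) -> list:
    """Return the known-good files that the candidate ruleset matches."""
    rules = yara.compile(filepath=ruleset_path)
    hits = []
    for root, _dirs, files in os.walk(known_good_dir):
        for name in files:
            path = os.path.join(root, name)
            if rules.match(path):
                hits.append(path)
    return hits

if __name__ == "__main__":
    problems = false_positive_check("candidate_rules.yar", "/corpus/known-good")
    if problems:
        print("Do not deploy; matches on known-good files:")
        print("\n".join(problems))
```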

Signals. An obvious desirable signal once a ruleset has been deployed is how many times it has been triggered. That’s relevant for both true positives (“how much badness do we have?”) and false positives (“how accurate is my rule?”). That needs some sort of feedback mechanism from the end devices to the deployers; for applications like email scanning it would also be useful to learn how often users disagree with the rule’s classification, for example by moving a quarantined message back out of the spam/malware folder. For file quarantining, an equivalent signal might come from helpdesk reports of “my machine stopped working”. But the YARA-CI idea suggests that it’s not just raw numbers that matter. A positive match in a folder belonging to the operating system is more significant than one in a user folder, whether it’s true or false. If true, then malicious code has managed to install itself into a particularly dangerous location; if false, then there’s an increased risk that the mistaken quarantining action might have harmful consequences.
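
To illustrate that “not just raw numbers” point, here is a sketch of how match reports fed back from end devices might be aggregated, giving extra weight to matches in operating-system locations. The report format, path prefixes and weights are all invented for illustration.

```python
# Hypothetical aggregation of match reports fed back from end devices. The
# report format (rule id plus matched file path), the system-path prefixes and
# the weights are all invented for illustration.

SYSTEM_PREFIXES = ("/usr", "/bin", "/sbin", "C:\\Windows")

def severity(report: dict) -> int:
    """Weight a match in an operating-system location more heavily."""
    return 10 if report["path"].startswith(SYSTEM_PREFIXES) else 1

def summarise(reports: list) -> dict:
    """Per-rule totals, so noisy or especially risky rules stand out."""
    summary = {}
    for r in reports:
        entry = summary.setdefault(r["rule"], {"matches": 0, "severity": 0})
        entry["matches"] += 1
        entry["severity"] += severity(r)
    return summary

# Example: summarise([{"rule": "ml_rule_1", "path": "/usr/bin/ls"},
#                     {"rule": "ml_rule_1", "path": "/home/alice/doc.pdf"}])
# -> {"ml_rule_1": {"matches": 2, "severity": 11}}
```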

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
