
Vulnerability Disclosure: Why are we still talking about it?

Ben Hawkes, from Google’s Project Zero, gave a fascinating keynote presentation on vulnerability disclosure policies at this week’s FIRST Conference. There is little disagreement about the aim of such policies: to ensure that discovering a vulnerability in software or hardware minimises the harm the vulnerability subsequently causes. And, to achieve that, there are really only three things a vulnerability discoverer can control: Who to tell, What to tell them, and When. So why have we been debating the answers since the Security Digest and Bugtraq lists of the late 1980s and early 1990s, still without reaching a conclusion?

Mostly it comes down to that word “harm”: how do we measure it, and how do we predict it?

First, different people and different organisations will have different measures of “harm”. Is a vulnerability that allows a million PCs to be used for cryptocurrency mining more or less harmful than one that allows a trade negotiator’s communications to be read by an adversary? Are we concerned about short- or long-term harms? Harms to individuals, organisations or societies? There probably is no single right answer to these questions, in which case the best that organisations processing vulnerabilities can do is to decide what their answer is, and document it so others can understand their behaviour.

Where there does seem to be scope for improvement is in our collective ability to assess the likelihood of a particular vulnerability causing harm. This needs an assessment of how attackers who know about the security weakness are likely to use that knowledge: before, during and after our discovery and disclosure process. How many of our vulnerabilities were already known and being silently exploited? If we discover a vulnerability first, how long before an attacker does? How much skill do they need to exploit it? What motivates them to use it? Will our disclosure significantly change the situation?

Here we are dealing with incomplete information. Attackers have strong incentives to conceal their actions and reasons from us, maybe even to actively mislead. The ones we do find out about are the “failures”: the attackers who got caught. We may be able to learn a little from these, but it’s the successful ones we really need to be able to predict. I was reminded of the “bullet-hole misconception” (the survivorship bias in wartime studies of returning bombers), though here we are observing only the attackers that failed to escape, rather than only the planes that succeeded in doing so.

Statistical efforts to fill in this information gap include FIRST’s Exploit Prediction Scoring System SIG. But Ben suggested that another angle might be to look at ourselves: ask experts to predict which recent vulnerabilities will cause significant harm, record their predictions, later test those predictions against what actually happens, and then try to understand how the more successful experts do it.
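One way to make that idea concrete is a simple calibration exercise: record each expert’s probability that a given vulnerability will cause significant harm, then score those forecasts once the outcome is known. The sketch below uses a Brier score (mean squared error between forecast and outcome) for that scoring step; the expert names, CVE identifiers and probabilities are purely illustrative, and this is my own illustration rather than anything proposed in the keynote.

```python
# A minimal sketch (not from the talk) of recording expert forecasts about
# vulnerability harm and scoring them once outcomes are known.
# All names, CVE identifiers and probabilities are hypothetical.

# Each expert gives a probability (0..1) that a vulnerability will cause
# significant harm within some agreed window (e.g. 12 months).
predictions = {
    "expert_a": {"CVE-0000-0001": 0.9, "CVE-0000-0002": 0.2},
    "expert_b": {"CVE-0000-0001": 0.4, "CVE-0000-0002": 0.6},
}

# Later, record what actually happened: 1 = significant harm observed, 0 = not.
outcomes = {"CVE-0000-0001": 1, "CVE-0000-0002": 0}

def brier_scores(predictions, outcomes):
    """Mean squared error between each expert's forecasts and the outcomes.
    Lower is better; a perfect forecaster scores 0."""
    scores = {}
    for expert, preds in predictions.items():
        shared = [cve for cve in preds if cve in outcomes]
        if not shared:
            continue
        scores[expert] = sum(
            (preds[cve] - outcomes[cve]) ** 2 for cve in shared
        ) / len(shared)
    return scores

print(brier_scores(predictions, outcomes))
# e.g. {'expert_a': 0.025, 'expert_b': 0.36}: expert_a's forecasts were closer.
```

Run over enough vulnerabilities, scores like these would show which forecasters are consistently better calibrated, and that is where the more interesting question starts: what are they looking at that the rest of us are not?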

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
