Some security incidents need more than a technical solution. Two talks at this week’s FIRST conference looked at the importance of human factors in crisis management and vulnerability handling.
Jaco Cloete looked at situations where a cyber-incident can become a business incident, causing reputational damage, social media fallout, loss of market share, regulatory fines, even a liquidity crisis. Such incidents need a Cyber Crisis Management Team (CCMT) to coordinate between stakeholder groups and internal teams, the latter including the CSIRT carrying out the technical investigation. The core of the CCMT should be the CISO, the CIO, the executive responsible for risk and the executive responsible for the affected business unit. Others who may be brought in – as required by the nature and progress of the incident – include legal counsel, insurance, social media, the CSIRT, crisis management experts, the contact centre, investor relations and forensics. Where the organisation does not have these functions in-house, it may need to engage external help.

The CCMT needs to meet regularly, at least daily, in an appropriately secured and resourced “war room” that does not depend on any infrastructure that may have been compromised. Even organisations whose size does not justify maintaining such a facility permanently should have plans, policies and protocols in place to create and operate one on demand. The decision to invoke these steps (something else to document in the crisis plan) may be based on publicity; exposure of sensitive client data; or market, industry, regulatory or operational impact.

One major job of the CCMT is to facilitate all communications – both external and internal, though in a crisis the difference between the two should not be relied upon. Key points for the communications strategy: inform regulators and clients/victims first; be honest and build trust; show competence; give regular updates; use social media and a dedicated microsite; stick to facts, not speculation; strike the right tone (the clients are the victims, not you – apologise to them); and respect confidentiality.
Mark Stanislav looked at another situation where communications are critical: receiving reports of vulnerabilities in products or services. He suggested treating this as a customer support function rather than a technical one, and using personas to help staff maintain appropriate communications with those reporting vulnerabilities. At a minimum, a persona should suggest the likely motivations, pain points, technical expertise and community visibility of the individual you are talking to. This may come more naturally if those attributes are set within an appropriate back-story, name and (stock) image. It occurs to me that a pre-defined persona might also provide a flag when someone is behaving “out of character” and may need particular care.

The personas need to be realistic. Specifically for bug reporters, HackerOne have some very useful statistics on why people find bugs: is it a hobby, an educational project, a job, or did they find it accidentally in the course of their work (even I have done this!)? This may well give clues about how they will react to both successful and unsuccessful engagements: will the story be told at a major conference, on a popular social media feed, in an essay, or forgotten in mutual embarrassment? But there are also more subtle signals to bear in mind: a professional researcher will expect (and be worth) individual treatment, whereas for a hobbyist with a fuzzing tool it is actually more dangerous to leave the standard script and risk being seen as not delivering on what you promise. It may be tempting to respond to every report with “that’s a really interesting bug”, but don’t do that unless you are in a position to follow through on the implied promise.

People who choose to report bugs to those who can fix them, rather than sell them on the black market, are demonstrating good intent. But don’t be complacent: failing to understand and respond to their reasons for reporting can quickly turn a friend into an enemy.
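As a minimal illustration of the persona idea – the field names, example personas and triage rule below are my own invention, not from the talk – a persona could be captured as a small data structure that support staff consult when deciding how to engage a reporter:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Persona:
    """Hypothetical reporter persona: the minimum attributes suggested
    in the talk (motivations, pain points, expertise, visibility),
    plus a name for the back-story."""
    name: str
    motivation: str        # e.g. "hobby", "education", "job", "accidental"
    pain_points: List[str]
    expertise: str         # "low" | "medium" | "high"
    visibility: str        # community visibility: "low" | "medium" | "high"

def response_style(p: Persona) -> str:
    """Illustrative triage rule: a professional, highly visible researcher
    gets individual treatment; everyone else gets the standard script,
    since deviating from it risks over-promising."""
    if p.motivation == "job" and p.visibility == "high":
        return "individual"
    return "standard script"

# Two sketch personas: a hobbyist running a fuzzer, and a professional researcher.
hobbyist = Persona("Casey", "hobby", ["slow replies"], "medium", "low")
researcher = Persona("Sam", "job", ["credit", "disclosure timeline"], "high", "high")

print(response_style(hobbyist))    # standard script
print(response_style(researcher))  # individual
```

This is only a sketch of the concept; in practice the persona would live in a support playbook rather than code, with the back-story and stock image attached.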
[Typing that reminded me of another possible source for real researcher personas, and an excellent read: Chris van’t Hof’s Helpful Hackers (e)Book]