At the FIRST conference this week I presented ideas on how effective incident response protects privacy. Indeed, since most common malware infects end user devices and hides itself, an external response team may be the only way the owner can learn that their private information is being read and copied by others. The information sources used by incident responders – logfiles, network flows, etc. – could also be used to invade privacy, but I suggested three common incident response practices that should ensure our work will protect, rather than harm, privacy:
- Concentrate on constituency: each incident response team has a constituency, and those within it obtain the most direct privacy and security benefit from the team’s work. Incident responders will also obtain information from outside their constituency (the vast majority of attacking systems have themselves had their security and privacy compromised) but, except for the most serious threats to the Internet community, incident response should be limited to reporting external problems to those external parties best able to fix them;
- Minimise data and processing: whether using information to protect its own constituency, or to warn others of their problems, teams should only process or share information that is necessary for the task in hand. Processing unnecessary information both makes incident response harder and creates an unnecessary risk to privacy;
- Think about information flows: when sharing information, as well as only sharing information that is needed, teams should only share it with those who can use it. For example, before sending details of compromised accounts to an affected service, check that the recipient is able and willing to use the information. Otherwise the sharing creates only a privacy risk, with no compensating benefit. Typically, incident response work involves notifying victims of security (and privacy) breaches, so any personal data is sent towards the person it relates to. This contrasts with (and should always be clearly distinguished from) law enforcement investigations, where personal data about an attacker is sent away from them, to support investigation or prosecution.
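The minimise-then-check-the-recipient pattern in the second and third practices can be sketched in code. The field names, record layout and recipient check below are illustrative assumptions, not any team’s actual schema:

```python
from typing import Optional

# Hypothetical set of fields the recipient actually needs to fix the problem.
NEEDED_FIELDS = {"timestamp", "source_ip", "attack_type"}

def minimise(record: dict) -> dict:
    """Keep only the fields necessary for the task in hand."""
    return {k: v for k, v in record.items() if k in NEEDED_FIELDS}

def share(record: dict, recipient_can_act: bool) -> Optional[dict]:
    """Share a minimised record only with a recipient able and willing
    to use it; otherwise sharing creates privacy risk with no benefit."""
    if not recipient_can_act:
        return None
    return minimise(record)
```

The point of the sketch is that minimisation happens before any record leaves the team, and that sharing is skipped entirely when the recipient cannot act on it.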
Those three practices also correspond to the balancing test that’s required under European privacy law. The draft General Data Protection Regulation states in recital 39 that protecting the security of computers, data and networks is a legitimate interest of organisations (similar wording is already contained in recital 53 of the privacy Directive for network operators). When processing personal data for a legitimate interest, organisations are required to ensure
- that the interest is legitimate;
- that the processing is necessary – i.e. that there is no less intrusive way to achieve the interest; and
- that the strength of the interest justifies the risk of harm to individuals.
The Directive and Regulation both say that incident detection and response are legitimate interests; minimisation should ensure that processing is necessary; and attention to information flows should ensure that the risk of harm is reduced. The balancing test described by the Article 29 Working Party of European Data Protection Supervisors provides a final check for incident response: having minimised the risk to individuals, is the (potential) incident sufficiently severe to justify the risk that remains? If not, then incident responders should not act until either the risk can be reduced or the likely severity of the incident has increased.
After the talk we discussed how incident response could be conducted in stages, gradually narrowing down on the serious problems. As the investigation gets deeper (and potentially more privacy invasive), the number of systems or accounts being investigated should decrease, while the confidence that they have a security problem increases. Such an approach can maintain the required balance between threat severity and privacy intrusion. The stage involving the greatest risk to privacy – identifying the people, rather than machines, involved in an incident – will normally occur last, when the threat is most certain and the number of people affected has been reduced as far as possible.
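The staged narrowing can be pictured as a simple pipeline, where each stage examines fewer records at higher confidence. The thresholds and record fields here are hypothetical, chosen only to make the shape of the process concrete:

```python
# Illustrative staged triage: each stage passes on fewer records,
# each with higher confidence of a real security problem.

def machine_triage(log_records):
    """Stage 1: automated scan; logs of normal activity never reach a human."""
    return [r for r in log_records if r["score"] >= 0.5]

def analyst_review(alerts):
    """Stage 2: a human discards apparent false positives."""
    return [a for a in alerts if not a["false_positive"]]

def deep_investigation(incidents):
    """Stage 3: the most intrusive step, run only on confirmed,
    serious incidents – the smallest set of all."""
    return [i for i in incidents if i["score"] >= 0.9]
```

At each step the privacy intrusion deepens but the population it touches shrinks, which is what keeps the balancing test satisfied throughout.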
For example, the initial processing of raw logs is almost always done by machine, so logs of normal activity need never be seen by a human. A subset of the log records will generate alerts, indicating a possible security problem; these are checked by a human to eliminate apparent false positives. Deeper investigation of the likely real incidents can then focus much more narrowly on the data that relates to serious problems. Where it involves a significant risk to privacy, this investigation stage may require higher-level management approval or the support of human resources staff. Such a staged approach requires tools that avoid displaying privacy-sensitive data not needed at the current stage of response. Fields can either be omitted or masked using techniques such as one-way hashing. Masking should only be removed once there is sufficient confidence that a serious incident is involved, and its removal should generate an audit log so that unnecessary privacy intrusions can be identified and dealt with.
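A minimal sketch of such masking might look like the following. The function names, the salt handling and the audit-log format are assumptions for illustration, not a description of any particular tool:

```python
import hashlib

# Every removal of masking is recorded here for later review.
AUDIT_LOG = []

def mask(value: str, salt: str = "per-deployment-secret") -> str:
    """Replace a sensitive field with a truncated one-way hash, so
    analysts can correlate records without seeing the identifier."""
    return hashlib.sha256((salt + value).encode()).hexdigest()[:16]

def reveal(record: dict, field: str, analyst: str, reason: str) -> str:
    """Show the original value of a masked field, but only alongside an
    audit entry recording who looked, at what, and why."""
    AUDIT_LOG.append({"analyst": analyst, "field": field, "reason": reason})
    return record[field]
```

Because the same input always produces the same hash, analysts can still group records by the masked value; the real identifier appears only through `reveal`, leaving the audit trail the text describes.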
Recent incidents involving the US Office of Personnel Management and the LastPass password vault have shown the importance of effective incident detection and response in protecting privacy.