Privacy and Incident Response

At a meeting of TERENA’s CSIRT Task Force last week, I presented an updated version of my paper on Privacy and Incident Response.

Responding effectively to incidents is essential to protect the privacy and other rights of individuals and organisations that use the Internet: compromises, phishing and the like clearly infringe those rights. However, incident response may itself infringe the privacy rights of both victims and attackers, since it normally involves at least looking at logs of their activity and sometimes at the contents of messages or storage. Fortunately the law provides a framework for resolving such conflicts of rights and for identifying when and how it is most appropriate to act. In my paper I’ve used the European Data Protection Directive (95/46/EC) as a guide, but having discussed my approach with incident response teams from other continents it seems that the same analysis should work for other privacy frameworks as well. In particular, although the Directive deals specifically with the privacy of individuals, its principles also seem to work for information about organisations and, indeed, for information that may affect an individual without being “associated with” them in the strict sense of the Directive.

The approach suggested in the paper is to consider the harm that may be caused by each action taken by the Incident Response Team (e.g. inspecting a phishing site’s logs, informing victims that their credentials have been compromised, or asking for help in analysing the phishing kit), and to ensure that this is justified by the seriousness of the harm that may occur if the Team does not act. Before taking an action that may cause a serious privacy breach, a Team must be sure that the harm it is aiming to prevent is at least as serious. In assessing the harm caused by a Team’s action the paper suggests three scales: what sort of identifiers are involved (within constituency, outside constituency, aggregated, none), how widely these will be disclosed (public, trusted community, affected service, responsible organisation, user, none), and whether the information was gathered through an unplanned process or a planned one where privacy considerations could be included in the design of the information gathering. To assess the seriousness of the incident that the Team is trying to prevent there are two scales: potential impact (global compromise, local compromise/DoS, personal loss) and who is affected (CSIRT, constituency, recipient of a warning, general Internet). Justifying an action at the serious (left) end of the privacy scales requires an incident at the serious (left) end of the incident scales as well.
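To make the comparison concrete, here is a minimal sketch in Python of how that justification check might be encoded. The class names, the orderings, and the collapsing of each scale to a single number are illustrative assumptions of mine rather than definitions from the paper, which treats the scales qualitatively; higher values here correspond to the more serious (left-hand) end of each scale.

```python
from enum import IntEnum

# Illustrative encodings of the paper's scales. Higher values represent the
# more serious (left-hand) end of each scale; the numeric mapping is an
# assumption made for this sketch, not part of the paper itself.

class IdentifierScope(IntEnum):
    NONE = 0
    AGGREGATED = 1
    OUTSIDE_CONSTITUENCY = 2
    WITHIN_CONSTITUENCY = 3

class Disclosure(IntEnum):
    NONE = 0
    USER = 1
    RESPONSIBLE_ORGANISATION = 2
    AFFECTED_SERVICE = 3
    TRUSTED_COMMUNITY = 4
    PUBLIC = 5

class IncidentImpact(IntEnum):
    PERSONAL_LOSS = 1
    LOCAL_COMPROMISE_OR_DOS = 2
    GLOBAL_COMPROMISE = 3

def action_harm(identifiers: IdentifierScope, disclosure: Disclosure) -> int:
    """Crude combined score for the privacy harm a response action may cause."""
    return identifiers + disclosure

def justified(identifiers: IdentifierScope, disclosure: Disclosure,
              impact: IncidentImpact) -> bool:
    """The action's potential harm should not outweigh the seriousness of the
    incident the Team is trying to prevent; the weighting is arbitrary here."""
    return impact * 3 >= action_harm(identifiers, disclosure)

# Example: publishing within-constituency identifiers is hard to justify for an
# incident that only causes personal loss, but sharing outside-constituency
# identifiers with a trusted community may be justified for a global compromise.
print(justified(IdentifierScope.WITHIN_CONSTITUENCY, Disclosure.PUBLIC,
                IncidentImpact.PERSONAL_LOSS))           # False
print(justified(IdentifierScope.OUTSIDE_CONSTITUENCY, Disclosure.TRUSTED_COMMUNITY,
                IncidentImpact.GLOBAL_COMPROMISE))       # True
```

The point of the sketch is only the shape of the reasoning: each scale position for the action is weighed against the seriousness of the incident, and the action goes ahead only if the balance favours acting.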

The paper applies this framework to a number of different examples of incidents and shows how it can sometimes be used to improve privacy protection while still dealing effectively with incidents. The new version adds examples of dealing with a DDoS attack and with automated incident handling. Comments, suggestions and new examples are all welcome.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
