
Privacy Law Amendments Could Hinder Response to Privacy Incidents

One of the areas of network operations where it’s particularly tricky to get legislation right is incident response, and recent amendments proposed by the European Parliament to the draft Data Protection Regulation (warning: 200-page PDF) illustrate why.

Most incidents involve computers, passwords, credit card numbers and so on falling into the hands of the wrong people. That’s clearly a serious privacy problem for the legitimate owners and users, so a law that aims to protect privacy ought to help incident response teams do their work of detecting incidents, informing the victims and helping them recover. The Commission’s draft Regulation recognises (in Recital 39) that:

The processing of data to the extent strictly necessary for the purposes of ensuring network and information security … by public authorities, Computer Emergency Response Teams – CERTs, Computer Security Incident Response Teams – CSIRTs, providers of electronic communications networks and services and by providers of security technologies and services, constitutes a legitimate interest of the concerned data controller.

The Parliament’s amendments add some rather specific examples of attacks, but then say that the permission should only apply “in specific incidents”. Unfortunately this seems to create a chicken-and-egg situation, because you need to collect data (for example about network flows) and analyse it for anomalies in order to find out about incidents in the first place. Knowing that there has been an incident should lead to (and justify) additional processing of the specific information related to it, but if that’s the intention this seems an unfortunate way to phrase it.
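
To make that chicken-and-egg concrete, here is a minimal sketch (in Python, with invented field names and an arbitrary threshold) of the kind of routine flow monitoring the Recital covers: every record, including traffic from entirely innocent users, has to be processed before any “specific incident” can be identified.

```python
from collections import Counter

def find_anomalies(flow_records, threshold=1_000_000_000):
    """flow_records: iterable of dicts with 'src_ip' and 'bytes' keys (an assumed format)."""
    traffic_per_source = Counter()
    for record in flow_records:
        # Every record is processed here, long before any incident is known about.
        traffic_per_source[record["src_ip"]] += record["bytes"]
    # Only now do unusually busy sources - candidate incidents - emerge.
    return {ip: total for ip, total in traffic_per_source.items() if total > threshold}

flows = [
    {"src_ip": "198.51.100.4", "bytes": 1_200},        # ordinary browsing
    {"src_ip": "203.0.113.9", "bytes": 4_800_000_000}, # looks like a flood or exfiltration
]
print(find_anomalies(flows))  # {'203.0.113.9': 4800000000}
```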

In fact one of the most common questions in incident response is which information is covered by personal data law anyway. European law has never been entirely clear whether an IP address counts as personal data (the Commission’s draft unhelpfully says in Recital 24 that they “need not necessarily be considered as personal data in all circumstances”). The Parliament try to improve on this, suggesting that identifiers will be personal data if they “can be used to single out natural persons”, even if those natural persons can’t be identified. Only if “identifiers demonstrably do no[t] relate to natural persons” will they not be considered personal data. However the Parliament’s example then restores the confusion by citing “IP addresses used by companies” – do they mean addresses of servers, NAT devices or DHCP pools? Under UK law the third, and possibly the second, of those would be personal data in the hands of the company, though probably not of anyone else. And, unfortunately for anyone trying to comply with the law, once they leave the company all those addresses look pretty much like any others; they certainly aren’t demonstrably different.
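
A hypothetical illustration (with invented data) of why possession matters: inside the company a DHCP lease log can link an address to a person, while an outside observer sees only the bare number and cannot tell a DHCP client from a NAT gateway or a server.

```python
# What the company itself might hold (invented example data)
dhcp_leases = {
    ("192.0.2.57", "2013-11-05 09:14"): "managed laptop assigned to j.smith",
}

def who_used(ip, timestamp):
    """Inside the company: the lease log can single out, and often identify, a person."""
    return dhcp_leases.get((ip, timestamp), "no record")

def outside_view(ip):
    """Outside the company: nothing about the address itself reveals its role."""
    return f"{ip}: origin unknown (server? NAT device? DHCP client?)"

print(who_used("192.0.2.57", "2013-11-05 09:14"))  # identifiable in the company's hands
print(outside_view("192.0.2.57"))                  # indistinguishable to everyone else
```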

The Parliament do add in Article 4(2a) a new concept of a ‘pseudonym’, which is an identifier that can be used to single out but not to identify. The lack of that distinction in the current Directive causes a lot of problems. Unfortunately IP addresses don’t qualify as pseudonyms under this definition, as they are not “specific to one given context”. And in fact the new definition is used very little in the rest of the Regulation, despite a suggestion in the commentary that pseudonyms offer “alleviations with regard to the obligations for the data controller”. The only change I can see is in Article 10, which already recognised that there are identifiers that don’t allow identification (like IP addresses) and excused data controllers from those legal duties that such identifiers make impossible to fulfil (e.g. proactively communicating with the data subject).
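
For illustration only, here is a rough sketch of the kind of identifier the Article 4(2a) definition appears to describe, built with a keyed hash so that the token is specific to one context; the raw IP address fails the definition precisely because it is the same wherever it appears.

```python
import hashlib
import hmac

def pseudonymise(ip, context_key):
    """Return a token that can single the address out within one context
    (defined by context_key) without revealing the address itself."""
    return hmac.new(context_key, ip.encode(), hashlib.sha256).hexdigest()[:16]

ip = "203.0.113.9"
print(pseudonymise(ip, b"netflow-archive"))  # usable for matching within this dataset
print(pseudonymise(ip, b"web-server-logs"))  # a different token, so not linkable across contexts
# The raw IP address, by contrast, is identical in every dataset it appears in,
# which is why it is not "specific to one given context".
```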

So it seems that incident response teams will need to treat most of the things they deal with as personal data. That means they need a justification for processing them, and Recital 39 suggests that this should be the legitimate interests of the team, as data controller (Article 6(f) in the Regulation, a new Article 6(1a) in the Parliament’s amendments). However the Parliament seem concerned that this justification has been abused in the past (personally I’ve seen many more abuses of the “consent” justification), so have added extra conditions to it. Article 6(1a) requires that anyone using the justification must “separately and explicitly” inform all those whose data they process. For some incident response data, such as IP addresses, that won’t be technically possible, so presumably the obligation is waived under Article 10. But if investigating a compromised system that has been used for spamming means you have to send another e-mail to every recipient of the spam, just to let them know that you are processing their personal data, then the notification makes the incident worse, not better.

Another, apparently minor, change to the wording of the justification will further restrict what incident response teams can do to protect privacy, though it seems this one was made by the Commission rather than the Parliament. Compromised computers are frequently used in phishing attacks to collect passwords (for example for on-line banking) and credit card numbers. Many incident response teams will voluntarily let the affected services know about these, even though it doesn’t affect the security of the incident response team’s own network. Under Article 7(f) of the current Data Protection Directive that is clearly lawful because the team is allowed to process personal data “in the legitimate interests of the third party to whom the data are disclosed”. Unfortunately that part of the justification seems to have been removed, and I can’t see any other justification that covers this situation. So an action that currently contributes to privacy could become unlawful under this change.

The legitimate interests justification has always been more limited than the others because it requires both that processing is “necessary” and, even if necessary, that it does not override the rights of the individual. The Parliament’s amendments would require those using the justification to publish how they were protecting the individuals’ rights (I made some suggestions on this in a paper for TERENA’s CSIRT Task Force). And, to “give clearer guidance and provide legal certainty”, the Parliament have provided two lists: one in Article 6(1b) of circumstances in which the controller’s legitimate interests will “as a rule” prevail and one in Article 6(1c) of circumstances when the individual’s fundamental rights will. There’s no indication of what should happen when an activity appears on neither list or, as could easily happen, on both. Compared with the current law, where organisations need to consider the balance between the two factors for each activity, the lists seem to me to reduce, rather than improve, privacy protection. Someone who has been harmed by activity on the Internet might note that the first list includes “necessary for the enforcement of the legal claims of the data controller…, or for preventing or limiting damage by the data subject to the controller”, and conclude that any privacy-invading action to protect their systems or legal interests will be permitted. Under the current Directive there have been a number of European cases making very careful and detailed judgments on when privacy may, and may not, be required to give way to other legal rights. It would be unfortunate if a privacy law were to replace those with something that looks like a blanket permission.

In fact, on closer inspection, that item on the list won’t cover most incident response activities anyway, since it’s not usually the data subject (the registered user of the IP address) who is causing damage to networks and services but someone else making unauthorised use of their computer.

In this article I’ve concentrated on just one internet service, though one that’s widely recognised as essential to protect the security of individuals, organisations and governments on-line. In legislation that aims to regulate so widely (essentially any activities using internet protocols will be covered) and at such a fine level of detail, it seems inevitable that there will be similar unexpected and undesirable consequences for many others. It’s important that as many as possible of these are spotted and fixed before the proposal becomes law.

[Summaries of the amendments have been published by Hogan Lovells and Amberhawk]

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
