Reading yet another paper on privacy and big data that concluded that processing should be based on the individual’s consent, I was struck by how much that approach limits the scope and powers of privacy regulators. When consent is used to justify processing, pretty much the only question for regulators is whether the consent was fairly obtained – effectively they are reduced to commenting and ruling on privacy notices. And, indeed, a surprising number of recent opinions and cases do seem to be about physical and digital signage.
But in an area as complicated as big data, where both the potential risks and benefits to individuals and society are huge, I’d like privacy regulators to be doing more than that. It seems pretty clear that some possible uses of big data should be prohibited as harmful to individuals and society, no matter how persuasive the privacy notice. Conversely, there are other uses whose benefits to both should legitimise them without everyone having to agree individually. Privacy regulators ought, I think, to be playing a key role in those decisions – something that invoking “consent” prevents them from doing.
There is an existing legal provision that would let regulators discuss much meatier questions: whether processing is “necessary for a legitimate interest” and whether that interest is “overridden by the fundamental rights of the individual”. Until recently, however, it hasn’t been much used. The Article 29 Working Party’s Opinion on Legitimate Interests is a promising start, but it would be good to see regulators routinely discussing new types of processing in those terms. Looking at big data, and other technologies with complex privacy effects, explicitly in terms of the benefits they might provide and the harms they might cause – maximising the former and minimising the latter – seems a much better way to protect privacy than simply handing the question to individuals and then considering, after it is too late, whether or not their consent was fairly obtained.