Some very interesting and positive messages came out of this week’s Future of Data Protection Forum. Interestingly, the forum didn’t just focus on the draft European Regulation: partly because the final state of that is still unclear, but also because there was general agreement that reputable organisations shouldn’t aim merely to comply with data protection law. A reputation for using personal data responsibly is going to be a key business asset, which means that privacy and data protection people need to be involved in senior-level discussions. Chief Privacy Officers/Data Protection Officers need to be able to propose business solutions, which will require knowledge of legal, ethical and engineering issues. In future the role isn’t going to be limited to handling Subject Access Requests!
The Regulation seems likely to favour self-regulatory schemes such as industry Codes of Conduct. However, the recent Schrems case has highlighted the risks of such schemes – Safe Harbor is a self-regulatory scheme too. Organisations need to develop and support strong schemes, allowing regulators to recognise good practice rather than resort to compliance box-ticking. Schemes must cover international transfers, as these are essential for most EU organisations: if providing appropriate safeguards requires action at the political and diplomatic level, then perhaps businesses should be making that case more strongly.
If organisations want to use personal data appropriately then Privacy Impact Assessments (PIAs) will play an increasing role. Again, these will not just be a compliance requirement, so they need to be built into project lifecycles like other risk management and security plans. PIAs should take place early, as soon as there is sufficient detail of a project to allow its privacy issues to be assessed and while it is still possible to adapt the project to include appropriate privacy-protecting measures. Scoping questions can be automated, and linked to data protection education – one organisation uses drop-down menus to let project planners identify likely risks and controls. More detailed discussions can be used to resolve competing issues.
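The drop-down approach described above can be sketched as a simple mapping from scoping answers to likely risks and suggested controls. This is a hypothetical illustration only – the questions, risks and controls below are invented examples, not taken from any particular organisation’s scheme:

```python
# Minimal sketch of an automated PIA scoping questionnaire.
# All questions, risks and controls here are illustrative,
# not drawn from any real organisation's scheme.

SCOPING_QUESTIONS = {
    "collects_personal_data": {
        "risk": "Processing personal data without a clear lawful basis",
        "control": "Document the lawful basis before collection starts",
    },
    "transfers_outside_eu": {
        "risk": "International transfer without appropriate safeguards",
        "control": "Use a recognised transfer mechanism (e.g. model clauses)",
    },
    "profiles_individuals": {
        "risk": "Profiling that individuals cannot understand or object to",
        "control": "Provide a clear opt-out and human review of decisions",
    },
}

def assess_project(answers):
    """Return likely risks and suggested controls for a project,
    given the planner's yes/no answers to the scoping questions."""
    findings = []
    for question, flagged in answers.items():
        if flagged and question in SCOPING_QUESTIONS:
            findings.append(SCOPING_QUESTIONS[question])
    return findings

# Example: a project that collects personal data and transfers it abroad.
report = assess_project({
    "collects_personal_data": True,
    "transfers_outside_eu": True,
    "profiles_individuals": False,
})
for finding in report:
    print(f"Risk: {finding['risk']} -> Control: {finding['control']}")
```

A questionnaire like this only handles the scoping stage; as the paragraph above notes, anything it flags would still feed into more detailed discussions to resolve competing issues.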
The link between incident response and privacy was made very strongly. High-profile data breaches mean that customers and journalists, not just regulators, now expect organisations to detect and respond quickly and effectively. Senior managers now need to be able to explain their organisation’s security measures in public. A number of organisations have arranged ethical hacking demonstrations and incident response war games for their senior management teams: it’s no longer hard to develop plausible, scary scenarios. These should be used to explore hard decisions – do we need to take the website offline yet? Post-breach reviews can be extremely valuable if they are open and honest: “what can we do to stop that happening again?”, not “who was to blame?”. Mandatory breach notification should mean regulators can help industry sectors improve their security (see for example ENISA’s work with telecommunications regulators), but there was concern that some may not have the resources or skills to make effective use of the increased scope of reports under the new Regulation.
Finally, Big Data was identified as another area where an ethical approach might be more helpful than a compliance one. The Regulation has a particularly complex and unclear mesh of requirements on purpose limitation, data retention, rights of deletion/objection and profiling. Regulated sectors may add further overlapping duties from their own compliance responsibilities. Fortunately the Regulation also supports high-level approaches such as PIAs and Privacy by Design, which may help in navigating this essential area. The law makes clear that “because marketing” will no longer be an adequate justification; customers and society will increasingly be applying their own “creepiness test” to all our activities.