Article 22 of the GDPR contains a new, and oddly-worded, “right not to be subject to a decision based solely on automated processing”. This only applies to decisions that “produce[] legal effects … or similarly significantly affect[]” the individual. Last year, the Article 29 Working Party’s draft guidance on interpreting this Article noted that an automated refusal of a bicycle hire – because of insufficient credit – might reach this threshold.
This raised the concern, discussed in our consultation response, that automated processes that the Working Party has previously approved of – such as automatically filtering e-mails for viruses and spam – might now require human intervention. They do, after all, aim to cause disadvantage to the person who hopes to hold your files to ransom.
Fortunately, the Working Party’s final guidance, published this week, clarifies that the threshold is, in fact, much higher than this. Their examples of “serious impactful” effects are now at the level of automated refusal of citizenship, social benefits or job opportunities. So automation to defend our systems, networks and data against attack should be well within the boundaries where normal data protection law, not Article 22’s special provisions, applies.
Interestingly, there’s also a suggestion that some flexibility may be allowed where the volume of data makes human inspection impractical. Although GDPR Recital 71 mentions ‘e-recruiting practices without any human intervention’, the example on page 23 of the guidance approves of automated short-listing where the volume of job applications makes it “not practically possible to identify fitting candidates without first using fully automated means to sift out irrelevant applications”.