Recently I had a thought-provoking discussion on Twitter (thanks to my guides) on the practice of setting your users phishing tests: sending them e-mails that tempt them to do unsafe things with their passwords, then providing feedback. I’ve always been deeply ambivalent about this. Identifying phishing messages is hard (see how you do on OpenDNS’s quiz), and creating “teachable moments” may well be a good way to help us all learn. But if what we learn is “can’t trust IT, they’re out to trick us” or “this looks like a phishing mail, but it’s probably only IT running another test” then it will have gone horribly wrong.
It seems to me that the difference between success and failure is going to be less about technology and much more about how the organisation treats the exercise. Whether you want to host a programme in-house or use a commercial service, there are plenty of technology options available. So here are some very tentative thoughts on how we might make success more likely. I’d love to hear if anyone has tried these and whether or not they worked.
Fundamentally, the word “test” worries me. We all get plenty of phishing tests in our inboxes already. And some of us who are caught out by those will then report ourselves to the helpdesk. If we’re running an internal exercise, we ought to be doing something different: first motivating users to look out for phish, and second improving their ability to accurately distinguish phish from genuine e-mails. Shaming (either privately or publicly) those who fall for frauds doesn’t seem a great way to do either of those. Clearly they need to have training materials brought to their attention, but that can be done within the computerised part of the system (“you clicked on a phishing link, here’s how not to fall for it next time…”). So I wonder whether the organisation actually needs to know the identities of those who clicked at all. Statistics might well be useful, not least to see whether the organisation overall is reducing its risks, but might users view the exercise less negatively if we promise that that’s all we’ll collect? That does mean we can’t use the exercise results to target those who just can’t help clicking, but we can probably find them already in our helpdesk or system logs.
On the other hand, we do want to recognise the individuals who can quickly and accurately spot and report phishing e-mails, helping to keep both themselves and others safer online. That behaviour is well worth rewarding, whether the phish they report are real ones or part of the exercise. Rewards – whether traditional chocolate or twenty-first century “gamification” – feel like a promising area to investigate. And if those rewards are public, then we need to support their recipients too. If we get the exercise right, then colleagues will be asking them “so how do you tell the difference?”. If that happens, then the exercises really have been a success, and maybe we won’t need to run them any more!
UPDATE: 2020. A news story about a phishing test gone wrong has added another thought. To be effective, your test has to be plausible to all its recipients. If your “hook” is so outrageous that recipients tweet about it – either in admiration or disgust – then you’ll never know whether the latecomers actually detected it, or were forewarned on social media!