There seems to be a widespread perception that “AI is creepy”. Yet while we react strongly against an app that would check social media posts for signs that we were struggling to cope, we don’t think twice about the grammar checker that continually reads everything we type. I wondered why, and whether there are any rules of thumb we could use when proposing AI as a helpful assistant rather than a creepy intruder.
Using AI to process faces provides an informative range of examples. Automated face recognition to find criminals has provoked a strong negative reaction in the UK and is being banned in an increasing number of other states. But I accepted the same technology letting me board a plane in the US last year; I was happy to join the shorter queue for an automated face/passport verification to get airside; and I hadn’t even thought about the face detection involved every time I insert an alternative background into a video call.
This suggests there are two significant factors: whether the AI operates continuously or only at times I choose; and how limited its function seems to be. Ideally that should be a technological limit, though more general technology may be acceptable if there are strong policies and sanctions for misuse. In a video conference I choose whether or not to use face detection, and separating a picture from the background has few other uses. A boarding corridor that identifies passing faces and checks they are on the correct flight is time-limited, and I could opt for a human document check instead. So:
|  | Operates when I choose | Operates continuously |
| --- | --- | --- |
| **Single-function** | Face detection (to blur background)<br>Face verification (matches passport) |  |
| **Multi-function (potential)** | Boarding gate | Automated face recognition |
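The two-factor model behind the matrix can be sketched as a tiny classifier. This is purely illustrative: the class, field names, and example flags below are my assumptions, not anything defined in the article.

```python
from dataclasses import dataclass

# Illustrative sketch of the article's two-factor model: an application
# moves toward the "creepy" corner of the matrix the more it is always
# on and the less limited its function. Names and flags are assumptions.

@dataclass
class AIApplication:
    name: str
    always_on: bool        # operates continuously vs. only when I choose
    multi_function: bool   # capability beyond its stated single purpose

def matrix_quadrant(app: AIApplication) -> str:
    """Place an application in the 2x2 matrix described above."""
    row = "multi-function" if app.multi_function else "single-function"
    col = "continuous" if app.always_on else "user-chosen"
    return f"{row} / {col}"

examples = [
    AIApplication("Face detection (blur background)", False, False),
    AIApplication("Boarding gate", False, True),
    AIApplication("Automated face recognition", True, True),
]

for app in examples:
    print(f"{app.name}: {matrix_quadrant(app)}")
```

Only “multi-function / continuous” lands in the bottom-right corner that the article identifies as the creepiest; everything else stays at least partly under the user’s control.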
Checking this model against AI that processes words, rather than faces, suggests that it holds there too:

- Smart “speakers” (actually, it’s the smart microphone function that’s spooky) are always on and potentially unlimited in purpose – the common concern is “what else is it doing?”;
- Grammar checkers are always on, but limited to a single function;
- Voice bots that try to help us through phone menus are limited in time (though frustrating if they don’t offer a human fallback option);
- Asking a translator to work on a text gives us control over both what we disclose and when.
Other factors that seem to increase the sensation of creepiness include:
- Tasks that get too close to our identity as humans: e.g. identifying faces or emotions;
- Decisions we feel should only be made by humans: early release from prison, for example; in education, decisions on course entry or continuation may fall into this category;
- Systems that are, or may learn to be, discriminatory (even identifying gender may be problematic);
- Using information the individual might not have realised we hold, or might not have expected to be used in that way. Here a clear notice may satisfy the law, but it may not remove the unease.
Any of these is likely to push our application down and to the right in the matrix, increasing the risk that it will be perceived as creepy. Conversely, the further up and to the left we can place our application, the more likely it is to be accepted simply for the contribution it makes to our daily lives.
For an excellent introduction to AI, with no mention of creepiness, see DSTL’s “Biscuit Book”.