What makes AI creepy?

There seems to be a widespread perception that “AI is creepy”. But at the same time as reacting strongly against an app that would check social media posts for signs that we were struggling to cope, we don’t think twice about the grammar checker that continually reads everything we type. I wondered why, and whether there are any rules of thumb we could use when proposing AI as a helpful assistant rather than a creepy intruder.

Using AI to process faces provides an informative range of examples. Automated face recognition to find criminals has had a strong negative reaction in the UK and is being banned in an increasing number of other states. But I accepted the same technology letting me board a plane in the US last year; was happy to join the shorter queue for an automated face/passport verification to get airside; and I hadn’t even thought about the face detection involved every time we insert an alternative background into a video call.

This suggests there are two significant factors: whether the AI operates continuously or only at times I choose; and how limited its function seems to be. Ideally that should be a technological limit, though more general technology may be acceptable if there are strong policies and sanctions for misuse. In a video conference I choose whether or not to use face detection, and separating a picture from the background has few other uses. A boarding corridor that identifies passing faces and checks they are on the correct flight is time-limited, and I could opt for a human document check instead. So:

                              Time-limited                           Always-on
  Single-function             Face detection (to blur background)    Grammar checker
                              Face verification (matches passport)
                              AI translation
  Multi-function (potential)  Boarding gate                          Automated face recognition
                                                                     Smart speaker
Checking this model with AI that processes words, rather than faces, suggests that it fits there too. Smart “speakers” (actually, it’s the smart microphone function that’s spooky) are always on and potentially unlimited in purpose – the common concern is “what else is it doing?”; grammar checkers are always on, but limited to a single function; voice bots that try to help us through phone menus are limited in time (though frustrating if they don’t offer a human fallback option); asking a translator to work on a text gives us control over both what and when we disclose.

Other factors that seem to increase the sensation of creepiness include:

  • Tasks that get too close to our identity as humans: e.g. identifying faces or emotions;
  • Decisions we feel should only be made by humans: e.g. on early release from prison; in education, decisions on course entry or continuation may be one of these;
  • Systems that are, or may learn to be, discriminatory (even identifying gender may be problematic);
  • Using information the individual might not have realised that we have, or not expected to be used in that way. Here a clear notice may satisfy the law, but may not remove the unease.

Any of these is likely to push our application down and to the right in the matrix, increasing the risk that it will be perceived as creepy. Conversely, the further up and left we can place our application, the more likely it will just be accepted for the contribution it makes to our daily lives.

For an excellent introduction to AI, with no mention of creepiness, see DSTL’s “Biscuit Book”.

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!

4 replies on “What makes AI creepy?”

Hi Andrew, I think there is an additional factor. I think there is some anchoring going on from how people first encounter the technology. I’ve no evidence at all for this, but when the initial instance is positive, and especially under the control of the person initiating it, then the technology is likely to continue to be viewed more positively. The examples you give are positive, helpful instances. Whereas if the first engagement is ‘done to’ the person, or a scare story in some news outlet, then I suspect the negative will continue to resonate.

Dan, I’m sure you’re right. My sister still hasn’t recovered from me driving her Siri/Alexa from the far end of a FaceTime! Not sure if she attributes the spookiness to it, or me, though…

The really funny thing is how often I tell the story and a device pipes up in the background! Did it to my boss last week…
