Sorry, you woke me…

Recently I was in a video-conference where Apple’s “smart” assistant kept popping up on the presenter’s shared screen. Another delegate realised this happened whenever the word “theory” was spoken: close enough, it seems, to the assistant’s wake-word…

These events – which I refer to as “false-wakes” – are a privacy risk: maybe a small one, but that depends very much on the nature of the conversations going on around the device. So it would be good if privacy law helped suppliers to reduce them.

However, the recent guidance from European privacy regulators seems to have the opposite effect. It treats each “wake-word” as granting consent for the subsequent processing, which means that even detecting a false-wake involves unlawful processing: it turns out (in retrospect) that the speaker had no intention of granting consent, so there was no lawful basis for the processing. It also makes it impossible to use the recorded data to tune the system to reduce future false-wakes, because the only possible response to not having a lawful basis is to delete the data immediately (and probably silently, to avoid admitting the law-breaking). So a position has been created where the Lawfulness Principle is actively discouraging compliance with the Accuracy one.

Having written about using GDPR as a design tool, I wondered whether I could do better. Is there a way:

  • To avoid having to tolerate unlawful processing (even if time-limited);
  • To provide a mechanism whereby speakers can let their recordings be used to improve accuracy; and, incidentally
  • At least partially, to address the EDPB’s overall concerns about the invisibility of voice-triggered devices?

On closer reading, there’s only one lawful basis that can possibly cover a false-wake spoken by someone who hasn’t previously interacted with the device. Consent or contract might work for the person who installed the system, for someone who has enrolled in its voice recognition process, or for someone who intended to issue a command. But those require the speaker (as “data subject”) to have done, known or intended something. For accidental processing, perhaps of a visitor’s voice, the only possible lawful basis is “legitimate interest”.

That immediately triggers the notice obligation in Article 13, to provide information “at the time when personal data are obtained”. That probably shouldn’t mean reading out a complete privacy notice (unlikely to meet the requirement for intelligibility), but the device should draw attention to itself and to where that information can be found. Suggestions for the latter have included QR codes or other layered notices.
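
As one concrete (and entirely illustrative) possibility, a device could display or print a QR code pointing at its layered privacy notice. Here is a minimal sketch in Python, assuming the third-party qrcode package; the notice URL is a made-up placeholder, not a real address.

```python
# Illustrative only: generate a QR code linking to a device's layered
# privacy notice, so a nearby speaker can find the Article 13 information.
# Requires the third-party "qrcode" package (pip install qrcode[pil]).
import qrcode

NOTICE_URL = "https://example.com/voice-device/privacy-notice"  # hypothetical

img = qrcode.make(NOTICE_URL)        # returns a PIL image of the QR code
img.save("privacy-notice-qr.png")    # print on the device or its packaging
print("QR code written to privacy-notice-qr.png")
```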

To meet the storage limitation principle, and give some chance of satisfying the legitimate interests rights-balancing test, the legitimate interest in listening for a wake-word should terminate as soon as possible. However, that might still leave enough time for the system to offer the speaker the choice of whether a short sound recording may be used for the new purpose of reducing the likelihood of future false-wakes. That is probably best offered as an opt-in consent dialogue (“Sorry, you woke me by mistake, may I process a five-second recording to work out why?”), where anything other than a clearly-spoken “yes” results in the recording being deleted. A further refinement might be to play the recording, so the speaker knows what will be shared.
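
To make that flow concrete, here is a minimal sketch in Python of how a device might handle it. Everything in it is hypothetical: the Clip type, the ask and listen callbacks, and the deletion-by-default logic are illustrative assumptions, not any real assistant’s API.

```python
# Minimal sketch of the false-wake flow described above. All names and
# interfaces here are hypothetical stand-ins, not a real assistant's API.
from dataclasses import dataclass


@dataclass
class Clip:
    """Short audio buffer held only in memory until a decision is made."""
    seconds: float
    samples: bytes


def handle_wake(clip: Clip, was_intended: bool, ask, listen) -> str:
    """Decide what happens to the audio captured around a wake event.

    Returns "command", "retained" or "deleted" so the caller can log the
    outcome without keeping the audio itself.
    """
    if was_intended:
        # Normal command path: consent or contract covers the processing.
        return "command"

    # False wake: legitimate interest covered the brief listening window,
    # but that interest ends here. Offer an opt-in for the new purpose of
    # reducing future false-wakes; the default outcome is deletion.
    ask("Sorry, you woke me by mistake. "
        "May I process a five-second recording to work out why?")
    reply = listen()  # transcription of the speaker's spoken reply, if any

    if reply is not None and reply.strip().lower() == "yes":
        return "retained"  # explicit opt-in: clip may be used for tuning
    return "deleted"       # anything other than a clear "yes": delete now


# Example: a visitor's "theory" triggers the device, and they stay silent.
if __name__ == "__main__":
    clip = Clip(seconds=5.0, samples=b"")
    outcome = handle_wake(clip, was_intended=False,
                          ask=print, listen=lambda: None)
    print(outcome)  # -> "deleted"
```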

So, yes, all three objectives can be achieved. Just pick the right legal basis, and the rest follows 🙂

By Andrew Cormack

I'm Chief Regulatory Advisor at Jisc, responsible for keeping an eye out for places where our ideas, services and products might raise regulatory issues. My aim is to fix either the product or service, or the regulation, before there's a painful bump!
