An interesting virtual water-cooler discussion with colleagues who are exploring the potential of AI as a Service. They tested a selection of easily available cloud face-processing systems on a recording of one of our internal Zoom meetings, and were startled by the results.
Face detection wasn’t a surprise: everyone who has changed the background on a conference call has relied on software to pick a face out of its background. Identifying other objects in the picture, and estimating age and gender, we were also expecting. But the ability to attribute names (by comparing with publicly available photographs) and emotions is much more striking when you see it done to you, rather than just described.
It’s not always accurate, of course. My estimated age varied by ten years depending on the lighting level, and dropped by fifty when I tried on my COVID mask! The exec who was speaking was classified as mostly “angry” (we suspect he’d prefer “passionate”), and he might have hoped fewer of us would appear “neutral” or “sad” while listening. He did manage to change the emotion assigned to a colleague’s background picture of Mount Rushmore, though!
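To give a sense of how little effort this kind of analysis takes, here is a minimal sketch of the sort of call involved. The post doesn’t say which cloud services my colleagues actually used, so this assumes AWS Rekognition via boto3, with an illustrative file name for a frame exported from the recording; other providers offer very similar APIs.

```python
# Minimal sketch only: assumes AWS Rekognition via boto3 and suitable
# credentials. "meeting_frame.jpg" is an illustrative placeholder for a
# frame exported from the meeting recording.
import boto3

rekognition = boto3.client("rekognition")

with open("meeting_frame.jpg", "rb") as f:
    frame_bytes = f.read()

# Request the full set of face attributes: age range, gender, emotions, etc.
response = rekognition.detect_faces(
    Image={"Bytes": frame_bytes},
    Attributes=["ALL"],
)

for face in response["FaceDetails"]:
    age = face["AgeRange"]            # e.g. {"Low": 35, "High": 52}
    gender = face["Gender"]["Value"]  # "Male" or "Female"
    # Emotions come back with confidence scores ("ANGRY", "CALM", "SAD", ...)
    top_emotion = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(f"Age {age['Low']}-{age['High']}, {gender}, "
          f"looks {top_emotion['Type']} ({top_emotion['Confidence']:.0f}%)")
```

A few lines like these, run against a single exported frame, are all it takes; attributing names is a similarly small step once faces have been indexed against a collection of photographs.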
Face recognition and emotion detection could, of course, be valuable assistive technologies for those who have difficulty doing those things themselves. But they have also been banned by law in some states and contexts. Faces are especially sensitive, and specially protected, parts of our humanity.
So, some questions to consider before using any of these technologies:
- What are we trying to achieve? Can it be done another way? Why aren’t we doing that instead?
- Can we ensure everyone supports this? How will they respond if they don’t?
- What’s the risk, and how will we handle it, when (not if) the results come out wrong or discriminatory?
- Are we setting an example we’d be happy for others to follow? All the technology we used is available for students, staff, friends, colleagues and strangers to apply to pictures and videos of us, and to share the results, too.