A colleague spotted an article suggesting, among other things, that Virtual Reality could provide a safe space for students to practise their soft skills. This can, of course, be done by classroom roleplay, but the possibility of making mistakes that fellow students will remember could well increase stress. This certainly chimes with feedback I received when suggesting that my team practised giving presentations in what seemed to me the “safe” environment of a company lunchtime chat: “no, we’d far rather have an audience of complete strangers”.
So what about an audience of avatars, whose memories can be wiped at the end of the session? The article suggests that AI can provide feedback on tone, body language and eye contact; even that the session could be replayed with the student taking another role and watching an avatar act out their own behaviour.
But this gets ethically interesting. This sort of recording and feedback involves what would normally be considered “high-risk” uses of AI, particularly the processing of faces and emotions. Conventional wisdom says that if that is to be done at all, there must be a lot of human oversight and involvement. But providing that involvement seems to break the private safe space, which was why we used VR in the first place. I was reminded of school language labs, where it was just me and the non-judgemental tape reels… until the teacher’s voice suddenly burst into my headset…
Giving the student the option to invite another human to view the recording seems fine: “I don’t understand the AI’s comments, please help”. But should the teacher listen in? Or intervene? What happens if the student starts to interact in ways that could harm themselves or others? There are also fascinating articles on how our interactions with devices can quickly become uncivil, because “it’s only a robot”. Can the VR system recognise those situations and, if so, what should it do?
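To make that trade-off concrete, here is a toy sketch in Python (all names hypothetical; this is not any real VR or classroom API) of the disclosure rule those questions imply: private by default, widened only by the student, with a narrow safety override whose rightness is exactly what is in question.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Access(Enum):
    STUDENT_ONLY = auto()  # default: the session belongs to the student alone
    SHARED = auto()        # the student has invited a human reviewer in


@dataclass
class SessionPolicy:
    """Toy disclosure policy: private by default, student-initiated sharing,
    plus a deliberate safety override for credible risk of harm."""
    access: Access = Access.STUDENT_ONLY
    safety_flagged: bool = False

    def invite_reviewer(self) -> None:
        # Only the student widens access: "I don't understand the AI's
        # comments, please help".
        self.access = Access.SHARED

    def flag_risk_of_harm(self) -> None:
        # Hypothetical detector output; *what to do next* is the open question.
        self.safety_flagged = True

    def teacher_may_view(self) -> bool:
        # The teacher sees nothing unless invited, or unless safety
        # is allowed to trump privacy.
        return self.access is Access.SHARED or self.safety_flagged


policy = SessionPolicy()
assert not policy.teacher_may_view()  # what happens in VR stays in VR...
policy.flag_risk_of_harm()
assert policy.teacher_may_view()      # ...unless harm is at stake
```

Even in this toy version, the awkward part is the last clause: deciding that a safety flag trumps privacy is a human policy choice, not something the code can settle.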
Should “what happens in VR” stay in VR? I don’t know…