Aussie Researchers Have Created Face-Controlled VR Tech, in a Massive Win for Accessibility


Researchers have worked out a way to interact with virtual reality using facial expressions.

A team of researchers from Australia, New Zealand and India is adapting facial recognition technology so that it can be used in place of handheld controllers or touchpads within a virtual reality environment.

Controller inputs within virtual reality environments aren’t especially accessible. The most widely used headset, the Meta Quest 2, comes with two controllers, one for each hand, each requiring a grip that uses all of your fingers. Buttons are positioned around the grip at key interaction spots: your trigger finger, your ring and little fingers, and your thumb.

Given the recent excitement around virtual reality, attached to the “metaverse” buzzword but also driven by gaming and productivity applications, more accessible inputs would be a clear improvement to the technology.

This is why I’m so excited about this study. The researchers have managed to trigger specific actions in virtual reality environments using facial expressions as inputs. Neural processing techniques detect changes in facial expression via an electroencephalography (EEG) headset, which monitors electrical activity in the brain.

“The primary motivation of this work was to make the metaverse more accessible and inclusive,” says Dr Arindam Dey from the University of Queensland, who led the project, authored its journal article and invented the technique.

“At the same time, facial expressions can also be used to enable interactions like kissing and blowing air within the virtual environments in a more realistic way than now.”

It’s a world first: basic input control in a VR headset driven by the user’s facial movements.

“A smile was used to trigger the ‘move’ command; a frown for the ‘stop’ command and a clench for the ‘action’ command, in place of a handheld controller performing these actions,” added Professor Mark Billinghurst from the University of South Australia.

“Essentially we are capturing common facial expressions such as anger, happiness and surprise and implementing them in a virtual reality environment.”

To support these face inputs, the researchers created environments designed to provoke users’ emotional states. One such environment tasked users with catching butterflies with a net.

In this environment, smiling made the user move, while frowning made them stop. In another environment, a virtual reality workshop where users had to pick up objects, a clenched facial expression triggered the ‘action’ input, picking the object up.
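The expression-to-command mapping the researchers describe can be sketched as a simple lookup. This is purely illustrative: the function and label names below are assumptions for the sake of the sketch, not the team’s actual code, and a real system would sit downstream of the EEG classification step.

```python
# Hypothetical sketch of the mapping described in the study:
# smile -> 'move', frown -> 'stop', clench -> 'action'.
# Expression labels and command names are illustrative assumptions.
EXPRESSION_COMMANDS = {
    "smile": "move",
    "frown": "stop",
    "clench": "action",
}

def command_for(expression: str) -> str:
    """Return the VR command for a detected expression, or 'idle' if unrecognised."""
    return EXPRESSION_COMMANDS.get(expression, "idle")

print(command_for("smile"))   # move
print(command_for("clench"))  # action
```

In practice the hard part is the upstream step, classifying the expression from EEG signals; once that classification exists, dispatching it to a VR command is as simple as this lookup.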

“Overall, we expected the handheld controllers to perform better as they are a more intuitive method than facial expressions, however people reported feeling more immersed in the VR experiences controlled by facial expressions,” added Billinghurst.

Further research is set to make the technology more “user friendly”, according to Billinghurst, as although it gives the user a more realistic experience, it’s hard work for the brain.

The technology could also be adapted to mirror facial expressions within online VR environments, showing your emotions on your virtual avatar.

The findings of this study have been published in the International Journal of Human-Computer Studies.

This article has been updated since it was first published.