Facial Expression Recognition Under Partial Occlusion from Virtual
Reality Headsets based on Transfer Learning
- URL: http://arxiv.org/abs/2008.05563v1
- Date: Wed, 12 Aug 2020 20:25:07 GMT
- Title: Facial Expression Recognition Under Partial Occlusion from Virtual
Reality Headsets based on Transfer Learning
- Authors: Bita Houshmand, Naimul Khan
- Abstract summary: Convolutional neural network based approaches have become widely adopted due to their proven applicability to the Facial Expression Recognition task.
However, recognizing facial expressions while wearing a head-mounted VR headset is a challenging task due to the upper half of the face being completely occluded.
We propose a geometric model to simulate occlusion resulting from a Samsung Gear VR headset that can be applied to existing FER datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Facial expressions of emotion are a major channel in our daily
communications, and they have been the subject of intense research in recent
years. To automatically infer facial expressions, convolutional neural network
based approaches have become widely adopted due to their proven applicability
to the Facial Expression Recognition (FER) task. On the other hand, Virtual
Reality (VR) has gained popularity as an immersive multimedia platform, where
FER can provide enriched media experiences. However, recognizing facial
expressions while wearing a head-mounted VR headset is a challenging task due
to the upper half of the face being completely occluded. In this paper we
attempt to overcome these issues and focus on facial expression recognition in
the presence of severe occlusion, where the user is wearing a head-mounted
display in a VR
setting. We propose a geometric model to simulate occlusion resulting from a
Samsung Gear VR headset that can be applied to existing FER datasets. Then, we
adopt a transfer learning approach, starting from two pretrained networks,
namely VGG and ResNet. We further fine-tune the networks on FER+ and RAF-DB
datasets. Experimental results show that our approach achieves comparable
results to existing methods while training on three modified benchmark datasets
that adhere to realistic occlusion resulting from wearing a commodity VR
headset. Code for this paper is available at:
https://github.com/bita-github/MRP-FER
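
As a rough illustration of the occlusion-simulation idea, the sketch below blacks out a headset-shaped rectangle anchored on the eye line of a face image. The mask proportions (`scale_w`, `scale_h`) and the landmark source are illustrative assumptions, not the paper's exact Samsung Gear VR geometry.

```python
# Illustrative sketch of simulating a VR-headset occlusion on a face image.
# The scale factors below are guesses for illustration, not the paper's
# measured Samsung Gear VR dimensions; eye coordinates are assumed to come
# from any facial landmark detector, which is not shown here.
import numpy as np

def occlude_upper_face(img: np.ndarray,
                       left_eye: tuple[float, float],
                       right_eye: tuple[float, float],
                       scale_w: float = 2.4,   # mask width  ~ 2.4x interocular distance (assumed)
                       scale_h: float = 1.1):  # mask height ~ 1.1x interocular distance (assumed)
    """Black out a headset-shaped rectangle centered on the eye line."""
    out = img.copy()
    (lx, ly), (rx, ry) = left_eye, right_eye
    iod = np.hypot(rx - lx, ry - ly)             # interocular distance
    cx, cy = (lx + rx) / 2.0, (ly + ry) / 2.0    # midpoint between the eyes
    half_w, half_h = scale_w * iod / 2, scale_h * iod / 2
    x0, x1 = int(max(cx - half_w, 0)), int(min(cx + half_w, img.shape[1]))
    y0, y1 = int(max(cy - half_h, 0)), int(min(cy + half_h, img.shape[0]))
    out[y0:y1, x0:x1] = 0                        # opaque mask over the upper face
    return out
```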
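The transfer-learning step can be sketched as replacing the classifier head of a pretrained network and fine-tuning it on the occluded face crops. The PyTorch snippet below uses an ImageNet-pretrained ResNet and assumed hyperparameters (learning rate, 8 FER+ emotion classes); it is a minimal sketch, not the authors' exact training configuration.

```python
# Minimal transfer-learning sketch: fine-tune an ImageNet-pretrained ResNet
# (the paper also uses VGG) on occluded FER images. NUM_CLASSES, the optimizer,
# and the learning rate are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 8  # FER+ emotion categories (assumed label set)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace classifier head

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of occluded face crops."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The same recipe applies to a VGG backbone by swapping in `models.vgg16` and replacing its `classifier` module instead of `fc`.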
Related papers
- EmojiHeroVR: A Study on Facial Expression Recognition under Partial Occlusion from Head-Mounted Displays [4.095418032380801]
EmoHeVRDB (EmojiHeroVR Database) includes 3,556 labeled facial images of 1,778 reenacted emotions.
EmoHeVRDB also includes data on the activations of 63 facial expressions captured via the Meta Quest Pro VR headset.
The best model achieved an accuracy of 69.84% on the test set.
arXiv Detail & Related papers (2024-10-04T11:29:04Z) - GeneFace: Generalized and High-Fidelity Audio-Driven 3D Talking Face
Synthesis [62.297513028116576]
GeneFace is a general and high-fidelity NeRF-based talking face generation method.
A head-aware torso-NeRF is proposed to eliminate the head-torso separation problem.
arXiv Detail & Related papers (2023-01-31T05:56:06Z) - Towards a Pipeline for Real-Time Visualization of Faces for VR-based
Telepresence and Live Broadcasting Utilizing Neural Rendering [58.720142291102135]
Head-mounted displays (HMDs) for Virtual Reality pose a considerable obstacle to realistic face-to-face conversation in VR.
We present an approach that focuses on low-cost hardware and can be used on a commodity gaming computer with a single GPU.
arXiv Detail & Related papers (2023-01-04T08:49:51Z) - Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z) - Multiface: A Dataset for Neural Face Rendering [108.44505415073579]
In this work, we present Multiface, a new multi-view, high-resolution human face dataset.
We introduce Mugsy, a large scale multi-camera apparatus to capture high-resolution synchronized videos of a facial performance.
The goal of Multiface is to close the gap in accessibility to high quality data in the academic community and to enable research in VR telepresence.
arXiv Detail & Related papers (2022-07-22T17:55:39Z) - Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual
Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z) - Real or Virtual? Using Brain Activity Patterns to differentiate Attended
Targets during Augmented Reality Scenarios [10.739605873338592]
We use machine learning techniques to classify electroencephalographic (EEG) data collected in Augmented Reality scenarios.
A shallow convolutional neural net classified 3 second data windows from 20 participants in a person-dependent manner.
arXiv Detail & Related papers (2021-01-12T19:08:39Z) - Unmasking Communication Partners: A Low-Cost AI Solution for Digitally
Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.