Facial De-occlusion Network for Virtual Telepresence Systems
- URL: http://arxiv.org/abs/2210.12622v1
- Date: Sun, 23 Oct 2022 05:34:17 GMT
- Title: Facial De-occlusion Network for Virtual Telepresence Systems
- Authors: Surabhi Gupta and Ashwath Shetty and Avinash Sharma
- Abstract summary: State-of-the-art image inpainting methods for de-occluding the eye region do not give usable results.
We propose a working solution to this problem, enabling the use of a real-time, photo-realistic de-occluded face of the user in VR settings.
- Score: 6.501857679289835
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To see what is not in the image is one of the broader missions of computer
vision. Technology to inpaint images has made significant progress with the
advent of deep learning. This paper proposes a method to tackle occlusion
specific to human faces. Virtual presence is a promising direction in
communication and recreation for the future. However, Virtual Reality (VR)
headsets occlude a significant portion of the face, hindering the
photo-realistic appearance of the face in the virtual world. State-of-the-art
image inpainting methods for de-occluding the eye region do not give usable
results. To this end, we propose a working solution that enables the use of a
real-time, photo-realistic de-occluded face of the user in VR settings.
Related papers
- VIGFace: Virtual Identity Generation Model for Face Image Synthesis [13.81887339529775]
We propose VIGFace, a novel framework capable of generating synthetic facial images.
It allows for creating virtual facial images without concerns about portrait rights.
It serves as an effective augmentation method by incorporating real existing images.
arXiv Detail & Related papers (2024-03-13T06:11:41Z)
- Towards a Pipeline for Real-Time Visualization of Faces for VR-based Telepresence and Live Broadcasting Utilizing Neural Rendering [58.720142291102135]
Head-mounted displays (HMDs) for Virtual Reality pose a considerable obstacle for a realistic face-to-face conversation in VR.
We present an approach that focuses on low-cost hardware and can be used on a commodity gaming computer with a single GPU.
arXiv Detail & Related papers (2023-01-04T08:49:51Z)
- NeuralPassthrough: Learned Real-Time View Synthesis for VR [3.907767419763815]
We propose the first learned passthrough method and assess its performance using a custom VR headset with a stereo pair of RGB cameras.
We demonstrate that our learned passthrough method delivers superior image quality compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-07-05T17:39:22Z)
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- Pixel Codec Avatars [99.36561532588831]
Pixel Codec Avatars (PiCA) is a deep generative model of 3D human faces.
On a single Oculus Quest 2 mobile VR headset, 5 avatars are rendered in realtime in the same scene.
arXiv Detail & Related papers (2021-04-09T23:17:36Z)
- High-fidelity Face Tracking for AR/VR via Deep Lighting Adaptation [117.32310997522394]
3D video avatars can empower virtual communications by providing compression, privacy, entertainment, and a sense of presence in AR/VR.
Existing person-specific 3D models are not robust to lighting, hence their results typically miss subtle facial behaviors and cause artifacts in the avatar.
This paper addresses these limitations by learning a deep lighting model that, in combination with a high-quality 3D face tracking algorithm, provides a method for subtle and robust facial motion transfer from regular video to a 3D photo-realistic avatar.
arXiv Detail & Related papers (2021-03-29T18:33:49Z)
- Unmasking Communication Partners: A Low-Cost AI Solution for Digitally Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z)
- Facial Expression Recognition Under Partial Occlusion from Virtual Reality Headsets based on Transfer Learning [0.0]
Convolutional neural network based approaches have become widely adopted due to their proven applicability to the Facial Expression Recognition (FER) task.
However, recognizing facial expression while wearing a head-mounted VR headset is a challenging task due to the upper half of the face being completely occluded.
We propose a geometric model to simulate occlusion resulting from a Samsung Gear VR headset that can be applied to existing FER datasets.
arXiv Detail & Related papers (2020-08-12T20:25:07Z)
- State of the Art on Neural Rendering [141.22760314536438]
We focus on approaches that combine classic computer graphics techniques with deep generative models to obtain controllable and photo-realistic outputs.
This report is focused on the many important use cases for the described algorithms such as novel view synthesis, semantic photo manipulation, facial and body reenactment, relighting, free-viewpoint video, and the creation of photo-realistic avatars for virtual and augmented reality telepresence.
arXiv Detail & Related papers (2020-04-08T04:36:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.