Generating Diverse 3D Reconstructions from a Single Occluded Face Image
- URL: http://arxiv.org/abs/2112.00879v1
- Date: Wed, 1 Dec 2021 23:13:49 GMT
- Title: Generating Diverse 3D Reconstructions from a Single Occluded Face Image
- Authors: Rahul Dey and Vishnu Naresh Boddeti
- Abstract summary: We present Diverse3DFace, which is designed to simultaneously generate a diverse and realistic set of 3D reconstructions from a single occluded face image.
On face images occluded by masks, glasses, and other random objects, Diverse3DFace generates a distribution of 3D shapes having ~50% higher diversity on the occluded regions compared to the baselines.
- Score: 18.073864874996534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Occlusions are a common occurrence in unconstrained face images. Single image
3D reconstruction from such face images often suffers from corruption due to
the presence of occlusions. Furthermore, while a plurality of 3D
reconstructions is plausible in the occluded regions, existing approaches are
limited to generating only a single solution. To address both of these
challenges, we present Diverse3DFace, which is specifically designed to
simultaneously generate a diverse and realistic set of 3D reconstructions from
a single occluded face image. It consists of three components: a global+local
shape fitting process, a graph neural network-based mesh VAE, and a
Determinantal Point Process (DPP)-based, diversity-promoting iterative optimization
procedure. Quantitative and qualitative comparisons of 3D reconstruction on
occluded faces show that Diverse3DFace can estimate 3D shapes that are
consistent with the visible regions in the target image while exhibiting high,
yet realistic, levels of diversity on the occluded regions. On face images
occluded by masks, glasses, and other random objects, Diverse3DFace generates a
distribution of 3D shapes having ~50% higher diversity on the occluded regions
compared to the baselines. Moreover, our closest sample to the ground truth has
~40% lower MSE than the singular reconstructions by existing approaches.
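The DPP component promotes diversity because a Determinantal Point Process assigns higher probability to sets of items that are dissimilar under a similarity kernel. A minimal sketch of such a diversity objective (a generic illustration with an assumed RBF kernel, not the paper's exact formulation):

```python
import numpy as np

def dpp_diversity(samples, sigma=1.0):
    """Log-determinant diversity of a set of candidate shapes.

    samples: (k, d) array, each row a flattened candidate mesh
    (e.g., hypothetical vertex offsets on the occluded region).
    Maximizing log det(L) of the similarity kernel L pushes the
    candidates apart, which is the core DPP diversity idea.
    """
    # RBF similarity kernel between candidates (a common choice;
    # the paper's actual kernel may differ).
    sq_dists = np.sum((samples[:, None, :] - samples[None, :, :]) ** 2, axis=-1)
    L = np.exp(-sq_dists / (2.0 * sigma ** 2))
    # Small jitter keeps the kernel positive definite.
    L += 1e-6 * np.eye(len(samples))
    _, logdet = np.linalg.slogdet(L)
    return logdet

rng = np.random.default_rng(0)
spread = rng.normal(scale=1.0, size=(5, 32))    # well-separated candidates
clumped = rng.normal(scale=0.01, size=(5, 32))  # near-duplicate candidates
print(dpp_diversity(spread) > dpp_diversity(clumped))
```

In an iterative optimization such as the one described above, this log-determinant term would be added to the data-fidelity loss on the visible regions, trading off consistency with the image against diversity in the occluded regions.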
Related papers
- OFER: Occluded Face Expression Reconstruction [16.06622406877353]
We introduce OFER, a novel approach for single image 3D face reconstruction that can generate plausible, diverse, and expressive 3D faces.
We propose a novel ranking mechanism that sorts the outputs of the shape diffusion network based on the predicted shape accuracy scores to select the best match.
arXiv Detail & Related papers (2024-10-29T00:21:26Z) - Single Image, Any Face: Generalisable 3D Face Generation [59.9369171926757]
We propose a novel model, Gen3D-Face, which generates 3D human faces with unconstrained single image input.
To the best of our knowledge, this is the first attempt and benchmark for creating photorealistic 3D human face avatars from single images.
arXiv Detail & Related papers (2024-09-25T14:56:37Z) - High-fidelity 3D GAN Inversion by Pseudo-multi-view Optimization [51.878078860524795]
We present a high-fidelity 3D generative adversarial network (GAN) inversion framework that can synthesize photo-realistic novel views.
Our approach enables high-fidelity 3D rendering from a single image, which is promising for various applications of AI-generated 3D content.
arXiv Detail & Related papers (2022-11-28T18:59:52Z) - Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z) - Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z) - Weakly-Supervised Multi-Face 3D Reconstruction [45.864415499303405]
We propose an effective end-to-end framework for multi-face 3D reconstruction.
We employ the same global camera model for the reconstructed faces in each image, which makes it possible to recover the relative head positions and orientations in the 3D scene.
arXiv Detail & Related papers (2021-01-06T13:15:21Z) - OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z) - Adaptive 3D Face Reconstruction from a Single Image [45.736818498242016]
We propose a novel joint 2D and 3D optimization method to adaptively reconstruct 3D face shapes from a single image.
Experimental results on multiple datasets demonstrate that our method can generate high-quality reconstruction from a single color image.
arXiv Detail & Related papers (2020-07-08T09:35:26Z) - AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.