Deep 3D Portrait from a Single Image
- URL: http://arxiv.org/abs/2004.11598v1
- Date: Fri, 24 Apr 2020 08:55:37 GMT
- Title: Deep 3D Portrait from a Single Image
- Authors: Sicheng Xu, Jiaolong Yang, Dong Chen, Fang Wen, Yu Deng, Yunde Jia,
Xin Tong
- Abstract summary: We present a learning-based approach for recovering the 3D geometry of a human head from a single portrait image.
A two-step geometry learning scheme is proposed to learn 3D head reconstruction from in-the-wild face images.
We evaluate the accuracy of our method both in 3D and with pose manipulation tasks on 2D images.
- Score: 54.634207317528364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a learning-based approach for recovering the 3D
geometry of a human head from a single portrait image. Our method is learned in
an unsupervised manner, without any ground-truth 3D data.
We represent the head geometry with a parametric 3D face model together with
a depth map for the other head regions, including hair and ears. A two-step
geometry learning scheme is proposed to learn 3D head reconstruction from
in-the-wild face images: we first learn the face shape on single images using
self-reconstruction, and then learn hair and ear geometry from pairs of images
in a stereo-matching fashion. The second step builds on the output of the
first, which not only improves accuracy but also ensures the consistency of
the overall head geometry.
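The stereo-matching idea behind the second step can be illustrated with a minimal photometric-consistency sketch. This is illustrative only: the function name and the nearest-neighbour sampling are assumptions, not the paper's implementation, which would use differentiable (e.g. bilinear) sampling inside a network.

```python
import numpy as np

def photometric_loss(img_a, img_b, coords_b):
    """L1 difference between view A and view B sampled at warped coordinates.

    img_a, img_b: (H, W) grayscale views of the same head
    coords_b: (H, W, 2) pixel coordinates in B for each pixel of A, as
      produced by depth-based reprojection (nearest-neighbour sampling
      here for simplicity).
    """
    H, W = img_a.shape
    # Round the warped coordinates to the nearest pixel and clamp to bounds.
    u = np.clip(np.round(coords_b[..., 0]).astype(int), 0, W - 1)
    v = np.clip(np.round(coords_b[..., 1]).astype(int), 0, H - 1)
    # Mean absolute photometric error between A and the warped B.
    return np.abs(img_a - img_b[v, u]).mean()
```

An identity warp over identical images yields zero loss; minimizing this quantity over the hair/ear depth map is the stereo-matching signal the abstract describes.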
We evaluate the accuracy of our method both in 3D and on pose manipulation
tasks on 2D images. We alter the head pose based on the recovered geometry
and apply a refinement network, trained with adversarial learning, to improve
the reprojected images and translate them into the real image domain.
Extensive evaluations and comparisons with previous methods show that our new
method can produce high-fidelity 3D head geometry and head pose manipulation
results.
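As a rough illustration of depth-based pose manipulation (a sketch under assumed pinhole-camera conventions, not the paper's actual pipeline), one can back-project each pixel with its recovered depth, apply a new rigid head pose, and reproject:

```python
import numpy as np

def reproject(depth, K, R, t):
    """Warp pixel coordinates under a new head pose.

    depth: (H, W) depth map recovered for the head region
    K: (3, 3) camera intrinsics; R, t: new rigid pose (rotation, translation)
    Returns the (H, W, 2) target pixel coordinates for each source pixel.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # (H, W, 3)
    # Back-project homogeneous pixels to camera-space 3D points.
    pts = (pix @ np.linalg.inv(K).T) * depth[..., None]
    # Apply the new rigid pose, then project back to the image plane.
    pts = pts @ R.T + t
    proj = pts @ K.T
    return proj[..., :2] / proj[..., 2:3]
```

Rendering the portrait at these warped coordinates leaves holes and artifacts, which is where the adversarial refinement network described above would come in.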
Related papers
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image
Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z) - PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360$^{\circ}$ [17.355141949293852]
Existing 3D generative adversarial networks (GANs) for 3D human head synthesis are either limited to near-frontal views or hard to preserve 3D consistency in large view angles.
We propose PanoHead, the first 3D-aware generative model that enables high-quality view-consistent image synthesis of full heads in $360^{\circ}$ with diverse appearance and detailed geometry.
arXiv Detail & Related papers (2023-03-23T06:54:34Z) - Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - Prior-Guided Multi-View 3D Head Reconstruction [28.126115947538572]
Previous multi-view stereo methods recover only low-frequency structures, producing unclear head shapes and inaccurate reconstructions in hair regions.
To tackle this problem, we propose a prior-guided implicit neural rendering network.
The utilization of these priors can improve the reconstruction accuracy and robustness, leading to a high-quality integrated 3D head model.
arXiv Detail & Related papers (2021-07-09T07:43:56Z) - Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and
Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from a single or multiple image(s) using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z) - Do 2D GANs Know 3D Shape? Unsupervised 3D shape reconstruction from 2D
Image GANs [156.1209884183522]
State-of-the-art 2D generative models like GANs show unprecedented quality in modeling the natural image manifold.
We present the first attempt to directly mine 3D geometric cues from an off-the-shelf 2D GAN that is trained on RGB images only.
arXiv Detail & Related papers (2020-11-02T09:38:43Z) - Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach to monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods, and propose to utilize 3D face landmarks for estimating pose parameters.
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images.
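Estimating pose parameters from 3D landmark correspondences can be done in closed form with the Kabsch algorithm. The sketch below is illustrative of that standard technique, not of the PGN itself, which is a learned network; the function name is hypothetical.

```python
import numpy as np

def estimate_pose(src, dst):
    """Rigid pose (R, t) aligning src landmarks to dst, via Kabsch.

    src, dst: (N, 3) corresponding 3D landmarks; returns R (3, 3) and t (3,)
    such that dst ~= src @ R.T + t.
    """
    c_s, c_d = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered landmark sets.
    H = (src - c_s).T @ (dst - c_d)
    U, S, Vt = np.linalg.svd(H)
    # Reflection correction keeps R a proper rotation (det = +1).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_d - R @ c_s
    return R, t
```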
arXiv Detail & Related papers (2020-10-09T06:11:17Z) - Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z) - Learning to Detect 3D Reflection Symmetry for Single-View Reconstruction [32.14605731030579]
3D reconstruction from a single RGB image is a challenging problem in computer vision.
Previous methods are usually solely data-driven, which leads to inaccurate 3D shape recovery and limited generalization capability.
We present a geometry-based end-to-end deep learning framework that first detects the mirror plane of reflection symmetry that commonly exists in man-made objects and then predicts depth maps by finding the intra-image pixel-wise correspondence of the symmetry.
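The mirror reflection underlying such symmetry-based correspondence is simple to write down. A minimal sketch, with an assumed plane parameterization n·x = d (not this paper's notation):

```python
import numpy as np

def reflect(points, n, d):
    """Reflect (N, 3) points across the plane n . x = d."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)  # ensure a unit normal
    # Signed distance of each point to the plane, then mirror across it.
    dist = points @ n - d
    return points - 2.0 * dist[:, None] * n
```

Pairing each pixel with the projection of its reflected 3D point is what supplies the intra-image correspondence used for depth prediction.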
arXiv Detail & Related papers (2020-06-17T17:58:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.