Prior-Guided Multi-View 3D Head Reconstruction
- URL: http://arxiv.org/abs/2107.04277v1
- Date: Fri, 9 Jul 2021 07:43:56 GMT
- Title: Prior-Guided Multi-View 3D Head Reconstruction
- Authors: Xueying Wang, Yudong Guo, Zhongqi Yang and Juyong Zhang
- Abstract summary: Previous multi-view stereo methods suffer from errors in low-frequency geometric structure, such as unclear head structures and inaccurate reconstruction in hair regions.
To tackle this problem, we propose a prior-guided implicit neural rendering network.
The utilization of these priors can improve the reconstruction accuracy and robustness, leading to a high-quality integrated 3D head model.
- Score: 28.126115947538572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering a 3D head model including the complete face and hair regions is
still a challenging problem in computer vision and graphics. In this paper, we
consider this problem with a few multi-view portrait images as input. Previous
multi-view stereo methods, either based on the optimization strategies or deep
learning techniques, suffer from low-frequency geometric structures such as
unclear head structures and inaccurate reconstruction in hair regions. To
tackle this problem, we propose a prior-guided implicit neural rendering
network. Specifically, we model the head geometry with a learnable signed
distance field (SDF) and optimize it via an implicit differentiable renderer
with the guidance of some human head priors, including the facial prior
knowledge, head semantic segmentation information and 2D hair orientation maps.
The utilization of these priors can improve the reconstruction accuracy and
robustness, leading to a high-quality integrated 3D head model. Extensive
ablation studies and comparisons with state-of-the-art methods demonstrate that
our method could produce high-fidelity 3D head geometries with the guidance of
these priors.
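The abstract outlines optimizing a learnable signed distance field through an implicit differentiable renderer under three priors (facial prior knowledge, head semantic segmentation, and 2D hair orientation maps). Below is a minimal sketch of how such a prior-guided objective could be assembled in PyTorch; the network layout, the `render_fn` interface, the batch keys, and the loss weights are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a prior-guided SDF objective (assumptions, not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class SDFNet(nn.Module):
    """Coordinate MLP mapping 3D points to a signed distance and a feature vector."""

    def __init__(self, hidden: int = 256, feat_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, hidden), nn.Softplus(beta=100),
            nn.Linear(hidden, 1 + feat_dim),
        )

    def forward(self, points: torch.Tensor):
        out = self.net(points)
        return out[..., :1], out[..., 1:]  # signed distance, geometry feature


def prior_guided_loss(sdf_net, render_fn, batch, weights=(1.0, 0.1, 0.1, 0.05)):
    """Combine a photometric loss with stand-ins for the three head priors.

    `render_fn` represents an implicit differentiable renderer returning
    per-pixel colors, a soft head mask, and projected 2D hair directions for
    the current SDF; its interface here is an assumption.
    """
    w_rgb, w_face, w_seg, w_hair = weights
    rgb, mask, hair_dir = render_fn(sdf_net, batch["rays"])

    # Photometric term against the input multi-view portrait images.
    loss_rgb = F.l1_loss(rgb, batch["gt_rgb"])

    # Facial prior: encourage the SDF to vanish on points sampled from a
    # fitted parametric face surface (one way to inject facial prior knowledge).
    sdf_on_face, _ = sdf_net(batch["face_prior_points"])
    loss_face = sdf_on_face.abs().mean()

    # Semantic segmentation prior: the rendered head mask should match the
    # 2D head segmentation of each view.
    loss_seg = F.binary_cross_entropy(
        mask.clamp(1e-4, 1.0 - 1e-4), batch["gt_head_mask"]
    )

    # Hair orientation prior: rendered hair directions should agree with the
    # 2D hair orientation maps, compared up to sign inside the hair region.
    cos = (F.normalize(hair_dir, dim=-1) * F.normalize(batch["gt_hair_orient"], dim=-1)).sum(-1)
    loss_hair = (1.0 - cos.abs())[batch["hair_mask"]].mean()

    # A typical eikonal regularizer on the SDF gradient is omitted for brevity.
    return w_rgb * loss_rgb + w_face * loss_face + w_seg * loss_seg + w_hair * loss_hair
```

In a training loop, this scalar loss would be backpropagated through both the renderer and the SDF network at each iteration.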
Related papers
- HeadRecon: High-Fidelity 3D Head Reconstruction from Monocular Video [37.53752896927615]
We study the reconstruction of high-fidelity 3D head models from arbitrary monocular videos.
We propose a prior-guided dynamic implicit neural network to tackle these problems.
arXiv Detail & Related papers (2023-12-14T12:38:56Z)
- Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation [56.267877301135634]
Current full head generation methods require a large number of 3D scans or multi-view images to train the model.
We propose Head3D, a method to generate full 3D heads with limited multi-view images.
Our model achieves cost-efficient and diverse complete head generation with photo-realistic renderings and high-quality geometry representations.
arXiv Detail & Related papers (2023-03-28T11:12:26Z)
- PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360$^{\circ}$ [17.355141949293852]
Existing 3D generative adversarial networks (GANs) for 3D human head synthesis are either limited to near-frontal views or struggle to preserve 3D consistency at large viewing angles.
We propose PanoHead, the first 3D-aware generative model that enables high-quality, view-consistent image synthesis of full heads in $360^{\circ}$ with diverse appearance and detailed geometry.
arXiv Detail & Related papers (2023-03-23T06:54:34Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- SIDER: Single-Image Neural Optimization for Facial Geometric Detail Recovery [54.64663713249079]
SIDER is a novel photometric optimization method that recovers detailed facial geometry from a single image in an unsupervised manner.
In contrast to prior work, SIDER does not rely on any dataset priors and does not require additional supervision from multiple views, lighting changes or ground truth 3D shape.
arXiv Detail & Related papers (2021-08-11T22:34:53Z)
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle these limitations for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z)
- Hybrid Approach for 3D Head Reconstruction: Using Neural Networks and Visual Geometry [3.970492757288025]
We present a novel method for reconstructing 3D heads from a single or multiple image(s) using a hybrid approach based on deep learning and geometric techniques.
We propose an encoder-decoder network based on the U-net architecture and trained on synthetic data only.
arXiv Detail & Related papers (2021-04-28T11:31:35Z)
- Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach for monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods, and propose to utilize 3D face landmarks for estimating pose parameters.
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images (a generic landmark-based pose fit is sketched after this list).
arXiv Detail & Related papers (2020-10-09T06:11:17Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- Deep 3D Portrait from a Single Image [54.634207317528364]
We present a learning-based approach for recovering the 3D geometry of a human head from a single portrait image.
A two-step geometry learning scheme is proposed to learn 3D head reconstruction from in-the-wild face images.
We evaluate the accuracy of our method both in 3D and with pose manipulation tasks on 2D images.
arXiv Detail & Related papers (2020-04-24T08:55:37Z)
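The pose guidance network entry above proposes using 3D face landmarks to estimate pose parameters. As a hedged, generic illustration of that idea (not that paper's method), the sketch below fits a weak-perspective pose from corresponding 3D and 2D landmarks with NumPy; the weak-perspective camera model and the function name are assumptions.

```python
# Generic weak-perspective pose fit from 3D-2D landmark correspondences.
# Illustrative sketch only; not taken from the PGN paper.
import numpy as np


def weak_perspective_pose(lm3d: np.ndarray, lm2d: np.ndarray):
    """Fit scale s, rotation R (3x3), and 2D translation t such that
    lm2d ~= s * (lm3d @ R.T)[:, :2] + t, using least squares and an SVD."""
    assert lm3d.ndim == 2 and lm3d.shape[1] == 3
    assert lm2d.shape == (lm3d.shape[0], 2)

    X = lm3d - lm3d.mean(axis=0)               # centered 3D landmarks (N, 3)
    x = lm2d - lm2d.mean(axis=0)               # centered 2D landmarks (N, 2)

    # Least-squares 2x3 projection P with x ~= X @ P.T.
    B, *_ = np.linalg.lstsq(X, x, rcond=None)  # B has shape (3, 2)
    P = B.T                                    # (2, 3)

    # Factor P into a scale and the first two rows of a rotation matrix.
    s = 0.5 * (np.linalg.norm(P[0]) + np.linalg.norm(P[1]))
    U, _, Vt = np.linalg.svd(P / s, full_matrices=False)
    R12 = U @ Vt                               # nearest 2x3 with orthonormal rows
    R = np.vstack([R12, np.cross(R12[0], R12[1])])

    t = lm2d.mean(axis=0) - s * (R @ lm3d.mean(axis=0))[:2]
    return s, R, t
```

Given predicted 3D landmarks and detected 2D landmarks, the returned (s, R, t) could be used to initialize or supervise a pose branch.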
This list is automatically generated from the titles and abstracts of the papers on this site.