Adaptive 3D Face Reconstruction from a Single Image
- URL: http://arxiv.org/abs/2007.03979v2
- Date: Sun, 13 Sep 2020 07:29:24 GMT
- Title: Adaptive 3D Face Reconstruction from a Single Image
- Authors: Kun Li, Jing Yang, Nianhong Jiao, Jinsong Zhang, and Yu-Kun Lai
- Abstract summary: We propose a novel joint 2D and 3D optimization method to adaptively reconstruct 3D face shapes from a single image.
Experimental results on multiple datasets demonstrate that our method can generate high-quality reconstructions from a single color image.
- Score: 45.736818498242016
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D face reconstruction from a single image is a challenging problem,
especially under partial occlusions and extreme poses, because the uncertainty of
the estimated 2D landmarks affects the quality of the reconstruction. In this
paper, we propose a novel joint 2D and 3D optimization method to adaptively
reconstruct 3D face shapes from a single image, which combines the depths of 3D
landmarks to resolve the uncertain detections of invisible landmarks. Our strategy
involves two aspects: a coarse-to-fine pose estimation using both 2D and 3D
landmarks, and an adaptive 2D and 3D re-weighting based on the refined pose
parameters to recover accurate 3D faces. Experimental results on multiple datasets
demonstrate that our method generates high-quality reconstructions from a single
color image and is robust to self-occlusion and large poses.
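To make the adaptive re-weighting idea concrete, here is a minimal Python sketch under assumed conventions (it is not the authors' implementation): landmarks whose approximate outward directions face away from the camera under the current coarse head rotation are treated as likely self-occluded and are down-weighted in the 2D fitting term. The visibility heuristic, the sigmoid weighting, and the loss form are illustrative assumptions.

```python
import numpy as np

def landmark_weights(landmarks_3d, rotation, sharpness=5.0):
    """Illustrative re-weighting: landmarks whose approximate outward directions
    face away from the camera under the estimated head rotation are likely
    self-occluded, so their 2D detections receive lower weight.
    landmarks_3d is (N, 3); rotation is a 3x3 matrix (both hypothetical inputs)."""
    # Approximate each landmark's outward direction by its offset from the centroid.
    normals = landmarks_3d - landmarks_3d.mean(axis=0)
    normals = normals / (np.linalg.norm(normals, axis=1, keepdims=True) + 1e-8)
    # Rotate into camera coordinates; assume +z points toward the camera.
    visibility = (normals @ rotation.T)[:, 2]
    # Soft weights in (0, 1): visible landmarks near 1, occluded ones near 0.
    return 1.0 / (1.0 + np.exp(-sharpness * visibility))

def weighted_landmark_loss(pred_2d, detected_2d, weights):
    """Re-weighted 2D landmark term used while refining shape and pose."""
    residuals = np.linalg.norm(pred_2d - detected_2d, axis=1)
    return np.sum(weights * residuals ** 2) / (np.sum(weights) + 1e-8)
```

In a coarse-to-fine loop, weights like these would be recomputed each time the pose estimate is refined, so that poorly detected invisible landmarks have progressively less influence on the recovered shape.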
Related papers
- Disjoint Pose and Shape for 3D Face Reconstruction [4.096453902709292]
We propose an end-to-end pipeline that disjointly solves for pose and shape to make the optimization stable and accurate.
The proposed method achieves end-to-end topological consistency, enables an iterative face pose refinement procedure, and shows remarkable improvements in both quantitative and qualitative results.
arXiv Detail & Related papers (2023-08-26T15:18:32Z)
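As a rough sketch of what disjointly solving for pose and shape with iterative refinement can look like (a generic alternating scheme, not necessarily the pipeline of the paper above), the pose and shape sub-problems are solved in turn while the other variable is held fixed; `estimate_pose` and `estimate_shape` below are hypothetical sub-solvers.

```python
import numpy as np

def fit_pose_and_shape(estimate_pose, estimate_shape, image, init_shape,
                       num_iters=5, tol=1e-4):
    """Generic alternating refinement: pose and shape are solved disjointly.
    estimate_pose(image, shape) and estimate_shape(image, pose) are hypothetical
    sub-solvers standing in for the two stages of such a pipeline."""
    shape = np.asarray(init_shape, dtype=float)
    pose = estimate_pose(image, shape)           # coarse pose from the initial shape
    for _ in range(num_iters):
        new_shape = np.asarray(estimate_shape(image, pose), dtype=float)  # shape step, pose fixed
        pose = estimate_pose(image, new_shape)   # pose step, shape fixed
        converged = np.linalg.norm(new_shape - shape) < tol
        shape = new_shape
        if converged:                            # stop once shape updates are negligible
            break
    return pose, shape
```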
- LIST: Learning Implicitly from Spatial Transformers for Single-View 3D Reconstruction [5.107705550575662]
LIST is a novel neural architecture that leverages local and global image features to reconstruct the geometric and topological structure of a 3D object from a single image.
We show the superiority of our model over the state of the art in reconstructing 3D objects from both synthetic and real-world images.
arXiv Detail & Related papers (2023-07-23T01:01:27Z)
- RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs [13.11105614044699]
We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR).
A large-scale pseudo 2D&3D dataset is created by first rendering detailed 3D faces, then swapping the faces in in-the-wild images with the rendered faces.
Our model outperforms previous methods on the FaceScape-wild/lab and MICC benchmarks.
arXiv Detail & Related papers (2023-02-10T19:40:26Z)
- Deep-MDS Framework for Recovering the 3D Shape of 2D Landmarks from a Single Image [8.368476827165114]
This paper proposes a framework to recover the 3D shape of 2D landmarks on a human face from a single input image.
A deep neural network learns the pairwise dissimilarities among the 2D landmarks, which are then used by a non-metric multidimensional scaling (NMDS) approach.
arXiv Detail & Related papers (2022-10-27T06:20:10Z)
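To illustrate the NMDS step in the entry above, the sketch below uses scikit-learn's non-metric MDS as a stand-in solver: given a pairwise dissimilarity matrix over facial landmarks (random placeholder data here rather than network predictions), it embeds the landmarks in 3D, up to an arbitrary rotation, reflection, and scale.

```python
import numpy as np
from sklearn.manifold import MDS

# Placeholder: a symmetric pairwise dissimilarity matrix over N facial landmarks,
# standing in for the dissimilarities predicted by the deep network.
N = 68
rng = np.random.default_rng(0)
fake_points = rng.normal(size=(N, 3))  # synthetic geometry, for demonstration only
dissimilarity = np.linalg.norm(fake_points[:, None] - fake_points[None, :], axis=-1)

# Non-metric MDS seeks a 3D embedding whose distance ordering matches the
# dissimilarities; the result is defined only up to rotation, reflection, and scale.
nmds = MDS(n_components=3, metric=False, dissimilarity="precomputed",
           n_init=4, max_iter=300, random_state=0)
landmarks_3d = nmds.fit_transform(dissimilarity)
print(landmarks_3d.shape)  # (68, 3)
```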
- Perspective Reconstruction of Human Faces by Joint Mesh and Landmark Regression [89.8129467907451]
We propose to simultaneously reconstruct the 3D face mesh in world space and predict 2D face landmarks on the image plane.
Based on the predicted 3D and 2D landmarks, the 6DoF (six degrees of freedom) face pose can be easily estimated by a standard solver.
arXiv Detail & Related papers (2022-08-15T12:32:20Z)
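Recovering a 6DoF pose from paired 3D and 2D landmarks is commonly posed as a Perspective-n-Point (PnP) problem. The sketch below uses OpenCV's generic PnP solver with made-up correspondences and intrinsics; it is not tied to the solver used in the paper above.

```python
import numpy as np
import cv2

# Placeholder correspondences: N predicted 3D landmarks and their 2D projections.
N = 68
rng = np.random.default_rng(0)
pts_3d = rng.uniform(-80.0, 80.0, size=(N, 3)).astype(np.float32)   # e.g. millimetres
pts_2d = rng.uniform(0.0, 640.0, size=(N, 2)).astype(np.float32)    # pixels

# Assumed pinhole intrinsics (focal length and principal point are made up).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]], dtype=np.float32)
dist_coeffs = np.zeros(5, dtype=np.float32)  # assume no lens distortion

# Generic PnP: recovers rotation (Rodrigues vector) and translation,
# i.e. the six degrees of freedom of the face pose.
ok, rvec, tvec = cv2.solvePnP(pts_3d, pts_2d, K, dist_coeffs,
                              flags=cv2.SOLVEPNP_EPNP)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
print(ok, R.shape, tvec.ravel())
```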
- RiCS: A 2D Self-Occlusion Map for Harmonizing Volumetric Objects [68.85305626324694]
Ray-marching in Camera Space (RiCS) is a new method that represents the self-occlusions of 3D foreground objects in a 2D self-occlusion map.
We show that our representation not only enhances image quality but also models temporally coherent, complex shadow effects.
arXiv Detail & Related papers (2022-05-14T05:35:35Z)
- Towards 3D Face Reconstruction in Perspective Projection: Estimating 6DoF Face Pose from Monocular Image [48.77844225075744]
In scenarios where the face is very close to the camera or moving along the camera axis, existing methods suffer from inaccurate reconstruction and unstable temporal fitting.
A deep neural network, Perspective Network (PerspNet), is proposed to reconstruct the 3D face shape in canonical space.
We contribute a large ARKitFace dataset to enable the training and evaluation of 3D face reconstruction solutions under perspective projection.
arXiv Detail & Related papers (2022-05-09T08:49:41Z)
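The close-range failure mode mentioned above can be illustrated by comparing full perspective projection with the weak-perspective approximation that many face-fitting pipelines assume; the focal length, principal point, and point cloud below are made up for the comparison.

```python
import numpy as np

def perspective_project(points_cam, f, cx, cy):
    """Full pinhole projection: u = f*X/Z + cx, v = f*Y/Z + cy."""
    X, Y, Z = points_cam[:, 0], points_cam[:, 1], points_cam[:, 2]
    return np.stack([f * X / Z + cx, f * Y / Z + cy], axis=1)

def weak_perspective_project(points_cam, f, cx, cy):
    """Weak-perspective approximation: every point shares the mean depth."""
    Z_mean = points_cam[:, 2].mean()
    return np.stack([f * points_cam[:, 0] / Z_mean + cx,
                     f * points_cam[:, 1] / Z_mean + cy], axis=1)

# Made-up face-sized point cloud (~10 cm across), placed far from and then close to the camera.
rng = np.random.default_rng(0)
face = rng.normal(scale=0.05, size=(68, 3))
for depth in (2.0, 0.3):  # distance from the camera in metres
    pts = face + np.array([0.0, 0.0, depth])
    gap = np.linalg.norm(perspective_project(pts, 600.0, 320.0, 240.0)
                         - weak_perspective_project(pts, 600.0, 320.0, 240.0), axis=1)
    print(f"depth = {depth} m, max weak-perspective error: {gap.max():.1f} px")
```

With these made-up numbers, the approximation error grows roughly as 1/Z^2, so it is dozens of times larger at 0.3 m than at 2 m, which is why a perspective-aware formulation matters when the face is close to the camera.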
- 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective [96.65476492200648]
This work studies a self-supervised 3D clothing reconstruction method.
It recovers the geometric shape and texture of human clothing from a single 2D image.
arXiv Detail & Related papers (2022-04-27T17:46:55Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previously reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
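For context on the 3DMM fitting mentioned in the entry above, a linear morphable model represents a face as a mean shape plus identity and expression bases weighted by low-dimensional coefficients. The sketch below uses random placeholder bases and dimensions rather than a real model such as BFM.

```python
import numpy as np

# Placeholder linear 3DMM: V vertices, 80 identity and 64 expression components.
V, n_id, n_exp = 5000, 80, 64
rng = np.random.default_rng(0)
mean_shape = rng.normal(size=3 * V)          # stacked (x, y, z) coordinates
id_basis = rng.normal(size=(3 * V, n_id))    # identity (shape) basis
exp_basis = rng.normal(size=(3 * V, n_exp))  # expression basis

def reconstruct_shape(alpha, beta):
    """Linear 3DMM: S = S_mean + A_id @ alpha + A_exp @ beta."""
    s = mean_shape + id_basis @ alpha + exp_basis @ beta
    return s.reshape(V, 3)

# Build one face shape from small random coefficients.
alpha = 0.1 * rng.normal(size=n_id)
beta = 0.1 * rng.normal(size=n_exp)
vertices = reconstruct_shape(alpha, beta)
print(vertices.shape)  # (5000, 3)
```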
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.