Survey on 3D face reconstruction from uncalibrated images
- URL: http://arxiv.org/abs/2011.05740v2
- Date: Fri, 26 Feb 2021 08:32:32 GMT
- Title: Survey on 3D face reconstruction from uncalibrated images
- Authors: Araceli Morales, Gemma Piella and Federico M. Sukno
- Abstract summary: Despite providing a more accurate representation of the face, 3D facial images are more complex to acquire than 2D pictures.
The 3D-from-2D face reconstruction problem is ill-posed, thus prior knowledge is needed to restrict the solution space.
We review 3D face reconstruction methods proposed in the last decade, focusing on those that only use 2D pictures captured under uncontrolled conditions.
- Score: 3.004265855622696
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recently, a lot of attention has been focused on the incorporation of 3D data
into face analysis and its applications. Despite providing a more accurate
representation of the face, 3D facial images are more complex to acquire than
2D pictures. As a consequence, great effort has been invested in developing
systems that reconstruct 3D faces from an uncalibrated 2D image. However, the
3D-from-2D face reconstruction problem is ill-posed, thus prior knowledge is
needed to restrict the solution space. In this work, we review 3D face
reconstruction methods proposed in the last decade, focusing on those that only
use 2D pictures captured under uncontrolled conditions. We present a
classification of the proposed methods based on the technique used to add prior
knowledge, considering three main strategies, namely, statistical model
fitting, photometry, and deep learning, and reviewing each of them separately.
In addition, given the relevance of statistical 3D facial models as prior
knowledge, we explain the construction procedure and provide a list of the most
popular publicly available 3D facial models. After the exhaustive study of
3D-from-2D face reconstruction approaches, we observe that the deep learning
strategy has grown rapidly in the last few years, replacing the once-widespread
statistical model fitting as the standard choice. In contrast, photometry-based
methods have decreased in number because they require strong underlying
assumptions that limit the quality of their reconstructions relative to
statistical model fitting and deep learning methods. The review also
identifies current challenges and suggests avenues for
future research.
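The statistical model fitting strategy mentioned above restricts the ill-posed solution space with a linear statistical shape model (e.g. a 3D morphable model): a face is the mean shape plus a linear combination of learned basis modes, and fitting recovers the combination coefficients from 2D observations. A minimal illustrative sketch is below, using toy sizes, a random basis, and a known orthographic projection; it is not any specific model's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy statistical shape model: mean shape plus a linear basis (PCA-style).
# Real models such as the Basel Face Model have tens of thousands of
# vertices; the sizes here are illustrative only.
n_vertices, n_components = 50, 5
mean_shape = rng.normal(size=(n_vertices, 3))
basis = rng.normal(size=(n_vertices * 3, n_components))

def reconstruct(coeffs):
    """Shape instance = mean shape + linear combination of basis modes."""
    return mean_shape + (basis @ coeffs).reshape(n_vertices, 3)

# Simulate a "ground-truth" face and its 2D landmarks under a known
# orthographic projection (simply drop the z coordinate).
true_coeffs = rng.normal(size=n_components)
landmarks_2d = reconstruct(true_coeffs)[:, :2]

# Fitting: with a known pose/projection, recovering the model
# coefficients is a linear least-squares problem.
P = basis.reshape(n_vertices, 3, n_components)[:, :2, :].reshape(-1, n_components)
b = (landmarks_2d - mean_shape[:, :2]).ravel()
est_coeffs, *_ = np.linalg.lstsq(P, b, rcond=None)

print(np.allclose(est_coeffs, true_coeffs, atol=1e-6))
```

In practice the pose and projection are unknown, so real fitting pipelines alternate between (or jointly optimize) pose and shape coefficients, often with a prior penalty on the coefficients; the linear step above is the core of each shape update.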
Related papers
- State of the Art in Dense Monocular Non-Rigid 3D Reconstruction [100.9586977875698]
3D reconstruction of deformable (or non-rigid) scenes from a set of monocular 2D image observations is a long-standing and actively researched area of computer vision and graphics.
This survey focuses on state-of-the-art methods for dense non-rigid 3D reconstruction of various deformable objects and composite scenes from monocular videos or sets of monocular views.
arXiv Detail & Related papers (2022-10-27T17:59:53Z) - 3D Magic Mirror: Clothing Reconstruction from a Single Image via a Causal Perspective [96.65476492200648]
This research aims to study a self-supervised 3D clothing reconstruction method.
It recovers the geometric shape and texture of human clothing from a single 2D image.
arXiv Detail & Related papers (2022-04-27T17:46:55Z) - Realistic face animation generation from videos [2.398608007786179]
3D face reconstruction and face alignment are two fundamental and highly related topics in computer vision.
Recently, some works have started using deep learning models to estimate 3DMM coefficients and reconstruct the 3D face geometry.
To address this problem, end-to-end methods that completely bypass the calculation of 3DMM coefficients have been proposed.
arXiv Detail & Related papers (2021-03-27T20:18:14Z) - Model-based 3D Hand Reconstruction via Self-Supervised Learning [72.0817813032385]
Reconstructing a 3D hand from a single-view RGB image is challenging due to various hand configurations and depth ambiguity.
We propose S2HAND, a self-supervised 3D hand reconstruction network that can jointly estimate pose, shape, texture, and the camera viewpoint.
For the first time, we demonstrate the feasibility of training an accurate 3D hand reconstruction network without relying on manual annotations.
arXiv Detail & Related papers (2021-03-22T10:12:43Z) - Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face Identification [9.159921061636695]
We propose a framework of 2D-aided deep 3D face identification.
In particular, we propose to reconstruct millions of 3D face scans from a large scale 2D face database.
Our proposed approach achieves state-of-the-art rank-1 scores on the FRGC v2.0, Bosphorus, and BU-3DFE 3D face databases.
arXiv Detail & Related papers (2020-10-16T13:48:38Z) - Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach to monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods, and propose to utilize 3D face landmarks for estimating pose parameters.
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images.
arXiv Detail & Related papers (2020-10-09T06:11:17Z) - Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z) - Neural Descent for Visual 3D Human Pose and Shape [67.01050349629053]
We present a deep neural network methodology to reconstruct the 3D pose and shape of people from an input RGB image.
We rely on a recently introduced, expressive full-body statistical 3D human model, GHUM, trained end-to-end.
Central to our methodology is a learning-to-learn-and-optimize approach, referred to as HUman Neural Descent (HUND), which avoids second-order differentiation.
arXiv Detail & Related papers (2020-08-16T13:38:41Z) - Self-Supervised Monocular 3D Face Reconstruction by Occlusion-Aware Multi-view Geometry Consistency [40.56510679634943]
We propose a self-supervised training architecture by leveraging the multi-view geometry consistency.
We design three novel loss functions for multi-view consistency, including the pixel consistency loss, the depth consistency loss, and the facial landmark-based epipolar loss.
Our method is accurate and robust, especially under large variations of expressions, poses, and illumination conditions.
arXiv Detail & Related papers (2020-07-24T12:36:09Z) - Adaptive 3D Face Reconstruction from a Single Image [45.736818498242016]
We propose a novel joint 2D and 3D optimization method to adaptively reconstruct 3D face shapes from a single image.
Experimental results on multiple datasets demonstrate that our method can generate high-quality reconstruction from a single color image.
arXiv Detail & Related papers (2020-07-08T09:35:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.