Reconstructing Recognizable 3D Face Shapes based on 3D Morphable Models
- URL: http://arxiv.org/abs/2104.03515v1
- Date: Thu, 8 Apr 2021 05:11:48 GMT
- Title: Reconstructing Recognizable 3D Face Shapes based on 3D Morphable Models
- Authors: Diqiong Jiang, Yiwei Jin, Risheng Deng, Ruofeng Tong, Fanglue Zhang,
Yukun Yai, Ming Tang
- Abstract summary: We propose a novel shape identity-aware regularization (SIR) loss for shape parameters, aiming at increasing discriminability in both the shape parameter and shape geometry domains.
We compare our method with existing methods in terms of the reconstruction error, visual distinguishability, and face recognition accuracy of the shape parameters.
- Score: 20.381926248856452
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many recent works have reconstructed distinctive 3D face shapes by
aggregating shape parameters of the same identity and separating those of
different people based on parametric models (e.g., 3D morphable models
(3DMMs)). However, despite the high accuracy in the face recognition task using
these shape parameters, the visual discrimination of face shapes reconstructed
from those parameters is unsatisfactory. The following research question has
not been answered in previous works: Do discriminative shape parameters
guarantee visual discrimination in represented 3D face shapes? This paper
analyzes the relationship between shape parameters and reconstructed shape
geometry and proposes a novel shape identity-aware regularization (SIR) loss for
shape parameters, aiming at increasing discriminability in both the shape
parameter and shape geometry domains. Moreover, to cope with the lack of
training data containing both landmark and identity annotations, we propose a
network structure and an associated training strategy to leverage mixed data
containing either identity or landmark labels. We compare our method with
existing methods in terms of the reconstruction error, visual
distinguishability, and face recognition accuracy of the shape parameters.
Experimental results show that our method outperforms the state-of-the-art
methods.
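For intuition, the two domains coupled by the SIR loss can be sketched in a few lines: a linear 3DMM maps shape parameters to geometry, and an identity-aware regularizer pulls same-identity parameter vectors together while pushing different identities apart. This is a minimal illustrative sketch, not the paper's actual formulation; `reconstruct_shape`, `identity_aware_reg`, and the margin value are hypothetical stand-ins.

```python
import numpy as np

def reconstruct_shape(mean_shape, basis, params):
    """Linear 3DMM: vertices = mean + basis @ params (flattened xyz)."""
    return mean_shape + basis @ params

def identity_aware_reg(params, labels, margin=1.0):
    """Illustrative contrastive-style regularizer on shape parameters:
    pull same-identity parameter vectors together, push different
    identities at least `margin` apart. A hypothetical stand-in for
    the paper's SIR loss, which also acts in the geometry domain."""
    params = np.asarray(params, dtype=float)
    labels = np.asarray(labels)
    n = len(labels)
    loss, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = np.linalg.norm(params[i] - params[j])
            if labels[i] == labels[j]:
                loss += d ** 2                     # attract same identity
            else:
                loss += max(0.0, margin - d) ** 2  # repel up to the margin
            pairs += 1
    return loss / pairs
```

Under this sketch, discriminative parameters (low regularizer value) do not automatically yield visually distinct meshes, since the basis can compress parameter differences; that gap is the research question the paper addresses.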
Related papers
- Multi-View Reconstruction using Signed Ray Distance Functions (SRDF) [22.75986869918975]
We investigate a new computational approach that builds on a novel shape representation that is volumetric.
The shape energy associated with this representation evaluates 3D geometry given color images and does not require appearance prediction.
In practice we propose an implicit shape representation, the SRDF, based on signed distances which we parameterize by depths along camera rays.
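The depth-along-ray parameterization described above can be roughly illustrated as follows (a hedged sketch; the function name and sign convention are assumptions, and the paper's full shape energy also aggregates photometric evidence across cameras):

```python
import numpy as np

def srdf_along_ray(depth, ts):
    """Signed ray distance function along one camera ray:
    positive for samples in front of the observed surface point
    at ray distance `depth`, zero at the surface, negative behind it."""
    return depth - np.asarray(ts, dtype=float)
```

The surface is recovered where the signed value crosses zero along each ray, which is what makes depths a natural parameterization of the implicit representation.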
arXiv Detail & Related papers (2022-08-31T19:32:17Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z) - Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z) - Sphere Face Model:A 3D Morphable Model with Hypersphere Manifold Latent
Space [14.597212159819403]
We propose a novel 3DMM for monocular face reconstruction, which can preserve both shape fidelity and identity consistency.
The core of our Sphere Face Model (SFM) is a basis matrix used to reconstruct 3D face shapes.
It produces high-fidelity face shapes that remain consistent under the challenging conditions of monocular face reconstruction.
arXiv Detail & Related papers (2021-12-04T04:28:53Z) - 3D Shape Variational Autoencoder Latent Disentanglement via Mini-Batch
Feature Swapping for Bodies and Faces [12.114711258010367]
We propose a self-supervised approach to train a 3D shape variational autoencoder which encourages a disentangled latent representation of identity features.
Experimental results conducted on 3D meshes show that state-of-the-art methods for latent disentanglement are not able to disentangle identity features of faces and bodies.
arXiv Detail & Related papers (2021-11-24T11:53:33Z) - Learning Canonical 3D Object Representation for Fine-Grained Recognition [77.33501114409036]
We propose a novel framework for fine-grained object recognition that learns to recover object variation in 3D space from a single image.
We represent an object as a composition of 3D shape and its appearance, while eliminating the effect of camera viewpoint.
By incorporating 3D shape and appearance jointly in a deep representation, our method learns the discriminative representation of the object.
arXiv Detail & Related papers (2021-08-10T12:19:34Z) - Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo
Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct a 3D face from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- From Points to Multi-Object 3D Reconstruction [71.17445805257196]
We propose a method to detect and reconstruct multiple 3D objects from a single RGB image.
A keypoint detector localizes objects as center points and directly predicts all object properties, including 9-DoF bounding boxes and 3D shapes.
The presented approach performs lightweight reconstruction in a single stage; it is real-time capable, fully differentiable, and end-to-end trainable.
arXiv Detail & Related papers (2020-12-21T18:52:21Z)
- Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach to monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods, and propose to utilize 3D face landmarks for estimating pose parameters.
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images.
arXiv Detail & Related papers (2020-10-09T06:11:17Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.