Sphere Face Model: A 3D Morphable Model with Hypersphere Manifold Latent Space
- URL: http://arxiv.org/abs/2112.02238v1
- Date: Sat, 4 Dec 2021 04:28:53 GMT
- Title: Sphere Face Model: A 3D Morphable Model with Hypersphere Manifold Latent Space
- Authors: Diqiong Jiang, Yiwei Jin, Fanglue Zhang, Zhe Zhu, Yun Zhang, Ruofeng Tong, Min Tang
- Abstract summary: We propose a novel 3DMM for monocular face reconstruction, which can preserve both shape fidelity and identity consistency.
The core of our SFM is the basis matrix which can be used to reconstruct 3D face shapes.
It produces high-fidelity face shapes that remain consistent under challenging conditions in monocular face reconstruction.
- Score: 14.597212159819403
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D Morphable Models (3DMMs) are generative models for face shape and
appearance. However, the shape parameters of traditional 3DMMs satisfy the
multivariate Gaussian distribution while the identity embeddings satisfy the
hypersphere distribution, and this conflict makes it challenging for face
reconstruction models to preserve the faithfulness and the shape consistency
simultaneously. To address this issue, we propose the Sphere Face Model (SFM),
a novel 3DMM for monocular face reconstruction that preserves both shape
fidelity and identity consistency. The core of our SFM is the basis matrix used
to reconstruct 3D face shapes; this basis matrix is learned with a two-stage
training approach in which 3D and 2D training data are used in the first and
second stages, respectively. To resolve the
distribution mismatch, we design a novel loss to make the shape parameters have
a hyperspherical latent space. Extensive experiments show that SFM has high
representation ability and that its shape parameter space exhibits strong
clustering performance. Moreover, it produces high-fidelity face shapes that
remain consistent under challenging conditions in monocular face reconstruction.
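As a rough illustration of the two ideas in the abstract, the sketch below combines a linear basis-matrix reconstruction with a penalty that keeps shape parameters near a unit hypersphere. This is a minimal PyTorch sketch, not the authors' implementation: the class and function names, the latent dimension, and the particular form of the loss are assumptions, and the two-stage 3D/2D training procedure is not reproduced.

```python
import torch
import torch.nn.functional as F

class SphericalShapeModel(torch.nn.Module):
    """Hypothetical linear shape model: S = mean_shape + basis @ alpha."""

    def __init__(self, num_vertices: int, latent_dim: int = 128):
        super().__init__()
        # Mean face and basis matrix; SFM learns its basis matrix from data,
        # here both are simply trainable parameters with toy initialisation.
        self.mean_shape = torch.nn.Parameter(torch.zeros(num_vertices * 3))
        self.basis = torch.nn.Parameter(0.01 * torch.randn(num_vertices * 3, latent_dim))

    def forward(self, alpha: torch.Tensor) -> torch.Tensor:
        # alpha: (batch, latent_dim) shape parameters -> flattened vertex coordinates.
        return self.mean_shape + alpha @ self.basis.t()

def hypersphere_loss(alpha: torch.Tensor) -> torch.Tensor:
    # Illustrative constraint (assumed, not the paper's exact loss): penalise
    # deviation of ||alpha|| from 1 so the shape parameters live on or near a
    # unit hypersphere, matching the distribution of identity embeddings.
    return ((alpha.norm(dim=-1) - 1.0) ** 2).mean()

if __name__ == "__main__":
    model = SphericalShapeModel(num_vertices=5000)
    alpha = F.normalize(torch.randn(8, 128), dim=-1)  # batch of unit-norm codes
    shapes = model(alpha)                             # (8, 15000) flattened vertices
    print(shapes.shape, hypersphere_loss(alpha).item())
```

In such a setup the hypersphere penalty would be added to whatever reconstruction and identity losses are used, so that the learned shape parameters and the face-recognition embeddings live in geometrically compatible latent spaces.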
Related papers
- NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation [52.772319840580074]
3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints.
Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation.
We introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling.
arXiv Detail & Related papers (2024-03-27T04:09:34Z)
- Learning Dense Correspondence for NeRF-Based Face Reenactment [24.072019889495966]
We propose a novel framework that adopts tri-planes as the fundamental NeRF representation and decomposes face tri-planes into three components: canonical tri-planes, identity deformations, and motion (a schematic sketch of this decomposition follows after this list).
Our framework is the first method that achieves one-shot multi-view face reenactment without a 3D parametric model prior.
arXiv Detail & Related papers (2023-12-16T11:31:34Z)
- Explorable Mesh Deformation Subspaces from Unstructured Generative Models [53.23510438769862]
Deep generative models of 3D shapes often feature continuous latent spaces that can be used to explore potential variations.
We present a method to explore variations among a given set of landmark shapes by constructing a mapping from an easily-navigable 2D exploration space to a subspace of a pre-trained generative model.
arXiv Detail & Related papers (2023-10-11T18:53:57Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- Disjoint Pose and Shape for 3D Face Reconstruction [4.096453902709292]
We propose an end-to-end pipeline that disjointly solves for pose and shape to make the optimization stable and accurate.
The proposed method achieves end-to-end topological consistency, enables an iterative face pose refinement procedure, and shows remarkable improvements in both quantitative and qualitative results.
arXiv Detail & Related papers (2023-08-26T15:18:32Z)
- Michelangelo: Conditional 3D Shape Generation based on Shape-Image-Text Aligned Latent Representation [47.945556996219295]
We present a novel alignment-before-generation approach to generate 3D shapes based on 2D images or texts.
Our framework comprises two models: a Shape-Image-Text-Aligned Variational Auto-Encoder (SITA-VAE) and a conditional Aligned Shape Latent Diffusion Model (ASLDM).
arXiv Detail & Related papers (2023-06-29T17:17:57Z)
- Implicit Neural Head Synthesis via Controllable Local Deformation Fields [12.191729556779972]
We build on part-based implicit shape models that decompose a global deformation field into local ones.
Our novel formulation models multiple implicit deformation fields with local semantic rig-like control via 3DMM-based parameters.
Our formulation renders sharper locally controllable nonlinear deformations than previous implicit monocular approaches.
arXiv Detail & Related papers (2023-04-21T16:35:28Z)
- Beyond 3DMM: Learning to Capture High-fidelity 3D Face Shape [77.95154911528365]
3D Morphable Model (3DMM) fitting has widely benefited face analysis due to its strong 3D prior.
Previous reconstructed 3D faces suffer from degraded visual verisimilitude due to the loss of fine-grained geometry.
This paper proposes a complete solution to capture the personalized shape so that the reconstructed shape looks identical to the corresponding person.
arXiv Detail & Related papers (2022-04-09T03:46:18Z)
- φ-SfT: Shape-from-Template with a Physics-Based Deformation Model [69.27632025495512]
Shape-from-Template (SfT) methods estimate 3D surface deformations from a single monocular RGB camera.
This paper proposes a new SfT approach explaining 2D observations through physical simulations.
arXiv Detail & Related papers (2022-03-22T17:59:57Z)
- Dense Non-Rigid Structure from Motion: A Manifold Viewpoint [162.88686222340962]
The Non-Rigid Structure-from-Motion (NRSfM) problem aims to recover the 3D geometry of a deforming object from its 2D feature correspondences across multiple frames.
We show that our approach significantly improves accuracy, scalability, and robustness against noise.
arXiv Detail & Related papers (2020-06-15T09:15:54Z)
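As referenced in the NeRF-based face reenactment entry above, the tri-plane decomposition can be illustrated with a short sketch. This is a hypothetical PyTorch illustration, not that paper's code: the function names, feature dimensions, and the additive way of combining canonical, identity, and motion tri-planes are assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def compose_triplanes(canonical, identity_deform, motion_deform):
    # Each argument: (3, C, H, W), one feature map per axis-aligned plane.
    # Assumed additive composition of canonical, identity, and motion parts.
    return canonical + identity_deform + motion_deform

def sample_triplane_features(triplanes, points):
    # points: (N, 3) in [-1, 1]^3. Project onto the XY, XZ, and YZ planes,
    # bilinearly sample each feature plane, and sum the three results.
    projections = [points[:, [0, 1]], points[:, [0, 2]], points[:, [1, 2]]]
    features = []
    for plane, uv in zip(triplanes, projections):
        grid = uv.view(1, -1, 1, 2)                          # (1, N, 1, 2)
        sampled = F.grid_sample(plane.unsqueeze(0), grid,
                                mode="bilinear", align_corners=True)
        features.append(sampled[0, :, :, 0].t())             # (N, C)
    return sum(features)                                     # (N, C)

canonical = torch.randn(3, 32, 64, 64)
identity_deform = 0.1 * torch.randn(3, 32, 64, 64)
motion_deform = 0.1 * torch.randn(3, 32, 64, 64)
points = torch.rand(16, 3) * 2 - 1
feats = sample_triplane_features(
    compose_triplanes(canonical, identity_deform, motion_deform), points)
print(feats.shape)  # torch.Size([16, 32])
```

In an actual reenactment pipeline the sampled features would be decoded into density and colour by a small MLP and rendered volumetrically; that part is omitted here.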