Identity-Expression Ambiguity in 3D Morphable Face Models
- URL: http://arxiv.org/abs/2109.14203v1
- Date: Wed, 29 Sep 2021 06:11:43 GMT
- Title: Identity-Expression Ambiguity in 3D Morphable Face Models
- Authors: Bernhard Egger, Skylar Sutherland, Safa C. Medin, Joshua Tenenbaum
- Abstract summary: We show that non-orthogonality of the variation in identity and expression can cause identity-expression ambiguity in 3D Morphable Models.
We demonstrate this effect with 3D shapes directly as well as through an inverse rendering task.
- Score: 5.38250259923059
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D Morphable Models are a class of generative models commonly used to model
faces. They are typically applied to ill-posed problems such as 3D
reconstruction from 2D data. Several ambiguities in this problem's image
formation process have been studied explicitly. We demonstrate that
non-orthogonality of the variation in identity and expression can cause
identity-expression ambiguity in 3D Morphable Models, and that in practice
expression and identity are far from orthogonal and can explain each other
surprisingly well. Whilst previously reported ambiguities only arise in an
inverse rendering setting, identity-expression ambiguity emerges in the 3D
shape generation process itself. We demonstrate this effect with 3D shapes
directly as well as through an inverse rendering task, and use two popular
models built from high quality 3D scans as well as a model built from a large
collection of 2D images and videos. We explore this issue's implications for
inverse rendering and observe that it cannot be resolved by a purely
statistical prior on identity and expression deformations.
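To make the cross-explanation claim concrete, here is a minimal Python sketch of the measurement under a generic linear 3DMM assumption (shape = mean + U_id α + U_exp β). All dimensions and the random bases below are hypothetical placeholders rather than the paper's models; independent random bases are near-orthogonal and give a chance-level baseline, whereas the paper finds that the learned identity and expression bases of real models explain each other surprisingly well.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear 3DMM: shape = mean + U_id @ alpha + U_exp @ beta.
# Sizes are illustrative assumptions; real models (e.g. the Basel Face
# Model or FLAME) have tens of thousands of vertex coordinates and on
# the order of 100 components per subspace.
n_dims, k_id, k_exp = 3000, 80, 60

# Independent random bases stand in for learned PCA bases. They are
# near-orthogonal by construction, so they provide a chance-level
# baseline; the paper's point is that learned identity and expression
# bases overlap far more than this.
U_id = rng.standard_normal((n_dims, k_id)) / np.sqrt(n_dims)
U_exp = rng.standard_normal((n_dims, k_exp)) / np.sqrt(n_dims)

# A random identity deformation away from the mean shape.
alpha = rng.standard_normal(k_id)
d_id = U_id @ alpha

# Cross-explain it with expression coefficients alone:
#   min_beta || U_exp @ beta - d_id ||^2
beta, *_ = np.linalg.lstsq(U_exp, d_id, rcond=None)
explained = 1.0 - np.linalg.norm(d_id - U_exp @ beta) ** 2 / np.linalg.norm(d_id) ** 2
print(f"fraction of identity deformation explained by expression: {explained:.3f}")
# Roughly k_exp / n_dims = 0.02 for these random bases; the paper
# reports much larger overlap for the bases of real learned models.

# A purely statistical (Gaussian) prior on beta corresponds to ridge
# regression: it shrinks the coefficients but cannot remove the overlap
# between the two subspaces, matching the paper's observation that the
# ambiguity is not resolved by a statistical prior alone.
lam = 1.0
beta_map = np.linalg.solve(U_exp.T @ U_exp + lam * np.eye(k_exp), U_exp.T @ d_id)
```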
Related papers
- Sculpt3D: Multi-View Consistent Text-to-3D Generation with Sparse 3D Prior [57.986512832738704]
We present a new framework Sculpt3D that equips the current pipeline with explicit injection of 3D priors from retrieved reference objects without re-training the 2D diffusion model.
Specifically, we demonstrate that high-quality and diverse 3D geometry can be guaranteed by keypoint supervision through a sparse ray sampling approach.
These two decoupled designs effectively harness 3D information from reference objects to generate 3D objects while preserving the generation quality of the 2D diffusion model.
arXiv Detail & Related papers (2024-03-14T07:39:59Z)
- 4D Facial Expression Diffusion Model [3.507793603897647]
We introduce a generative framework for producing 3D facial expression sequences.
It comprises two tasks: learning a generative model over a set of 3D landmark sequences, and generating 3D mesh sequences of an input facial mesh driven by the generated landmark sequences.
Experiments show that our model learns to generate realistic, high-quality expressions solely from a dataset of relatively small size, improving over state-of-the-art methods.
arXiv Detail & Related papers (2023-03-29T11:50:21Z)
- CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields [52.14985242487535]
We propose a new conditional 3D face synthesis framework, which enables 3D controllability over generated face images.
At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh.
Experiments validate the effectiveness of the proposed method and show more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
arXiv Detail & Related papers (2022-11-23T19:02:50Z)
- Controllable 3D Generative Adversarial Face Model via Disentangling Shape and Appearance [63.13801759915835]
3D face modeling has been an active area of research in computer vision and computer graphics.
This paper proposes a new 3D face generative model that can decouple identity and expression.
arXiv Detail & Related papers (2022-08-30T13:40:48Z)
- Disentangled3D: Learning a 3D Generative Model with Disentangled Geometry and Appearance from Monocular Images [94.49117671450531]
State-of-the-art 3D generative models are GANs which use neural 3D volumetric representations for synthesis.
In this paper, we design a 3D GAN which can learn a disentangled model of objects, just from monocular observations.
arXiv Detail & Related papers (2022-03-29T22:03:18Z)
- Building 3D Generative Models from Minimal Data [3.472931603805115]
We show that our approach can be used to perform face recognition using only a single 3D template (one scan total, not one per person).
We extend our model to a preliminary unsupervised learning framework that enables the learning of the distribution of 3D faces using one 3D template and a small number of 2D images.
arXiv Detail & Related papers (2022-03-04T20:10:50Z)
- Building 3D Morphable Models from a Single Scan [3.472931603805115]
We propose a method for constructing generative models of 3D objects from a single 3D mesh.
Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes (a toy sketch of this Gaussian-process idea appears after this list).
We show that our approach can be used to perform face recognition using only a single 3D scan.
arXiv Detail & Related papers (2020-11-24T23:08:14Z)
- Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z)
- Canonical 3D Deformer Maps: Unifying parametric and non-parametric methods for dense weakly-supervised category reconstruction [79.98689027127855]
We propose a new representation of the 3D shape of common object categories that can be learned from a collection of 2D images of independent objects.
Our method builds in a novel way on concepts from parametric deformation models, non-parametric 3D reconstruction, and canonical embeddings.
It achieves state-of-the-art results in dense 3D reconstruction on public in-the-wild datasets of faces, cars, and birds.
arXiv Detail & Related papers (2020-08-28T15:44:05Z)
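As a companion to the single-scan entry above, the following is a minimal, hypothetical sketch of the Gaussian-process idea it describes: a smoothness kernel defined over a single template turns one mesh into a distribution over plausible shapes. The grid template, kernel choice, and hyperparameters are illustrative assumptions, not the paper's actual construction, which additionally models albedo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy "single scan": a flat 10x10 grid of 3D points stands in for the
# one template mesh. The grid and all hyperparameters are assumptions
# chosen for illustration only.
xs, ys = np.meshgrid(np.linspace(-1.0, 1.0, 10), np.linspace(-1.0, 1.0, 10))
template = np.stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)], axis=1)  # (100, 3)

def rbf_kernel(pts, length_scale=0.5, variance=0.02):
    """Squared-exponential kernel over template points: nearby points
    deform together, the smoothness prior that lets a single mesh
    define a whole family of shapes."""
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-d2 / (2.0 * length_scale ** 2))

K = rbf_kernel(template)
L = np.linalg.cholesky(K + 1e-8 * np.eye(len(template)))  # jitter for stability

# Sample one smooth deformation field per coordinate axis and apply it,
# yielding a new plausible shape drawn from the one-scan prior.
deformation = L @ rng.standard_normal((len(template), 3))
sample_shape = template + deformation
print(sample_shape.shape)  # (100, 3)
```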
This list is automatically generated from the titles and abstracts of the papers on this site.