i3DMM: Deep Implicit 3D Morphable Model of Human Heads
- URL: http://arxiv.org/abs/2011.14143v1
- Date: Sat, 28 Nov 2020 15:01:53 GMT
- Title: i3DMM: Deep Implicit 3D Morphable Model of Human Heads
- Authors: Tarun Yenamandra, Ayush Tewari, Florian Bernard, Hans-Peter Seidel,
Mohamed Elgharib, Daniel Cremers, Christian Theobalt
- Abstract summary: We present the first deep implicit 3D morphable model (i3DMM) of full heads.
It not only captures identity-specific geometry, texture, and expressions of the frontal face, but also models the entire head, including hair.
We show the merits of i3DMM using ablation studies, comparisons to state-of-the-art models, and applications such as semantic head editing and texture transfer.
- Score: 115.19943330455887
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present the first deep implicit 3D morphable model (i3DMM) of full heads.
Unlike earlier morphable face models, it not only captures identity-specific
geometry, texture, and expressions of the frontal face, but also models the
entire head, including hair. We collect a new dataset consisting of 64 people
with different expressions and hairstyles to train i3DMM. Our approach has the
following favorable properties: (i) It is the first full head morphable model
that includes hair. (ii) In contrast to mesh-based models, it can be trained on
merely rigidly aligned scans, without requiring difficult non-rigid
registration. (iii) We design a novel architecture to decouple the shape model
into an implicit reference shape and a deformation of this reference shape.
With that, dense correspondences between shapes can be learned implicitly. (iv)
This architecture allows us to semantically disentangle the geometry and color
components, as color is learned in the reference space. Geometry is further
disentangled as identity, expressions, and hairstyle, while color is
disentangled as identity and hairstyle components. We show the merits of i3DMM
using ablation studies, comparisons to state-of-the-art models, and
applications such as semantic head editing and texture transfer. We will make
our model publicly available.
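The decoupled architecture in (iii) and (iv) can be pictured as three coordinate-based networks chained together: a deformation field that maps each query point (conditioned on identity, expression, and hairstyle codes) into a shared reference space, a reference SDF that stores the head geometry once for all subjects, and a color field evaluated at the deformed reference-space point. Below is a minimal PyTorch sketch of that decoupling, in the spirit of auto-decoder SDF models; all module names, latent names, and dimensions (I3DMMSketch, z_id, z_expr, etc.) are our illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=256, depth=4):
    # Plain ReLU MLP used for all three coordinate-based fields.
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

class I3DMMSketch(nn.Module):
    def __init__(self, z_geo=64, z_col=64):
        super().__init__()
        # Deformation field: query point + geometry codes -> offset into
        # the shared reference space (identity, expression, hairstyle).
        self.deform = mlp(3 + 3 * z_geo, 3)
        # Reference SDF: one person-independent implicit head shape.
        self.ref_sdf = mlp(3, 1)
        # Color field: RGB is predicted at the reference-space point, so
        # appearance is decoupled from deformation (identity + hairstyle).
        self.color = mlp(3 + 2 * z_col, 3)

    def forward(self, x, z_id, z_expr, z_hair, z_col_id, z_col_hair):
        # Warp query points into the reference frame. Points on different
        # heads that land at the same reference location correspond, which
        # is how dense correspondences can be learned implicitly.
        x_ref = x + self.deform(torch.cat([x, z_id, z_expr, z_hair], -1))
        sdf = self.ref_sdf(x_ref)
        rgb = self.color(torch.cat([x_ref, z_col_id, z_col_hair], -1))
        return sdf, rgb
```

Under this reading, semantic editing reduces to swapping latent codes: keeping a subject's geometry codes while replacing only the color codes would give the kind of texture transfer mentioned above, and the surface itself can be extracted by evaluating the SDF on a grid and running marching cubes.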
Related papers
- HAAR: Text-Conditioned Generative Model of 3D Strand-based Human
Hairstyles [85.12672855502517]
We present HAAR, a new strand-based generative model for 3D human hairstyles.
Based on textual inputs, HAAR produces 3D hairstyles that could be used as production-level assets in modern computer graphics engines.
arXiv Detail & Related papers (2023-12-18T19:19:32Z)
- Single-Shot Implicit Morphable Faces with Consistent Texture Parameterization [91.52882218901627]
We propose a novel method for constructing implicit 3D morphable face models that are both generalizable and intuitive for editing.
Our method improves upon photo-realism, geometry, and expression accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T17:58:40Z)
- FaceVerse: a Fine-grained and Detail-controllable 3D Face Morphable Model from a Hybrid Dataset [36.688730105295015]
FaceVerse is built from hybrid East Asian face datasets containing 60K fused RGB-D images and 2K high-fidelity 3D head scan models.
In the coarse module, we generate a base parametric model from the large-scale RGB-D images, which can predict accurate coarse 3D face models across different genders, ages, etc.
In the fine module, a conditional StyleGAN architecture trained on the high-fidelity scan models is introduced to add elaborate facial geometry and texture details.
arXiv Detail & Related papers (2022-03-26T12:13:14Z)
- Identity-Expression Ambiguity in 3D Morphable Face Models [5.38250259923059]
We show that non-orthogonality of the variation in identity and expression can cause identity-expression ambiguity in 3D Morphable Models.
We demonstrate this effect with 3D shapes directly as well as through an inverse rendering task.
arXiv Detail & Related papers (2021-09-29T06:11:43Z)
- imGHUM: Implicit Generative Models of 3D Human Shape and Articulated Pose [42.4185273307021]
We present imGHUM, the first holistic generative model of 3D human shape and articulated pose.
We model the full human body implicitly, as the zero-level set of a function, without the use of an explicit template mesh.
arXiv Detail & Related papers (2021-08-24T17:08:28Z)
- Detailed Avatar Recovery from Single Image [50.82102098057822]
This paper presents a novel framework to recover a detailed avatar from a single image.
We use deep neural networks to refine the 3D shape in a Hierarchical Mesh Deformation framework.
Our method can restore detailed human body shapes with complete textures beyond skinned models.
arXiv Detail & Related papers (2021-08-06T03:51:26Z)
- Building 3D Morphable Models from a Single Scan [3.472931603805115]
We propose a method for constructing generative models of 3D objects from a single 3D mesh.
Our method produces a 3D morphable model that represents shape and albedo in terms of Gaussian processes.
We show that our approach can be used to perform face recognition using only a single 3D scan.
arXiv Detail & Related papers (2020-11-24T23:08:14Z)
- Learning Complete 3D Morphable Face Models from Images and Videos [88.34033810328201]
We present the first approach to learn complete 3D models of face identity geometry, albedo and expression just from images and videos.
We show that our learned models better generalize and lead to higher quality image-based reconstructions than existing approaches.
arXiv Detail & Related papers (2020-10-04T20:51:23Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
Such representations are essential for building flexible models in both computer graphics and computer vision.
We present methodology that combines detail-rich implicit functions and parametric representations.
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.