HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs
- URL: http://arxiv.org/abs/2312.14140v1
- Date: Thu, 21 Dec 2023 18:57:52 GMT
- Title: HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs
- Authors: Artem Sevastopolsky, Philip-William Grassal, Simon Giebenhain,
ShahRukh Athar, Luisa Verdoliva, Matthias Niessner
- Abstract summary: We introduce a generative model for detailed 3D head meshes on top of an articulated 3DMM.
We train a StyleGAN model to generalize over the UV maps of displacements.
We demonstrate the results of unconditional generation and fitting to full or partial observations.
- Score: 9.790185628415301
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current advances in human head modeling make it possible to generate
plausible-looking 3D head models via neural representations. Nevertheless,
constructing complete high-fidelity head models with explicitly controlled
animation remains an issue. Furthermore, completing the head geometry based on
a partial observation, e.g., from a depth sensor, while preserving details is
often problematic for existing methods. We introduce a generative model for
detailed 3D head meshes on top of an articulated 3DMM, which allows explicit
animation and high-detail preservation at the same time. Our method is trained
in two stages. First, we register a parametric head model with vertex
displacements to each mesh of the recently introduced NPHM dataset of accurate
3D head scans. The estimated displacements are baked into a hand-crafted UV
layout. Second, we train a StyleGAN model to generalize over the UV maps of
displacements. The decomposition into a parametric model and high-quality
vertex displacements allows us to animate the model and modify it
semantically. We demonstrate the results of unconditional generation and
fitting to full or partial observations. The project page is available at
https://seva100.github.io/headcraft.
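
To make the two-stage pipeline concrete, here is a minimal sketch of what inference could look like once both stages are trained: a StyleGAN-like generator produces a UV map of displacements, which is sampled at each template vertex's UV coordinates and added onto the articulated 3DMM surface. All names (`generator`, `flame`) and tensor shapes are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def apply_uv_displacements(template_verts, uv_coords, displacement_map):
    """Offset each template vertex by a 3D displacement sampled from a UV map.

    template_verts:   (V, 3) vertices of the (possibly animated) 3DMM template
    uv_coords:        (V, 2) per-vertex UV coordinates in [0, 1]
    displacement_map: (3, H, W) UV map storing 3D displacement vectors
    """
    # grid_sample expects sampling locations in [-1, 1], shaped (N, H_out, W_out, 2).
    # Note: the exact UV convention (e.g. whether v is flipped) depends on the layout.
    grid = uv_coords[None, None] * 2.0 - 1.0                    # (1, 1, V, 2)
    disp = F.grid_sample(displacement_map[None], grid,
                         mode='bilinear', align_corners=False)  # (1, 3, 1, V)
    return template_verts + disp[0, :, 0].T                     # (V, 3)

# Hypothetical usage; `generator` stands in for the trained StyleGAN and
# `flame` for the articulated parametric head model:
# z = torch.randn(1, 512)
# disp_map = generator(z)[0]        # (3, H, W) displacement UV map
# verts = apply_uv_displacements(flame.verts, flame.uv, disp_map)
```

Because the displacements live in the template's UV space, the same sampled detail map can be reused while the underlying 3DMM is re-posed or re-animated, which is the decomposition the abstract describes.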
Related papers
- Generating Editable Head Avatars with 3D Gaussian GANs [57.51487984425395]
Traditional 3D-aware generative adversarial networks (GANs) achieve photorealistic and view-consistent 3D head synthesis.
We propose a novel approach that enhances the editability and animation control of 3D head avatars by incorporating 3D Gaussian Splatting (3DGS) as an explicit 3D representation (see the sketch after this entry).
Our approach delivers high-quality 3D-aware synthesis with state-of-the-art controllability.
arXiv Detail & Related papers (2024-12-26T10:10:03Z)
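
For context on what "3DGS as an explicit 3D representation" buys: the scene is a finite, directly editable set of Gaussian primitives with optimizable parameters, rather than the weights of an implicit field. The container below is a generic sketch of those parameters under assumed names and dimensions, not this paper's code.

```python
import torch

class GaussianCloud(torch.nn.Module):
    """Per-primitive parameters that 3DGS-style methods optimize. Being
    explicit, individual Gaussians can be selected, edited, or rigged."""
    def __init__(self, num_gaussians: int, sh_degree: int = 3):
        super().__init__()
        n, n_sh = num_gaussians, (sh_degree + 1) ** 2
        self.means = torch.nn.Parameter(torch.randn(n, 3) * 0.01)    # centers
        self.log_scales = torch.nn.Parameter(torch.zeros(n, 3))      # anisotropic extents (log-space)
        self.rotations = torch.nn.Parameter(
            torch.tensor([[1.0, 0.0, 0.0, 0.0]]).repeat(n, 1))       # unit quaternions
        self.opacity_logits = torch.nn.Parameter(torch.zeros(n, 1))  # sigmoid -> [0, 1]
        self.sh = torch.nn.Parameter(torch.zeros(n, n_sh, 3))        # view-dependent color
```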
- Towards Native Generative Model for 3D Head Avatar [20.770534728078623]
We show how to learn a native generative model for a 360° full head from a limited 3D head dataset.
Specifically, three major problems are studied, including how to effectively utilize various representations for generating a 360°-renderable human head.
We hope the proposed models and artist-designed dataset can inspire future research on learning native generative 3D head models from limited 3D datasets.
arXiv Detail & Related papers (2024-10-02T04:04:10Z)
- Head360: Learning a Parametric 3D Full-Head for Free-View Synthesis in 360° [25.86740659962933]
We build a dataset of artist-designed high-fidelity human heads and propose to create a novel parametric head model from it.
Our scheme decouples facial motion/shape from facial appearance, which are represented by a classic parametric 3D mesh model and an attached neural texture (see the sketch after this entry).
Experiments show that facial motions and appearances are well disentangled in the parametric space, leading to SOTA performance in rendering and animating quality.
arXiv Detail & Related papers (2024-08-01T05:46:06Z)
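
An "attached neural texture", as in Head360's decoupled design, typically means learned feature channels stored in UV space that a small decoder turns into color at render time. The sketch below shows that generic lookup-and-decode step; the class, channel counts, and view-direction input are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn.functional as F

class NeuralTexture(torch.nn.Module):
    """Learned multi-channel UV texture plus an MLP decoder mapping sampled
    features (and a view direction) to RGB."""
    def __init__(self, channels: int = 16, resolution: int = 512):
        super().__init__()
        self.texture = torch.nn.Parameter(
            torch.zeros(1, channels, resolution, resolution))
        self.decoder = torch.nn.Sequential(
            torch.nn.Linear(channels + 3, 64), torch.nn.ReLU(),
            torch.nn.Linear(64, 3), torch.nn.Sigmoid(),
        )

    def forward(self, uv, view_dir):
        # uv: (N, 2) in [0, 1]; view_dir: (N, 3) unit vectors
        grid = uv[None, None] * 2.0 - 1.0                      # (1, 1, N, 2)
        feats = F.grid_sample(self.texture, grid, align_corners=False)
        feats = feats[0, :, 0].T                               # (N, channels)
        return self.decoder(torch.cat([feats, view_dir], dim=-1))
```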
- GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction [47.113910048252805]
High-fidelity 3D human head avatars are crucial for applications in VR/AR, digital humans, and film production.
Recent advances have leveraged morphable face models to generate animated head avatars, representing varying identities and expressions.
We introduce the 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head.
arXiv Detail & Related papers (2024-07-21T06:03:11Z)
- HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation [95.58892028614444]
This work presents HeadArtist for 3D head generation from text descriptions.
We develop an efficient pipeline that optimizes a parameterized 3D head model under the supervision of prior distillation (a generic score-distillation step is sketched after this entry).
Experimental results suggest that our approach delivers high-quality 3D head sculptures with adequate geometry and photorealistic appearance.
arXiv Detail & Related papers (2023-12-12T18:59:25Z)
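
HeadArtist's self score distillation is a variant of the standard score distillation sampling (SDS) loss from text-to-3D work. Below is a sketch of a vanilla SDS update under assumed names (`noise_predictor` and its signature are placeholders); the paper's self-distillation differs in how the diffusion prior is queried.

```python
import torch

def sds_step(rendered, noise_predictor, text_emb, alphas_cumprod, optimizer):
    """One vanilla SDS update. `rendered` must be produced by a differentiable
    renderer so that gradients reach the underlying 3D parameters.
    noise_predictor(x_t, t, text_emb) -> predicted noise (frozen diffusion model).
    """
    t = torch.randint(20, 980, (1,), device=rendered.device)   # random timestep
    a_t = alphas_cumprod[t].view(-1, 1, 1, 1)
    noise = torch.randn_like(rendered)
    x_t = a_t.sqrt() * rendered + (1 - a_t).sqrt() * noise     # forward diffusion
    with torch.no_grad():
        eps_hat = noise_predictor(x_t, t, text_emb)
    w = 1.0 - a_t                                              # a common weighting choice
    grad = w * (eps_hat - noise)
    # Detached-residual trick: d(loss)/d(rendered) equals `grad`.
    loss = (grad.detach() * rendered).sum()
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return float(loss)
```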
- HeadGaS: Real-Time Animatable Head Avatars via 3D Gaussian Splatting [11.849852156716171]
HeadGaS uses 3D Gaussian Splats (3DGS) for 3D head reconstruction and animation.
We demonstrate that HeadGaS delivers state-of-the-art results at real-time inference frame rates, surpassing baselines by up to 2 dB.
arXiv Detail & Related papers (2023-12-05T17:19:22Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts; the classical blending it builds on is sketched after this entry.
To register 2D pixels across different frames, we establish a correspondence between canonical feature embeddings that encode 3D points within the canonical space and 2D image features.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
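
For background on the entry above: classical dual quaternion blend skinning (Kavan et al.) blends rigid bone transforms as unit dual quaternions instead of matrices, so the blended result stays a rigid motion, which is what avoids the collapse and candy-wrapper artifacts of linear blend skinning. The NumPy sketch below shows the classical operation that NeuDBS extends with learned components; it is background, not the paper's method.

```python
import numpy as np

def qmul(a, b):
    """Hamilton product of quaternions given as (w, x, y, z)."""
    w1, x1, y1, z1 = a; w2, x2, y2, z2 = b
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def dq_from_rt(q, t):
    """Dual quaternion (real, dual) from unit rotation quaternion q, translation t."""
    return q, 0.5 * qmul(np.array([0.0, *t]), q)

def dqb_skin_point(p, bone_dqs, weights):
    """Skin one point: blend per-bone dual quaternions, renormalize, apply."""
    ref = bone_dqs[0][0]
    real, dual = np.zeros(4), np.zeros(4)
    for (qr, qd), w in zip(bone_dqs, weights):
        if np.dot(qr, ref) < 0.0:     # q and -q encode the same rotation
            qr, qd = -qr, -qd
        real += w * qr
        dual += w * qd
    n = np.linalg.norm(real)
    real, dual = real / n, dual / n   # back to a unit dual quaternion
    w0, v = real[0], real[1:]
    p_rot = p + 2.0 * np.cross(v, np.cross(v, p) + w0 * p)        # rotate p by `real`
    t = 2.0 * qmul(dual, real * np.array([1, -1, -1, -1]))[1:]    # extract translation
    return p_rot + t
```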
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- Combining Implicit Function Learning and Parametric Models for 3D Human Reconstruction [123.62341095156611]
Implicit functions represented as deep learning approximations are powerful for reconstructing 3D surfaces.
However, they can only produce static surfaces that are not controllable, while such control is essential in building flexible models for both computer graphics and computer vision.
We present a methodology that combines detail-rich implicit functions and parametric representations (one generic combination pattern is sketched after this entry).
arXiv Detail & Related papers (2020-07-22T13:46:14Z)
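
The combination described above is typically implemented by conditioning an implicit surface network on features anchored to the parametric model, so that learned detail re-poses with the model. The sketch below shows one generic form of this pattern (nearest-vertex conditioning); the architecture and names are assumptions, not the specific method of the paper.

```python
import torch

class ParametricConditionedSDF(torch.nn.Module):
    """Predict a signed distance for each query point from its offset to the
    nearest template vertex plus that vertex's learned feature, anchoring
    implicit detail to the (re-posable) parametric model."""
    def __init__(self, feat_dim: int = 32):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(3 + feat_dim, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 128), torch.nn.ReLU(),
            torch.nn.Linear(128, 1),
        )

    def forward(self, query, template_verts, vert_feats):
        # query: (N, 3); template_verts: (V, 3); vert_feats: (V, feat_dim)
        dists = torch.cdist(query, template_verts)     # (N, V)
        idx = dists.argmin(dim=1)                      # nearest template vertex
        local = query - template_verts[idx]            # offset in template frame
        return self.mlp(torch.cat([local, vert_feats[idx]], dim=-1))
```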
- Real-time Simultaneous 3D Head Modeling and Facial Motion Capture with an RGB-D camera [2.3260877354419254]
We propose a method to build in real-time animated 3D head models using a consumer-grade RGB-D camera.
Anyone's head can be instantly reconstructed and their facial motion captured without requiring any training or pre-scanning.
arXiv Detail & Related papers (2020-04-22T13:22:21Z)