Head360: Learning a Parametric 3D Full-Head for Free-View Synthesis in 360°
- URL: http://arxiv.org/abs/2408.00296v1
- Date: Thu, 1 Aug 2024 05:46:06 GMT
- Title: Head360: Learning a Parametric 3D Full-Head for Free-View Synthesis in 360°
- Authors: Yuxiao He, Yiyu Zhuang, Yanwen Wang, Yao Yao, Siyu Zhu, Xiaoyu Li, Qi Zhang, Xun Cao, Hao Zhu
- Abstract summary: We build a dataset of artist-designed high-fidelity human heads and propose to create a novel parametric head model from it.
Our scheme decouples the facial motion/shape and facial appearance, which are represented by a classic parametric 3D mesh model and an attached neural texture, respectively.
Experiments show that facial motions and appearances are well disentangled in the parametric space, leading to SOTA performance in rendering and animating quality.
- Score: 25.86740659962933
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating a 360° parametric model of a human head is a very challenging task. While recent advancements have demonstrated the efficacy of leveraging synthetic data for building such parametric head models, their performance remains inadequate in crucial areas such as expression-driven animation, hairstyle editing, and text-based modifications. In this paper, we build a dataset of artist-designed high-fidelity human heads and propose to create a novel 360° renderable parametric head model from it. Our scheme decouples the facial motion/shape and facial appearance, which are represented by a classic parametric 3D mesh model and an attached neural texture, respectively. We further propose a training method for decomposing hairstyle and facial appearance, allowing free swapping of hairstyles. A novel inversion fitting method is presented based on single-image input with high generalization and fidelity. To the best of our knowledge, our model is the first parametric 3D full-head model that achieves 360° free-view synthesis, image-based fitting, appearance editing, and animation within a single model. Experiments show that facial motions and appearances are well disentangled in the parametric space, leading to SOTA performance in rendering and animating quality. The code and SynHead100 dataset are released at https://nju-3dv.github.io/projects/Head360.
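To make the decoupling described in the abstract concrete, the PyTorch-style sketch below illustrates one minimal, hypothetical way to pair a classic linear parametric mesh (shape/expression codes) with an attached neural texture (a learnable UV feature map plus a small decoder). The module names, dimensions, and decoder layout are assumptions for illustration only, not the authors' released code.

```python
# Minimal sketch (not the authors' implementation) of a decoupled head representation:
# geometry from a linear blendshape mesh, appearance from a neural texture in UV space.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ParametricMesh(nn.Module):
    """Linear blendshape mesh: vertices = mean + shape offsets + expression offsets."""
    def __init__(self, n_verts=5000, n_shape=100, n_expr=50):
        super().__init__()
        self.mean = nn.Parameter(torch.zeros(n_verts, 3))
        self.shape_basis = nn.Parameter(torch.zeros(n_shape, n_verts, 3))
        self.expr_basis = nn.Parameter(torch.zeros(n_expr, n_verts, 3))

    def forward(self, shape_code, expr_code):
        # Geometry is driven only by the parametric codes (B, n_verts, 3).
        return (self.mean
                + torch.einsum('bs,svc->bvc', shape_code, self.shape_basis)
                + torch.einsum('be,evc->bvc', expr_code, self.expr_basis))

class NeuralTexture(nn.Module):
    """Appearance as a learnable feature image in UV space plus a small CNN decoder."""
    def __init__(self, channels=16, res=256):
        super().__init__()
        self.features = nn.Parameter(torch.randn(1, channels, res, res) * 0.01)
        self.decoder = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())

    def forward(self, uv):
        # uv: per-pixel UV coordinates in [-1, 1], shape (B, H, W, 2),
        # as produced by rasterizing the mesh into screen space.
        feats = F.grid_sample(self.features.expand(uv.shape[0], -1, -1, -1),
                              uv, mode='bilinear', align_corners=True)
        return self.decoder(feats)  # (B, 3, H, W) decoded appearance

# Usage: geometry and appearance are controlled by independent parameters, so an
# expression can be animated without touching the texture, and vice versa.
mesh, tex = ParametricMesh(), NeuralTexture()
verts = mesh(torch.zeros(1, 100), torch.zeros(1, 50))   # neutral head geometry
image = tex(torch.rand(1, 128, 128, 2) * 2 - 1)         # stand-in UV map for one view
print(verts.shape, image.shape)
```

In this toy setup the disentanglement is structural: the mesh module never sees appearance parameters and the texture module never sees shape or expression codes, which mirrors (in spirit only) the separation the paper reports between motion and appearance in its parametric space.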
Related papers
- Synthetic Prior for Few-Shot Drivable Head Avatar Inversion [61.51887011274453]
We present SynShot, a novel method for the few-shot inversion of a drivable head avatar based on a synthetic prior.
Inspired by machine learning models trained solely on synthetic data, we propose a method that learns a prior model from a large dataset of synthetic heads.
We model the head avatar using 3D Gaussian splatting and a convolutional encoder-decoder that outputs Gaussian parameters in UV texture space.
arXiv Detail & Related papers (2025-01-12T19:01:05Z)
- FaceLift: Single Image to 3D Head with View Generation and GS-LRM [54.24070918942727]
FaceLift is a feed-forward approach for rapid, high-quality, 360-degree head reconstruction from a single image.
We show that FaceLift outperforms state-of-the-art methods in 3D head reconstruction, highlighting its practical applicability and robust performance on real-world images.
arXiv Detail & Related papers (2024-12-23T18:59:49Z)
- GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction [47.113910048252805]
High-fidelity 3D human head avatars are crucial for applications in VR/AR, digital human, and film production.
Recent advances have leveraged morphable face models to generate animated head avatars, representing varying identities and expressions.
We introduce 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head.
arXiv Detail & Related papers (2024-07-21T06:03:11Z)
- MeGA: Hybrid Mesh-Gaussian Head Avatar for High-Fidelity Rendering and Head Editing [34.31657241047574]
We propose a Hybrid Mesh-Gaussian Head Avatar (MeGA) that models different head components with more suitable representations.
MeGA generates higher-fidelity renderings for the whole head and naturally supports more downstream tasks.
Experiments on the NeRSemble dataset demonstrate the effectiveness of our designs.
arXiv Detail & Related papers (2024-04-29T18:10:12Z)
- HeadCraft: Modeling High-Detail Shape Variations for Animated 3DMMs [9.239372828746152]
Current advances in human head modeling allow the generation of plausible-looking 3D head models via neural representations.
We present a generative model for detailed 3D head meshes on top of an articulated 3DMM.
We train a StyleGAN model, which we refer to as HeadCraft, to generalize over the UV maps of displacements.
arXiv Detail & Related papers (2023-12-21T18:57:52Z)
- HeadArtist: Text-conditioned 3D Head Generation with Self Score Distillation [95.58892028614444]
This work presents HeadArtist for 3D head generation from text descriptions.
We come up with an efficient pipeline that optimizes a parameterized 3D head model under the supervision of prior distillation.
Experimental results suggest that our approach delivers high-quality 3D head sculptures with adequate geometry and photorealistic appearance.
arXiv Detail & Related papers (2023-12-12T18:59:25Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)