Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
- URL: http://arxiv.org/abs/2303.15892v1
- Date: Tue, 28 Mar 2023 11:12:26 GMT
- Title: Head3D: Complete 3D Head Generation via Tri-plane Feature Distillation
- Authors: Yuhao Cheng, Yichao Yan, Wenhan Zhu, Ye Pan, Bowen Pan, Xiaokang Yang
- Abstract summary: Current full head generation methods require a large number of 3D scans or multi-view images to train the model.
We propose Head3D, a method to generate full 3D heads with limited multi-view images.
Our model achieves cost-efficient and diverse complete head generation with photo-realistic renderings and high-quality geometry representations.
- Score: 56.267877301135634
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Head generation with diverse identities is an important task in computer vision and computer graphics, widely used in multimedia applications. However, current full head generation methods require a large number of 3D scans or multi-view images to train the model, resulting in high data acquisition costs. To address this issue, we propose Head3D, a method that generates full 3D heads from limited multi-view images. Specifically, our approach first extracts facial priors represented by the tri-planes learned in EG3D, a 3D-aware generative model, and then applies feature distillation to extend the 3D frontal faces into complete heads without compromising head integrity. To mitigate the domain gap between the face and head models, we present dual discriminators that guide frontal and back head generation, respectively. Our model achieves cost-efficient and diverse complete head generation with photo-realistic renderings and high-quality geometry representations. Extensive experiments demonstrate the effectiveness of the proposed Head3D, both qualitatively and quantitatively.
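The tri-plane prior that Head3D distills from EG3D is straightforward to illustrate: each 3D query point is projected onto three axis-aligned feature planes, features are bilinearly interpolated from each plane, and the three results are aggregated before being decoded into density and color for volume rendering. Below is a minimal PyTorch sketch of this EG3D-style sampling; the plane resolution, channel count, and mean aggregation are illustrative assumptions, not Head3D's exact configuration.

```python
import torch
import torch.nn.functional as F

def sample_triplane(planes: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
    """Query tri-plane features at 3D points.

    planes: (B, 3, C, H, W) feature planes, ordered XY, XZ, YZ.
    points: (B, N, 3) coordinates normalized to [-1, 1].
    Returns (B, N, C) aggregated features.
    """
    projections = (
        points[..., [0, 1]],  # project onto the XY plane
        points[..., [0, 2]],  # project onto the XZ plane
        points[..., [1, 2]],  # project onto the YZ plane
    )
    feats = []
    for i, uv in enumerate(projections):
        grid = uv.unsqueeze(1)                       # (B, 1, N, 2)
        f = F.grid_sample(planes[:, i], grid,        # (B, C, 1, N)
                          mode="bilinear", padding_mode="zeros",
                          align_corners=False)
        feats.append(f.squeeze(2).permute(0, 2, 1))  # (B, N, C)
    return torch.stack(feats, dim=0).mean(dim=0)     # mean over the 3 planes

# Usage: query 4096 random points against 32-channel, 256x256 planes.
planes = torch.randn(2, 3, 32, 256, 256)
points = torch.rand(2, 4096, 3) * 2 - 1
features = sample_triplane(planes, points)  # -> (2, 4096, 32)
```

In EG3D these aggregated features feed a lightweight MLP decoder for volume rendering; it is this tri-plane representation that Head3D transfers from the frontal-face model to the complete head.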
Related papers
- Towards Native Generative Model for 3D Head Avatar [20.770534728078623]
We show how to learn a native generative model for 360$^{\circ}$ full heads from a limited 3D head dataset.
Specifically, three major problems are studied, including how to effectively utilize various representations for generating the 360$^{\circ}$-renderable human head.
We hope the proposed models and artist-designed dataset can inspire future research on learning native generative 3D head models from limited 3D datasets.
arXiv Detail & Related papers (2024-10-02T04:04:10Z)
- GPHM: Gaussian Parametric Head Model for Monocular Head Avatar Reconstruction [47.113910048252805]
High-fidelity 3D human head avatars are crucial for applications in VR/AR, digital human, and film production.
Recent advances have leveraged morphable face models to generate animated head avatars, representing varying identities and expressions.
We introduce 3D Gaussian Parametric Head Model, which employs 3D Gaussians to accurately represent the complexities of the human head.
arXiv Detail & Related papers (2024-07-21T06:03:11Z)
- ID-to-3D: Expressive ID-guided 3D Heads via Score Distillation Sampling [96.87575334960258]
ID-to-3D is a method to generate identity- and text-guided 3D human heads with disentangled expressions.
Results achieve an unprecedented level of identity-consistent and high-quality texture and geometry generation.
arXiv Detail & Related papers (2024-05-26T13:36:45Z)
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z)
- PanoHead: Geometry-Aware 3D Full-Head Synthesis in 360$^{\circ}$ [17.355141949293852]
Existing 3D generative adversarial networks (GANs) for 3D human head synthesis are either limited to near-frontal views or struggle to preserve 3D consistency at large view angles.
We propose PanoHead, the first 3D-aware generative model that enables high-quality view-consistent image synthesis of full heads in $360^{\circ}$ with diverse appearance and detailed geometry.
arXiv Detail & Related papers (2023-03-23T06:54:34Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Prior-Guided Multi-View 3D Head Reconstruction [28.126115947538572]
Previous multi-view stereo methods struggle with low-frequency structures, producing unclear head shapes and inaccurate reconstruction in hair regions.
To tackle this problem, we propose a prior-guided implicit neural rendering network.
The utilization of these priors can improve the reconstruction accuracy and robustness, leading to a high-quality integrated 3D head model.
arXiv Detail & Related papers (2021-07-09T07:43:56Z)