Learning Personalized High Quality Volumetric Head Avatars from
Monocular RGB Videos
- URL: http://arxiv.org/abs/2304.01436v1
- Date: Tue, 4 Apr 2023 01:10:04 GMT
- Title: Learning Personalized High Quality Volumetric Head Avatars from
Monocular RGB Videos
- Authors: Ziqian Bai, Feitong Tan, Zeng Huang, Kripasindhu Sarkar, Danhang Tang,
Di Qiu, Abhimitra Meka, Ruofei Du, Mingsong Dou, Sergio Orts-Escolano, Rohit
Pandey, Ping Tan, Thabo Beeler, Sean Fanello, Yinda Zhang
- Abstract summary: We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
- Score: 47.94545609011594
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a method to learn a high-quality implicit 3D head avatar from a
monocular RGB video captured in the wild. The learnt avatar is driven by a
parametric face model to achieve user-controlled facial expressions and head
poses. Our hybrid pipeline combines the geometry prior and dynamic tracking of
a 3DMM with a neural radiance field to achieve fine-grained control and
photorealism. To reduce over-smoothing and improve out-of-model expression
synthesis, we propose to predict local features anchored on the 3DMM geometry.
These learnt features are driven by 3DMM deformation and interpolated in 3D
space to yield the volumetric radiance at a designated query point. We further
show that using a Convolutional Neural Network in the UV space is critical in
incorporating spatial context and producing representative local features.
Extensive experiments show that we are able to reconstruct high-quality
avatars, with more accurate expression-dependent details, good generalization
to out-of-training expressions, and quantitatively superior renderings compared
to other state-of-the-art approaches.
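To make the anchored-feature idea above concrete, here is a minimal sketch in PyTorch. It is an illustration, not the authors' implementation: the network layout, the inverse-distance k-nearest-neighbor interpolation, and all names (uv_cnn, query_radiance, decoder) are our assumptions.

    import torch
    import torch.nn as nn

    # Hypothetical UV-space CNN: turns 3DMM-conditioned inputs rasterized into
    # UV space into a map of local features, one vector per texel.
    uv_cnn = nn.Sequential(
        nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 32, 3, padding=1),
    )

    def query_radiance(query_xyz, anchor_xyz, anchor_feat, decoder, k=8):
        """Interpolate features anchored on the deformed 3DMM geometry at 3D
        query points, then decode to color and density with a small MLP.
        query_xyz: (N, 3)  anchor_xyz: (V, 3)  anchor_feat: (V, C)."""
        d = torch.cdist(query_xyz, anchor_xyz)             # (N, V) distances
        knn_d, knn_i = d.topk(k, dim=1, largest=False)     # k nearest anchors
        w = 1.0 / (knn_d + 1e-8)
        w = w / w.sum(dim=1, keepdim=True)                 # inverse-distance weights
        feat = (anchor_feat[knn_i] * w.unsqueeze(-1)).sum(dim=1)   # (N, C)
        return decoder(torch.cat([query_xyz, feat], dim=-1))       # (N, 4) rgb + density

In the paper, the anchors follow the 3DMM deformation and the features come from the UV-space CNN; the inverse-distance k-NN weighting above merely stands in for whatever interpolation scheme the method actually uses.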
Related papers
- Expressive Gaussian Human Avatars from Monocular RGB Video [69.56388194249942]
We introduce EVA, a drivable human model that meticulously sculpts fine details based on 3D Gaussians and SMPL-X.
We highlight the critical importance of aligning the SMPL-X model with RGB frames for effective avatar learning.
We propose a context-aware adaptive density control strategy that adaptively adjusts the gradient thresholds (see the sketch after this entry).
arXiv Detail & Related papers (2024-07-03T15:36:27Z)
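The density control mentioned above builds on the densification rule of 3D Gaussian splatting, where Gaussians whose accumulated view-space positional gradient exceeds a threshold get split or cloned. A minimal sketch of making that threshold adaptive; the context signal and every name here are our assumptions, not EVA's actual rule:

    import torch

    def densify_mask(grad_accum, base_thresh=2e-4, context=None):
        """Select Gaussians to densify. grad_accum: (G,) accumulated view-space
        gradient magnitudes; context: optional (G,) hypothetical per-Gaussian
        detail score used to scale the threshold adaptively."""
        thresh = torch.full_like(grad_accum, base_thresh)
        if context is not None:
            thresh = base_thresh / (1.0 + context)  # more detail -> lower bar
        return grad_accum > thresh                  # True -> split or clone

The 2e-4 default is the fixed threshold used in the original 3D Gaussian splatting code; EVA's point is precisely that a single fixed value is suboptimal.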
- NPGA: Neural Parametric Gaussian Avatars [46.52887358194364]
We propose a data-driven approach to create high-fidelity controllable avatars from multi-view video recordings.
We build our method around 3D Gaussian splatting for its highly efficient rendering and to inherit the topological flexibility of point clouds.
We evaluate our method on the public NeRSemble dataset, demonstrating that NPGA significantly outperforms the previous state-of-the-art avatars on the self-reenactment task by 2.6 dB PSNR (the metric is recalled after this entry).
arXiv Detail & Related papers (2024-05-29T17:58:09Z)
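For reference, PSNR (peak signal-to-noise ratio) measures rendering fidelity against ground truth through the mean squared error, in decibels, so higher is better:

    PSNR = 10 * log10(MAX_I^2 / MSE)

where MSE is the mean squared pixel error between the rendered and ground-truth images and MAX_I is the peak pixel value (1.0 for normalized images). On this scale a 2.6 dB gap is a substantial margin.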
- VOODOO 3D: Volumetric Portrait Disentanglement for One-Shot 3D Head Reenactment [17.372274738231443]
We present a 3D-aware one-shot head reenactment method based on a fully neural disentanglement framework for source appearance and driver expressions.
Our method is real-time and produces high-fidelity and view-consistent output, suitable for 3D teleconferencing systems based on holographic displays.
arXiv Detail & Related papers (2023-12-07T19:19:57Z)
- Gaussian3Diff: 3D Gaussian Diffusion for 3D Full Head Synthesis and Editing [53.05069432989608]
We present a novel framework for generating 3D human heads with remarkable flexibility.
Our method facilitates the creation of diverse and realistic 3D human heads with fine-grained editing over facial features and expressions.
arXiv Detail & Related papers (2023-12-05T19:05:58Z)
- Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose the Neural Point-based Volumetric Avatar (NPVA), a method that adopts a neural point representation and a neural volume-rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map (see the sketch after this entry).
By design, NPVA is better equipped to handle topologically changing regions and thin structures, while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z)
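The surface-guided point placement described above can be pictured as follows: a minimal sketch assuming a base head mesh whose per-texel positions and normals have been rasterized into UV space. The names, resolutions, and the choice to displace along the normal are illustrative, not NPVA's actual code.

    import torch

    def place_neural_points(base_pos_uv, base_nrm_uv, disp_uv):
        """Lift a UV displacement map to 3D neural point positions.
        base_pos_uv: (H, W, 3) surface positions of the target expression,
        base_nrm_uv: (H, W, 3) surface normals, both rasterized into UV space.
        disp_uv:     (H, W, 1) predicted scalar displacement per texel."""
        points = base_pos_uv + disp_uv * base_nrm_uv   # offset along normals
        return points.reshape(-1, 3)                   # one neural point per texel

A high-resolution map (e.g. 256 x 256 texels) then yields a dense point cloud that hugs the expressive surface, which is what keeps the volume rendering both efficient and well controlled.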
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate the 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)
- Learning Neural Radiance Fields from Multi-View Geometry [1.1011268090482573]
We present a framework, called MVG-NeRF, that combines Multi-View Geometry algorithms and Neural Radiance Fields (NeRF) for image-based 3D reconstruction.
NeRF has revolutionized the field of implicit 3D representations, mainly due to a differentiable rendering formulation that enables high-quality and geometry-aware novel view synthesis (the discrete form is sketched after this entry).
arXiv Detail & Related papers (2022-10-24T08:53:35Z)
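The differentiable rendering formulation referred to above is standard NeRF volume rendering: colors and densities sampled along a ray are alpha-composited into a pixel. A minimal sketch of the discrete quadrature (variable names are ours):

    import torch

    def render_ray(rgb, sigma, deltas):
        """Composite per-sample color and density along one ray.
        rgb: (S, 3) colors, sigma: (S,) densities, deltas: (S,) spacings."""
        alpha = 1.0 - torch.exp(-sigma * deltas)            # per-sample opacity
        trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)   # survival probability
        trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])  # T_1 = 1
        weights = alpha * trans
        return (weights.unsqueeze(-1) * rgb).sum(dim=0)     # (3,) pixel color

Because every step is differentiable, a photometric loss on the output pixel trains the underlying field end to end.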
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- HVTR: Hybrid Volumetric-Textural Rendering for Human Avatars [65.82222842213577]
We propose a novel neural rendering pipeline, which synthesizes virtual human avatars from arbitrary poses efficiently and at high quality.
First, we learn to encode articulated human motions on a dense UV manifold of the human body surface.
We then leverage the encoded information on the UV manifold to construct a 3D volumetric representation (one reading of this step is sketched after this entry).
arXiv Detail & Related papers (2021-12-19T17:34:15Z)
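One way to read the two UV steps above: pose-dependent features live on the body's UV parameterization, and a 3D point is served by projecting it onto the posed surface and looking up the feature at the corresponding texel. A minimal sketch under those assumptions (not HVTR's actual code):

    import torch

    def sample_uv_feature(query_xyz, surf_xyz, surf_uv, feat_uv):
        """Fetch a UV-manifold feature for each 3D query point.
        query_xyz: (N, 3); surf_xyz: (M, 3) posed surface samples;
        surf_uv: (M, 2) their UV coords in [0, 1]; feat_uv: (C, H, W) map."""
        d = torch.cdist(query_xyz, surf_xyz)        # distance to surface samples
        idx = d.argmin(dim=1)                       # nearest surface point
        uv = surf_uv[idx]                           # (N, 2) UV of that point
        H, W = feat_uv.shape[1:]
        u = (uv[:, 0] * (W - 1)).long().clamp(0, W - 1)
        v = (uv[:, 1] * (H - 1)).long().clamp(0, H - 1)
        return feat_uv[:, v, u].t()                 # (N, C) features for the query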