DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance
Fields for Articulated Avatars
- URL: http://arxiv.org/abs/2203.15798v1
- Date: Tue, 29 Mar 2022 17:59:15 GMT
- Title: DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance
Fields for Articulated Avatars
- Authors: Amit Raj, Umar Iqbal, Koki Nagano, Sameh Khamis, Pavlo Molchanov,
James Hays, Jan Kautz
- Abstract summary: We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both the 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
- Score: 92.37436369781692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Acquisition and creation of digital human avatars is an important problem
with applications to virtual telepresence, gaming, and human modeling. Most
contemporary approaches for avatar generation can be viewed either as 3D-based
methods, which use multi-view data to learn a 3D representation with appearance
(such as a mesh, implicit surface, or volume), or as 2D-based methods, which
learn photo-realistic renderings of avatars but lack accurate 3D representations. In
this work, we present DRaCoN, a framework for learning full-body volumetric
avatars that exploits the advantages of both 2D and 3D neural rendering
techniques. It consists of a Differentiable Rasterization module, DiffRas, that
synthesizes a low-resolution version of the target image along with additional
latent features guided by a parametric body model. The output of DiffRas is
then used as conditioning to our conditional neural 3D representation module
(c-NeRF) which generates the final high-res image along with body geometry
using volumetric rendering. While DiffRas helps in obtaining photo-realistic
image quality, c-NeRF, which employs signed distance fields (SDF) for 3D
representations, helps to obtain fine 3D geometric details. Experiments on the
challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms
state-of-the-art methods both in terms of error metrics and visual quality.
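To make the two-stage design concrete, below is a minimal sketch of the conditioning flow described in the abstract (PyTorch-style; all module names, tensor shapes, and the crude feature-projection step are illustrative assumptions, not the authors' implementation):

import torch
import torch.nn as nn

class DiffRas(nn.Module):
    """Stage 1 (sketch): turn a rasterized parametric body render into a
    low-resolution image plus screen-space latent features."""
    def __init__(self, feat_dim=16):
        super().__init__()
        # Stand-in for a differentiable rasterizer + shallow CNN head.
        self.head = nn.Conv2d(3, 3 + feat_dim, kernel_size=3, padding=1)

    def forward(self, body_render):           # (B, 3, H/4, W/4)
        out = self.head(body_render)
        rgb_lowres, latents = out[:, :3], out[:, 3:]
        return rgb_lowres, latents

class ConditionalNeRF(nn.Module):
    """Stage 2 (sketch): an SDF-based neural field conditioned on DiffRas
    features; volume rendering of the SDF yields the final high-res image."""
    def __init__(self, feat_dim=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1 + 3),          # SDF value + RGB
        )

    def forward(self, points, pixel_feats):   # (N, 3), (N, feat_dim)
        out = self.mlp(torch.cat([points, pixel_feats], dim=-1))
        sdf, rgb = out[..., :1], torch.sigmoid(out[..., 1:])
        return sdf, rgb

# Usage sketch: condition per-ray samples on the stage-1 latent features.
diffras, cnerf = DiffRas(), ConditionalNeRF()
body_render = torch.rand(1, 3, 64, 64)         # placeholder rasterized body
rgb_lowres, latents = diffras(body_render)
points = torch.rand(1024, 3)                   # ray samples in body space
# A real pipeline would project each 3D sample into the feature map; a
# global average is used here purely as a self-contained placeholder.
feats = latents.mean(dim=(2, 3)).expand(1024, -1)
sdf, rgb = cnerf(points, feats)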
Related papers
- DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans [38.8751809679184]
We present DiffHuman, a probabilistic method for 3D human reconstruction from a single RGB image.
Our experiments show that DiffHuman can produce diverse and detailed reconstructions for the parts of the person that are unseen or uncertain in the input image.
arXiv Detail & Related papers (2024-03-30T22:28:29Z)
- En3D: An Enhanced Generative Model for Sculpting 3D Humans from 2D Synthetic Data [36.51674664590734]
We present En3D, an enhanced generative scheme for sculpting high-quality 3D human avatars.
Unlike previous works that rely on scarce 3D datasets or limited 2D collections with imbalanced viewing angles and pose priors, our approach aims to develop a zero-shot 3D generative scheme capable of producing visually realistic 3D humans.
arXiv Detail & Related papers (2024-01-02T12:06:31Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representations.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, demonstrating its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- Articulated 3D Head Avatar Generation using Text-to-Image Diffusion Models [107.84324544272481]
The ability to generate diverse 3D articulated head avatars is vital to a plethora of applications, including augmented reality, cinematography, and education.
Recent work on text-guided 3D object generation has shown great promise in addressing these needs.
We show that our diffusion-based articulated head avatars outperform state-of-the-art approaches for this task.
arXiv Detail & Related papers (2023-07-10T19:15:32Z)
- AG3D: Learning to Generate 3D Avatars from 2D Image Collections [96.28021214088746]
We propose a new adversarial generative model of realistic 3D people from 2D images.
Our method captures shape and deformation of the body and loose clothing by adopting a holistic 3D generator.
We experimentally find that our method outperforms previous 3D- and articulation-aware methods in terms of geometry and appearance.
arXiv Detail & Related papers (2023-05-03T17:56:24Z)
- Learning Personalized High Quality Volumetric Head Avatars from Monocular RGB Videos [47.94545609011594]
We propose a method to learn a high-quality implicit 3D head avatar from a monocular RGB video captured in the wild.
Our hybrid pipeline combines the geometry prior and dynamic tracking of a 3DMM with a neural radiance field to achieve fine-grained control and photorealism.
arXiv Detail & Related papers (2023-04-04T01:10:04Z)
- HQ3DAvatar: High Quality Controllable 3D Head Avatar [65.70885416855782]
This paper presents a novel approach to building highly photorealistic digital head avatars.
Our method learns a canonical space via an implicit function parameterized by a neural network.
At test time, our method is driven by a monocular RGB video.
arXiv Detail & Related papers (2023-03-25T13:56:33Z)
- Next3D: Generative Neural Texture Rasterization for 3D-Aware Head Avatars [36.4402388864691]
3D-aware generative adversarial networks (GANs) synthesize high-fidelity and multi-view-consistent facial images using only collections of single-view 2D imagery.
Recent efforts incorporate the 3D Morphable Face Model (3DMM) to describe deformation in generative radiance fields, either explicitly or implicitly.
We propose a novel 3D GAN framework for unsupervised learning of generative, high-quality and 3D-consistent facial avatars from unstructured 2D images.
arXiv Detail & Related papers (2022-11-21T06:40:46Z)