Learning Compositional Radiance Fields of Dynamic Human Heads
- URL: http://arxiv.org/abs/2012.09955v1
- Date: Thu, 17 Dec 2020 22:19:27 GMT
- Title: Learning Compositional Radiance Fields of Dynamic Human Heads
- Authors: Ziyan Wang, Timur Bagautdinov, Stephen Lombardi, Tomas Simon, Jason Saragih, Jessica Hodgins, Michael Zollhöfer
- Abstract summary: We propose a novel compositional 3D representation that combines the best of previous methods to produce both higher-resolution and faster results.
Differentiable volume rendering is employed to compute photo-realistic novel views of the human head and upper body.
Our approach achieves state-of-the-art results for synthesizing novel views of dynamic human heads and the upper body.
- Score: 13.272666180264485
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Photorealistic rendering of dynamic humans is an important ability for
telepresence systems, virtual shopping, synthetic data generation, and more.
Recently, neural rendering methods, which combine techniques from computer
graphics and machine learning, have created high-fidelity models of humans and
objects. Some of these methods do not produce results with high-enough fidelity
for driveable human models (Neural Volumes) whereas others have extremely long
rendering times (NeRF). We propose a novel compositional 3D representation that
combines the best of previous methods to produce both higher-resolution and
faster results. Our representation bridges the gap between discrete and
continuous volumetric representations by combining a coarse 3D-structure-aware
grid of animation codes with a continuous learned scene function that maps
every position and its corresponding local animation code to its view-dependent
emitted radiance and local volume density. Differentiable volume rendering is
employed to compute photo-realistic novel views of the human head and upper
body as well as to train our novel representation end-to-end using only 2D
supervision. In addition, we show that the learned dynamic radiance field can
be used to synthesize novel unseen expressions based on a global animation
code. Our approach achieves state-of-the-art results for synthesizing novel
views of dynamic human heads and the upper body.
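To make the representation concrete, below is a minimal PyTorch sketch, not the authors' implementation, of the two ingredients the abstract describes: a coarse 3D grid of local animation codes queried by trilinear interpolation, a continuous scene function mapping each position and its local code to view-dependent radiance and volume density, and standard differentiable volume rendering. The grid resolution, code dimensionality, and MLP layout are illustrative assumptions, and the paper additionally predicts the code grid from a global animation code, which is omitted here.

```python
# Minimal sketch, assuming PyTorch; all hyperparameters are illustrative, not the paper's.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CompositionalRadianceField(nn.Module):
    def __init__(self, grid_res=16, code_dim=8, hidden=128):
        super().__init__()
        # Coarse 3D-structure-aware grid of local animation codes. In the paper this
        # grid is produced from a global animation code; a free learnable parameter
        # stands in for that machinery here (an assumption).
        self.codes = nn.Parameter(0.01 * torch.randn(1, code_dim, grid_res, grid_res, grid_res))
        # Continuous scene function: (position, view direction, local code)
        # -> (view-dependent RGB radiance, volume density).
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 + code_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),
        )

    def forward(self, xyz, viewdir):
        # xyz, viewdir: (num_rays, samples_per_ray, 3), positions in [-1, 1]^3.
        n, s, _ = xyz.shape
        grid = xyz.view(1, n, s, 1, 3)                               # 5D coords for grid_sample
        local = F.grid_sample(self.codes, grid, align_corners=True)  # (1, C, n, s, 1)
        local = local.squeeze(0).squeeze(-1).permute(1, 2, 0)        # (n, s, C)
        out = self.mlp(torch.cat([xyz, viewdir, local], dim=-1))
        return torch.sigmoid(out[..., :3]), F.relu(out[..., 3])      # rgb, sigma

def volume_render(rgb, sigma, deltas):
    # Standard differentiable volume rendering: alpha-composite samples along each ray.
    alpha = 1.0 - torch.exp(-sigma * deltas)                          # (n, s)
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=-1)
    trans = torch.cat([torch.ones_like(trans[..., :1]), trans[..., :-1]], dim=-1)
    weights = alpha * trans                                           # contribution per sample
    return (weights.unsqueeze(-1) * rgb).sum(dim=-2)                  # (n, 3) pixel colors

# Toy usage: 4 rays with 32 samples each, uniform step size.
xyz = torch.rand(4, 32, 3) * 2 - 1
dirs = F.normalize(torch.randn(4, 32, 3), dim=-1)
model = CompositionalRadianceField()
rgb, sigma = model(xyz, dirs)
pixels = volume_render(rgb, sigma, deltas=torch.full((4, 32), 0.02))
```

Because the entire pipeline is differentiable, a photometric loss on the rendered pixels against 2D images is enough to train both the code grid and the scene function end-to-end, as the abstract states.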
Related papers
- Human Gaussian Splatting: Real-time Rendering of Animatable Avatars [8.719797382786464]
This work addresses the problem of real-time rendering of photorealistic human body avatars learned from multi-view videos.
We propose an animatable human model based on 3D Gaussian Splatting, which has recently emerged as a very efficient alternative to neural radiance fields.
Our method achieves a 1.5 dB PSNR improvement over the state of the art on the THuman4 dataset while rendering in real time (80 fps at 512x512 resolution).
arXiv Detail & Related papers (2023-11-28T12:05:41Z)
- HDHumans: A Hybrid Approach for High-fidelity Digital Humans [107.19426606778808]
HDHumans is the first method for HD human character synthesis that jointly produces an accurate and temporally coherent 3D deforming surface.
Our method is carefully designed to achieve a synergy between classical surface deformation and neural radiance fields (NeRF).
arXiv Detail & Related papers (2022-10-21T14:42:11Z)
- Human Performance Modeling and Rendering via Neural Animated Mesh [40.25449482006199]
In this paper, we present a novel approach for rendering human views from video, bridging traditional mesh representations with a new class of neural rendering techniques.
We demonstrate our approach on various platforms, such as inserting virtual human performances into AR headsets.
arXiv Detail & Related papers (2022-09-18T03:58:00Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images [32.84481902544513]
This paper deals with rendering novel views and novel poses for a person unseen in training, using only multiview images as input.
The key ingredient is a dedicated representation combining a canonical NeRF with a volume deformation scheme (see the sketch after this list).
Experiments on both real and synthetic data, covering the novel view synthesis and pose animation tasks, collectively demonstrate the efficacy of our method.
arXiv Detail & Related papers (2022-03-31T08:09:03Z)
- DRaCoN -- Differentiable Rasterization Conditioned Neural Radiance Fields for Articulated Avatars [92.37436369781692]
We present DRaCoN, a framework for learning full-body volumetric avatars.
It exploits the advantages of both 2D and 3D neural rendering techniques.
Experiments on the challenging ZJU-MoCap and Human3.6M datasets indicate that DRaCoN outperforms state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T17:59:15Z)
- 3D Neural Scene Representations for Visuomotor Control [78.79583457239836]
We learn models for dynamic 3D scenes purely from 2D visual observations.
A dynamics model, constructed over the learned representation space, enables visuomotor control for challenging manipulation tasks.
arXiv Detail & Related papers (2021-07-08T17:49:37Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state of the art on playback as well as novel pose synthesis, and can even generalize well to new poses that differ starkly from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- S3: Neural Shape, Skeleton, and Skinning Fields for 3D Human Modeling [103.65625425020129]
We represent the pedestrian's shape, pose and skinning weights as neural implicit functions that are directly learned from data.
We demonstrate the effectiveness of our approach on various datasets and show that our reconstructions outperform existing state-of-the-art methods.
arXiv Detail & Related papers (2021-01-17T02:16:56Z)
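The MPS-NeRF entry above mentions combining a canonical NeRF with a volume deformation scheme. Below is a minimal, generic sketch of that pattern, assuming PyTorch; it is not MPS-NeRF's actual architecture, and the pose-code conditioning and network sizes are illustrative assumptions. Observation-space samples are warped into a shared canonical space, where a single pose-independent radiance field is queried.

```python
# Generic canonical-NeRF-plus-deformation sketch (an assumption, not MPS-NeRF itself).
import torch
import torch.nn as nn

class DeformedCanonicalField(nn.Module):
    def __init__(self, pose_dim=16, hidden=128):
        super().__init__()
        # Deformation field: (observed point, pose code) -> offset into canonical space.
        self.deform = nn.Sequential(
            nn.Linear(3 + pose_dim, hidden), nn.ReLU(), nn.Linear(hidden, 3))
        # Canonical radiance field: canonical point -> (rgb, density).
        self.canonical = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(), nn.Linear(hidden, 4))

    def forward(self, x_obs, pose):
        # x_obs: (N, 3) samples in observation space; pose: (N, pose_dim).
        x_can = x_obs + self.deform(torch.cat([x_obs, pose], dim=-1))
        out = self.canonical(x_can)  # query the shared, pose-independent field
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3:])
```

The appeal of this design is that all poses share one canonical field, so appearance learned in one pose can transfer to novel poses through the deformation alone.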