Relightable Neural Actor with Intrinsic Decomposition and Pose Control
- URL: http://arxiv.org/abs/2312.11587v1
- Date: Mon, 18 Dec 2023 14:30:13 GMT
- Title: Relightable Neural Actor with Intrinsic Decomposition and Pose Control
- Authors: Diogo Luvizon, Vladislav Golyanik, Adam Kortylewski, Marc Habermann and Christian Theobalt
- Abstract summary: We propose Relightable Neural Actor, the first video-based method for learning a neural human model that can be relighted.
We represent the geometry of the actor with a drivable density field that models pose-dependent clothing deformations.
We demonstrate state-of-the-art relighting results for novel human poses.
- Score: 85.89305777719495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating a digital human avatar that is relightable, drivable, and
photorealistic is a challenging and important problem in Vision and Graphics.
Humans are highly articulated, creating pose-dependent appearance effects like
self-shadows and wrinkles, and skin as well as clothing require complex and
space-varying BRDF models. While recent human relighting approaches can recover
plausible material-light decompositions from multi-view video, they do not
generalize to novel poses and still suffer from visual artifacts. To address
this, we propose Relightable Neural Actor, the first video-based method for
learning a photorealistic neural human model that can be relighted, allows
appearance editing, and can be controlled by arbitrary skeletal poses.
Importantly, for learning our human avatar, we solely require a multi-view
recording of the human under a known, but static lighting condition. To achieve
this, we represent the geometry of the actor with a drivable density field that
models pose-dependent clothing deformations and provides a mapping between 3D
and UV space, where normal, visibility, and materials are encoded. To evaluate
our approach in real-world scenarios, we collect a new dataset with four actors
recorded under different light conditions, indoors and outdoors, providing the
first benchmark of its kind for human relighting, and demonstrating
state-of-the-art relighting results for novel human poses.
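The texture-space encoding described in the abstract (normal, visibility, and materials stored in UV space) can be illustrated with a minimal shading step: once those quantities are looked up for a texel, relighting reduces to integrating the incoming light. The sketch below uses a single-bounce Lambertian model and hypothetical names; the paper's actual space-varying BRDF is more involved.

```python
import numpy as np

def shade_texel(albedo, normal, light_dirs, light_rgb, visibility):
    """Lambertian shading of one texel under a discretized environment
    light. A hypothetical simplification, not the paper's BRDF model."""
    # Cosine foreshortening, clamped to the upper hemisphere.
    cos = np.clip(light_dirs @ normal, 0.0, None)       # (L,)
    # Mask out light directions occluded by the body (self-shadows).
    incoming = light_rgb * (cos * visibility)[:, None]  # (L, 3)
    # Average over light directions; albedo modulates the result.
    return albedo * incoming.mean(axis=0)               # (3,)

# Toy example: a texel facing up, two lights, the second one occluded.
albedo = np.array([0.8, 0.6, 0.5])
normal = np.array([0.0, 0.0, 1.0])
light_dirs = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
light_rgb = np.ones((2, 3))
visibility = np.array([1.0, 0.0])  # second light blocked by the body
color = shade_texel(albedo, normal, light_dirs, light_rgb, visibility)
# Each channel is albedo * 0.5, i.e. [0.4, 0.3, 0.25].
```

Swapping `light_rgb` for a different environment map relights the texel without retraining anything, which is the practical payoff of an intrinsic decomposition.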
Related papers
- IntrinsicAvatar: Physically Based Inverse Rendering of Dynamic Humans from Monocular Videos via Explicit Ray Tracing [30.944495094789826]
We present IntrinsicAvatar, a novel approach to recovering the intrinsic properties of clothed human avatars from only monocular videos.
Our approach can recover high-quality geometry, albedo, material, and lighting properties of clothed humans from a single monocular video.
arXiv Detail & Related papers (2023-12-08T17:58:14Z)
- VINECS: Video-based Neural Character Skinning [82.39776643541383]
We propose a fully automated approach for creating a fully rigged character with pose-dependent skinning weights.
We show that our approach outperforms state-of-the-art while not relying on dense 4D scans.
arXiv Detail & Related papers (2023-07-03T08:35:53Z)
- Relightable Neural Human Assets from Multi-view Gradient Illuminations [39.70530019396583]
We present UltraStage, a new 3D human dataset that contains more than 2,000 high-quality human assets captured under both multi-view and multi-illumination settings.
Inspired by recent advances in neural representation, we interpret each example into a neural human asset which allows novel view synthesis under arbitrary lighting conditions.
We show our neural human assets can achieve extremely high capture performance and are capable of representing fine details such as facial wrinkles and cloth folds.
arXiv Detail & Related papers (2022-12-15T08:06:03Z)
- Neural Novel Actor: Learning a Generalized Animatable Neural Representation for Human Actors [98.24047528960406]
We propose a new method for learning a generalized animatable neural representation from a sparse set of multi-view imagery of multiple persons.
The learned representation can be used to synthesize novel view images of an arbitrary person from a sparse set of cameras, and further animate them with the user's pose control.
arXiv Detail & Related papers (2022-08-25T07:36:46Z)
- Relighting4D: Neural Relightable Human from Videos [32.32424947454304]
We propose a principled framework, Relighting4D, that enables free-viewpoints relighting from only human videos under unknown illuminations.
Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed as a set of neural fields.
The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization.
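Relighting4D's decomposition idea, at its simplest, amounts to a separate coordinate-based network for each intrinsic component, queried at a space-time point. The sketch below shows untrained, randomly initialized fields in plain numpy; the layer sizes, field names, and activations are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(dims):
    """Random small-MLP weights (untrained; a structural sketch only)."""
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(dims[:-1], dims[1:])]

def mlp(params, x):
    for i, (W, b) in enumerate(params):
        x = x @ W + b
        if i < len(params) - 1:
            x = np.maximum(x, 0.0)  # ReLU on hidden layers
    return x

# One field per intrinsic component, queried at (x, y, z, t).
geometry_field = make_mlp([4, 32, 32, 1])  # -> volume density
albedo_field   = make_mlp([4, 32, 32, 3])  # -> RGB reflectance

query = np.array([0.1, -0.2, 0.5, 0.0])    # one space-time sample
density = mlp(geometry_field, query)
albedo = 1.0 / (1.0 + np.exp(-mlp(albedo_field, query)))  # sigmoid to (0, 1)
```

Keeping geometry and reflectance in separate fields is what lets a framework like this relight: the lighting term can be swapped at render time while the learned fields stay fixed.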
arXiv Detail & Related papers (2022-07-14T17:57:13Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to the dynamic scenes with human movements via introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
- Neural Actor: Neural Free-view Synthesis of Human Actors with Pose Control [80.79820002330457]
We propose a new method for high-quality synthesis of humans from arbitrary viewpoints and under arbitrary controllable poses.
Our method achieves better quality than the state-of-the-arts on playback as well as novel pose synthesis, and can even generalize well to new poses that starkly differ from the training poses.
arXiv Detail & Related papers (2021-06-03T17:40:48Z)
- Style and Pose Control for Image Synthesis of Humans from a Single Monocular View [78.6284090004218]
StylePoseGAN extends a non-controllable generator to accept conditioning on pose and appearance separately.
Our network can be trained in a fully supervised way with human images to disentangle pose, appearance and body parts.
StylePoseGAN achieves state-of-the-art image generation fidelity on common perceptual metrics.
arXiv Detail & Related papers (2021-02-22T18:50:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.