Relighting4D: Neural Relightable Human from Videos
- URL: http://arxiv.org/abs/2207.07104v1
- Date: Thu, 14 Jul 2022 17:57:13 GMT
- Title: Relighting4D: Neural Relightable Human from Videos
- Authors: Zhaoxi Chen and Ziwei Liu
- Abstract summary: We propose a principled framework, Relighting4D, that enables free-viewpoint relighting from only human videos captured under unknown illumination.
Our key insight is that the space-time varying geometry and reflectance of the human body can be decomposed into a set of neural fields.
The whole framework can be learned from videos in a self-supervised manner, with physically informed priors designed for regularization.
- Score: 32.32424947454304
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human relighting is a highly desirable yet challenging task. Existing works
either require expensive one-light-at-a-time (OLAT) data captured with a light
stage or cannot freely change the viewpoint of the rendered body. In this
work, we propose a principled framework, Relighting4D, that enables
free-viewpoint relighting from only human videos captured under unknown illumination.
Our key insight is that the space-time varying geometry and reflectance of the
human body can be decomposed into a set of neural fields of normal, occlusion,
diffuse, and specular maps. These neural fields are further integrated into
reflectance-aware physically based rendering, where each vertex in the neural
field absorbs and reflects the light from the environment. The whole framework
can be learned from videos in a self-supervised manner, with physically
informed priors designed for regularization. Extensive experiments on both real
and synthetic datasets demonstrate that our framework is capable of relighting
dynamic human actors from free viewpoints.
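To make the decomposition concrete, the following is a minimal PyTorch-style sketch, assuming a simple MLP field and a Lambertian-plus-specular shading model; the class and function names (ReflectanceField, shade), the latent conditioning, and the fixed shininess exponent are illustrative assumptions, not the paper's implementation.

```python
# A hypothetical sketch of the decomposition described above: per-point
# neural fields predict normal, ambient occlusion, diffuse albedo, and a
# specular weight, which are shaded against a sampled environment map.
# Names and architecture choices here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ReflectanceField(nn.Module):
    """Maps a 3D surface point (plus a per-frame latent code for the
    dynamic human) to normal, occlusion, albedo, and specular weight."""
    def __init__(self, latent_dim=16, hidden=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 + 1 + 3 + 1),  # normal | occlusion | albedo | specular
        )

    def forward(self, xyz, frame_latent):
        out = self.mlp(torch.cat([xyz, frame_latent], dim=-1))
        normal = F.normalize(out[..., 0:3], dim=-1)   # unit surface normal
        occlusion = torch.sigmoid(out[..., 3:4])      # light visibility in [0, 1]
        albedo = torch.sigmoid(out[..., 4:7])         # diffuse RGB
        specular = torch.sigmoid(out[..., 7:8])       # scalar specular weight
        return normal, occlusion, albedo, specular

def shade(normal, occlusion, albedo, specular, env_dirs, env_rgb, view_dir):
    """Shade one surface point: average incoming environment light over
    sampled directions with a Lambertian term plus a Blinn-Phong-style
    specular lobe (a stand-in for the paper's full reflectance model).
    env_dirs: (S, 3) unit light directions; env_rgb: (S, 3) radiance."""
    cos = (env_dirs @ normal).clamp(min=0.0)                # (S,) n . l
    half = F.normalize(env_dirs + view_dir, dim=-1)         # halfway vectors
    spec = (half @ normal).clamp(min=0.0) ** 32             # fixed shininess (assumed)
    incoming = env_rgb * (cos * occlusion).unsqueeze(-1)    # occluded irradiance samples
    diffuse = albedo * incoming.mean(dim=0)
    specular_term = specular * (env_rgb * spec.unsqueeze(-1)).mean(dim=0)
    return diffuse + specular_term                          # (3,) RGB radiance
```

In Relighting4D's self-supervised setting, outputs like these would be composited into rendered frames and compared against the input video, with the physically informed priors acting as regularizers on the predicted fields.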
Related papers
- Relightable Neural Actor with Intrinsic Decomposition and Pose Control [80.06094206522668]
We propose Relightable Neural Actor, a new video-based method for learning a pose-driven neural human model that can be relighted.
For training, our method solely requires a multi-view recording of the human under a known but static lighting condition.
To evaluate our approach in real-world scenarios, we collect a new dataset with four identities recorded under different light conditions, indoors and outdoors.
arXiv Detail & Related papers (2023-12-18T14:30:13Z)
- Relightable and Animatable Neural Avatar from Sparse-View Video [66.77811288144156]
This paper tackles the challenge of creating relightable and animatable neural avatars from sparse-view (or even monocular) videos of dynamic humans under unknown illumination.
arXiv Detail & Related papers (2023-08-15T17:42:39Z)
- Relightable Neural Human Assets from Multi-view Gradient Illuminations [39.70530019396583]
We present UltraStage, a new 3D human dataset that contains more than 2,000 high-quality human assets captured under both multi-view and multi-illumination settings.
Inspired by recent advances in neural representation, we interpret each example as a neural human asset that allows novel view synthesis under arbitrary lighting conditions.
We show our neural human assets can achieve extremely high capture performance and are capable of representing fine details such as facial wrinkles and cloth folds.
arXiv Detail & Related papers (2022-12-15T08:06:03Z)
- Physically-Based Editing of Indoor Scene Lighting from a Single Image [106.60252793395104]
We present a method to edit complex indoor lighting from a single image with its predicted depth and light source segmentation masks.
We tackle this problem using two novel components: 1) a holistic scene reconstruction method that estimates scene reflectance and parametric 3D lighting, and 2) a neural rendering framework that re-renders the scene from our predictions.
arXiv Detail & Related papers (2022-05-19T06:44:37Z)
- Neural Human Performer: Learning Generalizable Radiance Fields for Human Performance Rendering [34.80975358673563]
We propose a novel approach that learns generalizable neural radiance fields based on a parametric human body model for robust performance capture.
Experiments on the ZJU-MoCap and AIST datasets show that our method significantly outperforms recent generalizable NeRF methods on unseen identities and poses.
arXiv Detail & Related papers (2021-09-15T17:32:46Z)
- Animatable Neural Radiance Fields from Monocular RGB Video [72.6101766407013]
We present animatable neural radiance fields for detailed human avatar creation from monocular videos.
Our approach extends neural radiance fields to dynamic scenes with human movements by introducing explicit pose-guided deformation.
In experiments we show that the proposed approach achieves 1) implicit human geometry and appearance reconstruction with high-quality details, 2) photo-realistic rendering of the human from arbitrary views, and 3) animation of the human with arbitrary poses.
arXiv Detail & Related papers (2021-06-25T13:32:23Z)
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination [60.89737319987051]
We address the problem of recovering shape and spatially-varying reflectance of an object from posed multi-view images of the object illuminated by one unknown lighting condition.
This enables the rendering of novel views of the object under arbitrary environment lighting and editing of the object's material properties.
arXiv Detail & Related papers (2021-06-03T16:18:01Z)
- STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in Motion with Neural Rendering [9.600908665766465]
We present STaR, a novel method that performs Self-supervised Tracking and Reconstruction of dynamic scenes with rigid motion from multi-view RGB videos without any manual annotation.
We show that our method can render photorealistic novel views, where novelty is measured on both spatial and temporal axes.
arXiv Detail & Related papers (2020-12-22T23:45:28Z)
- Neural Reflectance Fields for Appearance Acquisition [61.542001266380375]
We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene.
We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light (a compositing sketch follows this entry).
arXiv Detail & Related papers (2020-08-09T22:04:36Z)
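As a companion illustration for the Neural Reflectance Fields entry above, here is a minimal sketch, assuming the standard alpha-compositing form of differentiable ray marching; the function name and tensor layout are assumptions, and the per-sample colors would come from a reflectance-aware shading model such as the one sketched earlier.

```python
# Hypothetical sketch of differentiable ray marching: shaded colors at
# samples along a camera ray are alpha-composited into one pixel color.
import torch

def composite_ray(sigmas, colors, deltas):
    """sigmas: (N,) volume densities; colors: (N, 3) shaded radiance per
    sample; deltas: (N,) spacing between consecutive samples along the ray."""
    alphas = 1.0 - torch.exp(-sigmas * deltas)          # per-sample opacity
    # Transmittance: fraction of light surviving to reach each sample.
    trans = torch.cumprod(1.0 - alphas + 1e-10, dim=0)
    trans = torch.cat([torch.ones_like(trans[:1]), trans[:-1]])
    weights = alphas * trans                            # compositing weights
    return (weights.unsqueeze(-1) * colors).sum(dim=0)  # final RGB
```

Because every step is differentiable, photometric loss on the rendered pixels can supervise both the density and reflectance fields end to end, which is what allows such methods to train from images alone.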