Space-time Neural Irradiance Fields for Free-Viewpoint Video
- URL: http://arxiv.org/abs/2011.12950v2
- Date: Fri, 18 Jun 2021 20:42:30 GMT
- Title: Space-time Neural Irradiance Fields for Free-Viewpoint Video
- Authors: Wenqi Xian, Jia-Bin Huang, Johannes Kopf, Changil Kim
- Abstract summary: We present a method that learns a neural irradiance field for dynamic scenes from a single video.
Our learned representation enables free-viewpoint rendering of the input video.
- Score: 54.436478702701244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a method that learns a spatiotemporal neural irradiance field for
dynamic scenes from a single video. Our learned representation enables
free-viewpoint rendering of the input video. Our method builds upon recent
advances in implicit representations. Learning a spatiotemporal irradiance
field from a single video poses significant challenges because the video
contains only one observation of the scene at any point in time. The 3D
geometry of a scene can be legitimately represented in numerous ways since
varying geometry (motion) can be explained with varying appearance and vice
versa. We address this ambiguity by constraining the time-varying geometry of
our dynamic scene representation using the scene depth estimated from video
depth estimation methods, aggregating contents from individual frames into a
single global representation. We provide an extensive quantitative evaluation
and demonstrate compelling free-viewpoint rendering results.
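As an illustration of the approach described in the abstract, below is a minimal PyTorch sketch, not the authors' implementation: a NeRF-style MLP queried at positionally encoded (x, y, z, t) samples, volume-rendered per ray, with the rendered expected depth supervised by a depth map from a video depth estimation method. All module names, layer sizes, and the depth loss weight are illustrative assumptions.

# Minimal sketch (assumed architecture, not the paper's exact model):
# a space-time field MLP plus a depth-constrained rendering loss.
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=6):
    """Map inputs to sin/cos features at multiple frequencies (NeRF-style)."""
    feats = [x]
    for i in range(num_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)

class SpaceTimeField(nn.Module):
    """MLP mapping an encoded (x, y, z, t) sample to (RGB, density)."""
    def __init__(self, num_freqs=6, hidden=256):
        super().__init__()
        in_dim = 4 * (1 + 2 * num_freqs)  # raw xyzt plus sin/cos bands
        self.num_freqs = num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyzt):
        out = self.mlp(positional_encoding(xyzt, self.num_freqs))
        return torch.sigmoid(out[..., :3]), torch.relu(out[..., 3])

def render_ray(field, origin, direction, t, near=0.1, far=5.0, n_samples=64):
    """Volume-render color and expected termination depth along one ray at time t."""
    z = torch.linspace(near, far, n_samples)
    pts = origin + z[:, None] * direction                      # (n_samples, 3)
    t_col = torch.full((n_samples, 1), float(t))               # constant time per ray
    rgb, sigma = field(torch.cat([pts, t_col], dim=-1))
    delta = z[1] - z[0]
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha + 1e-10]), dim=0)[:-1]
    weights = alpha * trans
    color = (weights[:, None] * rgb).sum(dim=0)
    depth = (weights * z).sum(dim=0)                            # expected ray depth
    return color, depth

def loss_fn(pred_color, gt_color, pred_depth, est_depth, depth_weight=0.1):
    """Photometric term plus the depth constraint (est_depth from a video depth estimator)."""
    photo = ((pred_color - gt_color) ** 2).mean()
    depth = (pred_depth - est_depth).abs().mean()
    return photo + depth_weight * depth

The depth term is what addresses the geometry/appearance ambiguity noted above: it ties the time-varying density field to per-frame depth estimates, while the photometric term aggregates appearance from individual frames into a single global representation.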
Related papers
- NeuPhysics: Editable Neural Geometry and Physics from Monocular Videos [82.74918564737591]
We present a method for learning 3D geometry and physics parameters of a dynamic scene from only a monocular RGB video input.
Experiments show that our method achieves superior mesh and video reconstruction of dynamic scenes compared to competing Neural Field approaches.
arXiv Detail & Related papers (2022-10-22T04:57:55Z) - S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a
Single Viewpoint [22.42916940712357]
Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
Our method is capable of recovering the 3D geometry of a scene, including both visible and invisible parts, from single-view images.
It supports applications like novel-view synthesis and relighting.
arXiv Detail & Related papers (2022-10-17T11:01:52Z) - Neural Groundplans: Persistent Neural Scene Representations from a
Single Image [90.04272671464238]
We present a method to map 2D image observations of a scene to a persistent 3D scene representation.
We propose conditional neural groundplans as persistent and memory-efficient scene representations.
arXiv Detail & Related papers (2022-07-22T17:41:24Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on the training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - STaR: Self-supervised Tracking and Reconstruction of Rigid Objects in
Motion with Neural Rendering [9.600908665766465]
We present STaR, a novel method that performs Self-supervised Tracking and Reconstruction of dynamic scenes with rigid motion from multi-view RGB videos without any manual annotation.
We show that our method can render photorealistic novel views, where novelty is measured on both spatial and temporal axes.
arXiv Detail & Related papers (2020-12-22T23:45:28Z) - Non-Rigid Neural Radiance Fields: Reconstruction and Novel View
Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z) - Neural Sparse Voxel Fields [151.20366604586403]
We introduce Neural Sparse Voxel Fields (NSVF), a new neural scene representation for fast and high-quality free-viewpoint rendering.
NSVF defines a set of voxel-bounded implicit fields organized in a sparse voxel octree to model local properties in each cell.
Our method is typically over 10 times faster than the state-of-the-art (namely, NeRF (Mildenhall et al., 2020)) at inference time while achieving higher quality results.
arXiv Detail & Related papers (2020-07-22T17:51:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.