Neural Scene Chronology
- URL: http://arxiv.org/abs/2306.07970v1
- Date: Tue, 13 Jun 2023 17:59:58 GMT
- Title: Neural Scene Chronology
- Authors: Haotong Lin, Qianqian Wang, Ruojin Cai, Sida Peng, Hadar
Averbuch-Elor, Xiaowei Zhou, Noah Snavely
- Abstract summary: We aim to reconstruct a time-varying 3D model, capable of producing photo-realistic renderings with independent control of viewpoint, illumination, and time.
In this work, we represent the scene as a space-time radiance field with a per-image illumination embedding, where temporally-varying scene changes are encoded using a set of learned step functions.
- Score: 79.51094408119148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we aim to reconstruct a time-varying 3D model, capable of
producing photo-realistic renderings with independent control of viewpoint,
illumination, and time, from Internet photos of large-scale landmarks. The core
challenges are twofold. First, different types of temporal changes, such as
illumination and changes to the underlying scene itself (such as replacing one
graffiti artwork with another), are entangled in the imagery. Second,
scene-level temporal changes are often discrete and sporadic over time, rather
than continuous. To tackle these problems, we propose a new scene
representation equipped with a novel temporal step function encoding method
that can model discrete scene-level content changes as piece-wise constant
functions over time. Specifically, we represent the scene as a space-time
radiance field with a per-image illumination embedding, where
temporally-varying scene changes are encoded using a set of learned step
functions. To facilitate our task of chronology reconstruction from Internet
imagery, we also collect a new dataset of four scenes that exhibit various
changes over time. We demonstrate that our method achieves state-of-the-art
view synthesis results on this dataset, while providing independent control of
viewpoint, time, and illumination.
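The abstract describes two ingredients: a temporal step-function encoding that makes the scene content a piecewise-constant function of time, and a per-image illumination embedding that absorbs lighting variation. The following is a minimal PyTorch sketch of that idea only; it is not the authors' implementation, and every name and architectural detail here (StepFunctionEncoding, SpaceTimeRadianceField, num_steps, beta, the omitted positional encoding, etc.) is an illustrative assumption.

```python
# Hedged sketch: learned step functions over time + per-image illumination code.
# All module names, hyperparameters, and layer sizes are assumptions for
# illustration, not the paper's actual architecture.
import torch
import torch.nn as nn


class StepFunctionEncoding(nn.Module):
    """Maps a normalized timestamp to a feature vector that is (nearly)
    piecewise constant in time, using a bank of learned soft step functions."""

    def __init__(self, num_steps: int = 64, beta: float = 100.0):
        super().__init__()
        # Learnable step locations in normalized time [0, 1].
        self.centers = nn.Parameter(torch.rand(num_steps))
        # Large beta pushes each sigmoid toward a hard step.
        self.beta = beta

    def forward(self, t: torch.Tensor) -> torch.Tensor:
        # t: (N, 1) timestamps -> (N, num_steps) near-binary features.
        return torch.sigmoid(self.beta * (t - self.centers[None, :]))


class SpaceTimeRadianceField(nn.Module):
    """Toy space-time field: density from (position, time); color additionally
    conditioned on viewing direction and a per-image illumination embedding."""

    def __init__(self, num_images: int, num_steps: int = 64,
                 dim: int = 128, illum_dim: int = 16):
        super().__init__()
        self.t_enc = StepFunctionEncoding(num_steps)
        self.illum = nn.Embedding(num_images, illum_dim)  # per-image appearance code
        self.trunk = nn.Sequential(
            nn.Linear(3 + num_steps, dim), nn.ReLU(),
            nn.Linear(dim, dim), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(dim, 1)
        self.rgb_head = nn.Sequential(
            nn.Linear(dim + 3 + illum_dim, dim), nn.ReLU(),
            nn.Linear(dim, 3), nn.Sigmoid(),
        )

    def forward(self, x, d, t, image_ids):
        h = self.trunk(torch.cat([x, self.t_enc(t)], dim=-1))
        sigma = torch.relu(self.sigma_head(h))
        rgb = self.rgb_head(torch.cat([h, d, self.illum(image_ids)], dim=-1))
        return rgb, sigma


# Usage: query 1024 samples at arbitrary (position, direction, time, image id).
field = SpaceTimeRadianceField(num_images=1000)
x, d, t = torch.rand(1024, 3), torch.randn(1024, 3), torch.rand(1024, 1)
rgb, sigma = field(x, d, t, torch.randint(0, 1000, (1024,)))
```

With a large beta each sigmoid approximates a hard step, so the time feature, and hence the predicted content, changes only at the learned step locations; at render time the illumination embedding can be swapped independently of the query time, which is how viewpoint, time, and illumination become separately controllable in this sketch. The paper's actual smoothing, parameterization, and training details may differ.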
Related papers
- LANe: Lighting-Aware Neural Fields for Compositional Scene Synthesis [65.20672798704128]
We present Lighting-Aware Neural Field (LANe) for compositional synthesis of driving scenes.
We learn a scene representation that disentangles the static background and transient elements into a world-NeRF and class-specific object-NeRFs.
We demonstrate the performance of our model on a synthetic dataset of diverse lighting conditions rendered with the CARLA simulator.
arXiv Detail & Related papers (2023-04-06T17:59:25Z) - Neural Radiance Transfer Fields for Relightable Novel-view Synthesis
with Global Illumination [63.992213016011235]
We propose a method for scene relighting under novel views by learning a neural precomputed radiance transfer function.
Our method can be solely supervised on a set of real images of the scene under a single unknown lighting condition.
Results show that the recovered disentanglement of scene parameters improves significantly over the current state of the art.
arXiv Detail & Related papers (2022-07-27T16:07:48Z) - Scene Representation Transformer: Geometry-Free Novel View Synthesis
Through Set-Latent Scene Representations [48.05445941939446]
A classical problem in computer vision is to infer a 3D scene representation from few images that can be used to render novel views at interactive rates.
We propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area.
We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets.
arXiv Detail & Related papers (2021-11-25T16:18:56Z) - Unconstrained Scene Generation with Locally Conditioned Radiance Fields [24.036609880683585]
We introduce Generative Scene Networks (GSN), which learns to decompose scenes into a collection of local radiance fields.
Our model can be used as a prior to generate new scenes, or to complete a scene given only sparse 2D observations.
arXiv Detail & Related papers (2021-04-01T17:58:26Z) - Learning to Factorize and Relight a City [70.81496092672421]
We propose a learning-based framework for disentangling outdoor scenes into temporally-varying illumination and permanent scene factors.
We show that our learned disentangled factors can be used to manipulate novel images in realistic ways, such as changing lighting effects and scene geometry.
arXiv Detail & Related papers (2020-08-06T17:59:54Z) - Crowdsampling the Plenoptic Function [56.10020793913216]
We present a new approach to novel view synthesis under time-varying illumination from crowdsampled Internet photos.
We introduce a new DeepMPI representation, motivated by observations on the sparsity structure of the plenoptic function.
Our method can synthesize the same compelling parallax and view-dependent effects as previous MPI methods, while also interpolating changes in reflectance and illumination over time (see the generic MPI compositing sketch after this list).
arXiv Detail & Related papers (2020-07-30T02:52:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.