NeLF: Practical Novel View Synthesis with Neural Light Field
- URL: http://arxiv.org/abs/2105.07112v1
- Date: Sat, 15 May 2021 01:20:30 GMT
- Title: NeLF: Practical Novel View Synthesis with Neural Light Field
- Authors: Celong Liu, Zhong Li, Junsong Yuan, Yi Xu
- Abstract summary: We present a practical and robust deep learning solution for the novel view synthesis of complex scenes.
In our approach, a continuous scene is represented as a light field, i.e., a set of rays, each of which has a corresponding color.
Our method achieves state-of-the-art novel view synthesis results while maintaining an interactive frame rate.
- Score: 93.41020940730915
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we present a practical and robust deep learning solution for
the novel view synthesis of complex scenes. In our approach, a continuous scene
is represented as a light field, i.e., a set of rays, each of which has a
corresponding color. We adopt a 4D parameterization of the light field. We then
formulate the light field as a 4D function that maps 4D coordinates to
corresponding color values. We train a deep fully connected network to optimize
this function. Then, the scene-specific model is used to synthesize novel
views. Previous light field approaches usually require dense view sampling to
reliably render high-quality novel views. Our method can render novel views by
sampling rays and querying the color for each ray from the network directly,
thus enabling fast light field rendering with a very sparse set of input
images. Our method achieves state-of-the-art novel view synthesis results while
maintaining an interactive frame rate.
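To make the pipeline concrete, the sketch below is a minimal PyTorch illustration of the idea described above, not the authors' released code: rays are mapped to the classic two-plane 4D light-field coordinates (u, v, s, t), a fully connected network maps those coordinates to RGB, and rendering a view reduces to one batched network query per pixel ray. The plane positions, network depth and width, and the omission of positional encoding are illustrative assumptions.

```python
# Minimal sketch of a neural light field (assumptions noted in comments).
import torch
import torch.nn as nn

def two_plane_coords(origins, dirs, z_uv=0.0, z_st=1.0):
    """Intersect each ray with two parallel planes z = z_uv and z = z_st,
    giving the classic 4D light-field parameterization (u, v, s, t).
    Plane placement is an assumption; any two non-degenerate planes
    work for forward-facing scenes."""
    t_uv = (z_uv - origins[:, 2]) / dirs[:, 2]
    t_st = (z_st - origins[:, 2]) / dirs[:, 2]
    uv = origins[:, :2] + t_uv[:, None] * dirs[:, :2]
    st = origins[:, :2] + t_st[:, None] * dirs[:, :2]
    return torch.cat([uv, st], dim=-1)                # (N, 4)

class LightFieldMLP(nn.Module):
    """Fully connected network F: (u, v, s, t) -> (r, g, b).
    Depth/width are illustrative, not the paper's exact architecture."""
    def __init__(self, depth=8, width=256):
        super().__init__()
        layers, in_dim = [], 4
        for _ in range(depth):
            layers += [nn.Linear(in_dim, width), nn.ReLU(inplace=True)]
            in_dim = width
        layers += [nn.Linear(in_dim, 3), nn.Sigmoid()]  # RGB in [0, 1]
        self.net = nn.Sequential(*layers)

    def forward(self, uvst):                          # (N, 4) -> (N, 3)
        return self.net(uvst)

# Rendering a novel view is one batched query per pixel ray, with no
# per-ray volume integration, which is what enables interactive rates.
model = LightFieldMLP()
origins = torch.zeros(1024, 3)                        # camera center per ray
dirs = torch.randn(1024, 3); dirs[:, 2] = dirs[:, 2].abs() + 0.1
dirs = dirs / dirs.norm(dim=-1, keepdim=True)         # dummy forward rays
colors = model(two_plane_coords(origins, dirs))       # (1024, 3)
```

Training (not shown) fits this scene-specific model with a photometric loss between predicted ray colors and the corresponding pixels of the sparse input images.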
Related papers
- Sampling for View Synthesis: From Local Light Field Fusion to Neural Radiance Fields and Beyond [27.339452004523082]
Local light field fusion proposes an algorithm for practical view synthesis from an irregular grid of sampled views.
We achieve the perceptual quality of Nyquist rate view sampling while using up to 4000x fewer views.
We reprise some of the recent results on sparse and even single image view synthesis.
arXiv Detail & Related papers (2024-08-08T16:56:03Z)
- Relit-NeuLF: Efficient Relighting and Novel View Synthesis via Neural 4D Light Field [69.90548694719683]
We propose an analysis-synthesis approach called Relit-NeuLF.
We first parameterize each ray in a 4D coordinate system, enabling efficient learning and inference.
Comprehensive experiments demonstrate that the proposed method is efficient and effective on both synthetic data and real-world human face data.
arXiv Detail & Related papers (2023-10-23T07:29:51Z)
- Generalizable Patch-Based Neural Rendering [46.41746536545268]
We propose a new paradigm for learning models that can synthesize novel views of unseen scenes.
Our method is capable of predicting the color of a target ray in a novel scene directly, just from a collection of patches sampled from the scene.
We show that our approach outperforms the state-of-the-art on novel view synthesis of unseen scenes even when being trained with considerably less data than prior work.
arXiv Detail & Related papers (2022-07-21T17:57:04Z)
- Scene Representation Transformer: Geometry-Free Novel View Synthesis Through Set-Latent Scene Representations [48.05445941939446]
A classical problem in computer vision is to infer a 3D scene representation from few images that can be used to render novel views at interactive rates.
We propose the Scene Representation Transformer (SRT), a method which processes posed or unposed RGB images of a new area.
We show that this method outperforms recent baselines in terms of PSNR and speed on synthetic datasets.
arXiv Detail & Related papers (2021-11-25T16:18:56Z)
- IBRNet: Learning Multi-View Image-Based Rendering [67.15887251196894]
We present a method that synthesizes novel views of complex scenes by interpolating a sparse set of nearby views.
By drawing on source views at render time, our method hearkens back to classic work on image-based rendering.
arXiv Detail & Related papers (2021-02-25T18:56:21Z)
- Non-Rigid Neural Radiance Fields: Reconstruction and Novel View Synthesis of a Dynamic Scene From Monocular Video [76.19076002661157]
Non-Rigid Neural Radiance Fields (NR-NeRF) is a reconstruction and novel view synthesis approach for general non-rigid dynamic scenes.
We show that even a single consumer-grade camera is sufficient to synthesize sophisticated renderings of a dynamic scene from novel virtual camera views.
arXiv Detail & Related papers (2020-12-22T18:46:12Z)
- Free View Synthesis [100.86844680362196]
We present a method for novel view synthesis from input images that are freely distributed around a scene.
Our method does not rely on a regular arrangement of input views, can synthesize images for free camera movement through the scene, and works for general scenes with unconstrained geometric layouts.
arXiv Detail & Related papers (2020-08-12T18:16:08Z)
- NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis [78.5281048849446]
We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes.
Our algorithm represents a scene using a fully-connected (non-convolutional) deep network.
Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses (a minimal quadrature sketch follows this list).
arXiv Detail & Related papers (2020-03-19T17:57:23Z)
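The "naturally differentiable" volume rendering that the NeRF entry above refers to is a numerical quadrature of the emission-absorption integral, C = sum_i T_i (1 - exp(-sigma_i * delta_i)) c_i with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j). The sketch below illustrates that compositing step in PyTorch; it follows the published equation, but the toy tensor shapes are assumptions and this is not the official implementation.

```python
# Quadrature of the NeRF volume rendering integral (sketch, assumed shapes).
import torch

def composite(sigmas, colors, deltas):
    """sigmas: (n_rays, n_samples) densities along each ray,
    colors: (n_rays, n_samples, 3) per-sample RGB,
    deltas: (n_rays, n_samples) distances between adjacent samples.
    Returns (n_rays, 3) composited ray colors."""
    alpha = 1.0 - torch.exp(-sigmas * deltas)          # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]),      # T_0 = 1
                   1.0 - alpha + 1e-10], dim=-1), dim=-1)[:, :-1]
    weights = alpha * trans                            # (n_rays, n_samples)
    return (weights[..., None] * colors).sum(dim=-2)

# Every operation above is differentiable, so posed images alone supply
# the training signal: render, compare to the photograph, backpropagate.
sigmas = torch.rand(8, 64, requires_grad=True)
colors = torch.rand(8, 64, 3)
deltas = torch.full((8, 64), 0.05)
rgb = composite(sigmas, colors, deltas)                # (8, 3)
rgb.sum().backward()                                   # gradients reach sigmas
```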