Light Field Networks: Neural Scene Representations with
Single-Evaluation Rendering
- URL: http://arxiv.org/abs/2106.02634v1
- Date: Fri, 4 Jun 2021 17:54:49 GMT
- Title: Light Field Networks: Neural Scene Representations with
Single-Evaluation Rendering
- Authors: Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B.
Tenenbaum, Fredo Durand
- Abstract summary: Inferring representations of 3D scenes from 2D observations is a fundamental problem of computer graphics, computer vision, and artificial intelligence.
We propose a novel neural scene representation, Light Field Networks or LFNs, which represent both geometry and appearance of the underlying 3D scene in a 360-degree, four-dimensional light field.
Rendering a ray from an LFN requires only a *single* network evaluation, as opposed to hundreds of evaluations per ray for ray-marching or volumetric renderers.
- Score: 60.02806355570514
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Inferring representations of 3D scenes from 2D observations is a fundamental
problem of computer graphics, computer vision, and artificial intelligence.
Emerging 3D-structured neural scene representations are a promising approach to
3D scene understanding. In this work, we propose a novel neural scene
representation, Light Field Networks or LFNs, which represent both geometry and
appearance of the underlying 3D scene in a 360-degree, four-dimensional light
field parameterized via a neural implicit representation. Rendering a ray from
an LFN requires only a *single* network evaluation, as opposed to hundreds of
evaluations per ray for ray-marching or volumetric based renderers in
3D-structured neural scene representations. In the setting of simple scenes, we
leverage meta-learning to learn a prior over LFNs that enables multi-view
consistent light field reconstruction from as little as a single image
observation. This results in dramatic reductions in time and memory complexity,
and enables real-time rendering. The cost of storing a 360-degree light field
via an LFN is two orders of magnitude lower than conventional methods such as
the Lumigraph. Utilizing the analytical differentiability of neural implicit
representations and a novel parameterization of light space, we further
demonstrate the extraction of sparse depth maps from LFNs.
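To make the single-evaluation claim concrete, the following is a minimal sketch, not the authors' implementation: it encodes an oriented camera ray by its Plücker coordinates (the ray parameterization used by LFNs) and maps that 6D descriptor to an RGB color with one forward pass through a small NumPy MLP. Function names, layer sizes, and activations are illustrative assumptions rather than details taken from the paper.
```python
import numpy as np

def plucker_ray(origin, direction):
    """6D Plucker coordinates (d, o x d) of an oriented ray; direction is normalized."""
    d = direction / np.linalg.norm(direction)
    m = np.cross(origin, d)          # moment vector; invariant to sliding the origin along the ray
    return np.concatenate([d, m])    # shape (6,)

def init_mlp(rng, sizes=(6, 256, 256, 256, 3)):
    """Random MLP weights; layer sizes are illustrative, not the paper's."""
    params = []
    for fan_in, fan_out in zip(sizes[:-1], sizes[1:]):
        w = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))
        b = np.zeros(fan_out)
        params.append((w, b))
    return params

def lfn_render_ray(params, ray6):
    """One network evaluation per ray: Plucker coordinates in, RGB out.
    Contrast with ray-marching/volumetric renderers, which evaluate a network
    at hundreds of sample points along the same ray."""
    h = ray6
    for w, b in params[:-1]:
        h = np.maximum(h @ w + b, 0.0)             # ReLU hidden layers (illustrative choice)
    w, b = params[-1]
    return 1.0 / (1.0 + np.exp(-(h @ w + b)))      # RGB in [0, 1]

rng = np.random.default_rng(0)
params = init_mlp(rng)
ray = plucker_ray(origin=np.array([0.0, 0.0, -2.0]), direction=np.array([0.0, 0.0, 1.0]))
print(lfn_render_ray(params, ray))                 # one forward pass -> one pixel color
```
Because the moment o x d is unchanged when the origin slides along the ray, every point on the same oriented line maps to the same 6-vector; this is what allows a single network to represent a full 360-degree light field rather than a two-plane slab.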
Related papers
- N-BVH: Neural ray queries with bounding volume hierarchies [51.430495562430565]
In 3D computer graphics, the bulk of a scene's memory usage is due to polygons and textures.
We devise N-BVH, a neural compression architecture designed to answer arbitrary ray queries in 3D.
Our method provides faithful approximations of visibility, depth, and appearance attributes.
arXiv Detail & Related papers (2024-05-25T13:54:34Z)
- NeRFMeshing: Distilling Neural Radiance Fields into Geometrically-Accurate 3D Meshes [56.31855837632735]
We propose a compact and flexible architecture that enables easy 3D surface reconstruction from any NeRF-driven approach.
Our final 3D mesh is physically accurate and can be rendered in real time on an array of devices.
arXiv Detail & Related papers (2023-03-16T16:06:03Z)
- Multi-Plane Neural Radiance Fields for Novel View Synthesis [5.478764356647437]
Novel view synthesis is a long-standing problem that revolves around rendering frames of scenes from novel camera viewpoints.
In this work, we examine the performance, generalization, and efficiency of single-view multi-plane neural radiance fields.
We propose a new multiplane NeRF architecture that accepts multiple views to improve the synthesis results and expand the viewing range.
arXiv Detail & Related papers (2023-03-03T06:32:55Z)
- S$^3$-NeRF: Neural Reflectance Field from Shading and Shadow under a Single Viewpoint [22.42916940712357]
Our method learns a neural reflectance field to represent the 3D geometry and BRDFs of a scene.
Our method is capable of recovering 3D geometry, including both visible and invisible parts, of a scene from single-view images.
It supports applications like novel-view synthesis and relighting.
arXiv Detail & Related papers (2022-10-17T11:01:52Z)
- Learning Generalizable Light Field Networks from Few Images [7.672380267651058]
We present a new strategy for few-shot novel view synthesis based on a neural light field representation.
We show that our method achieves competitive performance on synthetic and real MVS data compared with state-of-the-art neural radiance field based methods.
arXiv Detail & Related papers (2022-07-24T14:47:11Z)
- Neural Groundplans: Persistent Neural Scene Representations from a Single Image [90.04272671464238]
We present a method to map 2D image observations of a scene to a persistent 3D scene representation.
We propose conditional neural groundplans as persistent and memory-efficient scene representations.
arXiv Detail & Related papers (2022-07-22T17:41:24Z)
- Vision Transformer for NeRF-Based View Synthesis from a Single Input Image [49.956005709863355]
We propose to leverage both the global and local features to form an expressive 3D representation.
To synthesize a novel view, we train a multilayer perceptron (MLP) network conditioned on the learned 3D representation to perform volume rendering.
Our method can render novel views from only a single input image and generalize across multiple object categories using a single model.
arXiv Detail & Related papers (2022-07-12T17:52:04Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.