Learning Neural Light Fields with Ray-Space Embedding Networks
- URL: http://arxiv.org/abs/2112.01523v2
- Date: Mon, 6 Dec 2021 17:45:14 GMT
- Title: Learning Neural Light Fields with Ray-Space Embedding Networks
- Authors: Benjamin Attal, Jia-Bin Huang, Michael Zollhoefer, Johannes Kopf,
Changil Kim
- Abstract summary: We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
- Score: 51.88457861982689
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural radiance fields (NeRFs) produce state-of-the-art view synthesis
results. However, they are slow to render, requiring hundreds of network
evaluations per pixel to approximate a volume rendering integral. Baking NeRFs
into explicit data structures enables efficient rendering, but results in a
large increase in memory footprint and, in many cases, a quality reduction. In
this paper, we propose a novel neural light field representation that, in
contrast, is compact and directly predicts integrated radiance along rays. Our
method supports rendering with a single network evaluation per pixel for small
baseline light field datasets and can also be applied to larger baselines with
only a few evaluations per pixel. At the core of our approach is a ray-space
embedding network that maps the 4D ray-space manifold into an intermediate,
interpolable latent space. Our method achieves state-of-the-art quality on
dense forward-facing datasets such as the Stanford Light Field dataset. In
addition, for forward-facing scenes with sparser inputs we achieve results that
are competitive with NeRF-based approaches in terms of quality while providing
a better speed/quality/memory trade-off with far fewer network evaluations.
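The core idea in the abstract — mapping the 4D ray-space manifold into an interpolable latent space and regressing integrated radiance with a single evaluation per ray — can be illustrated with a minimal PyTorch sketch. This is an assumption-laden illustration rather than the authors' implementation: it assumes a two-plane (u, v, s, t) ray parameterization, a standard sinusoidal positional encoding, and arbitrary layer widths.

```python
import math
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=8):
    # Standard sinusoidal encoding of each ray coordinate (NeRF-style).
    freqs = (2.0 ** torch.arange(num_freqs, device=x.device)) * math.pi
    angles = x[..., None] * freqs                      # (..., 4, num_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(-2)

class NeuralLightField(nn.Module):
    """Sketch of a neural light field with a ray-space embedding network.

    Assumes rays are given in a two-plane (u, v, s, t) parameterization;
    all layer widths here are illustrative, not taken from the paper.
    """
    def __init__(self, num_freqs=8, embed_dim=32, hidden=256):
        super().__init__()
        in_dim = 4 * 2 * num_freqs
        # Embedding network: maps 4D ray space into an interpolable latent space.
        self.embed = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, embed_dim),
        )
        # Color head: regresses integrated radiance directly from the latent code.
        self.color = nn.Sequential(
            nn.Linear(embed_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),
        )

    def forward(self, rays_uvst):
        # rays_uvst: (N, 4) two-plane coordinates; one evaluation per ray/pixel,
        # with no per-ray sampling or volume-rendering quadrature.
        return self.color(self.embed(positional_encoding(rays_uvst)))

model = NeuralLightField()
rgb = model(torch.rand(1024, 4))   # (1024, 3) predicted integrated radiance
```

Training such a model would simply regress the predicted colors against ground-truth pixels from the input light field views; per the abstract, the interpolable latent space is what allows a single evaluation per ray to generalize to unseen rays between views.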
Related papers
- Efficient Ray Sampling for Radiance Fields Reconstruction [4.004168836949491]
The ray sampling strategy profoundly impacts network convergence.
We propose a novel ray sampling approach for neural radiance fields.
Our method significantly outperforms state-of-the-art techniques on public benchmark datasets.
arXiv Detail & Related papers (2023-08-29T18:11:32Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF, while preserving rendering speed with a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- Multi-Space Neural Radiance Fields [74.46513422075438]
Existing Neural Radiance Field (NeRF) methods struggle with scenes containing reflective objects.
We propose a multi-space neural radiance field (MS-NeRF) that represents the scene using a group of feature fields in parallel sub-spaces.
Our approach significantly outperforms the existing single-space NeRF methods for rendering high-quality scenes.
arXiv Detail & Related papers (2023-05-07T13:11:07Z)
- Learning Neural Duplex Radiance Fields for Real-Time View Synthesis [33.54507228895688]
We propose a novel approach to distill and bake NeRFs into highly efficient mesh-based neural representations.
We demonstrate the effectiveness and superiority of our approach via extensive experiments on a range of standard datasets.
arXiv Detail & Related papers (2023-04-20T17:59:52Z)
- SPARF: Large-Scale Learning of 3D Sparse Radiance Fields from Few Input Images [62.64942825962934]
We present SPARF, a large-scale ShapeNet-based synthetic dataset for novel view synthesis.
We propose a novel pipeline (SuRFNet) that learns to generate sparse voxel radiance fields from only few views.
SuRFNet employs partial SRFs from one or a few images and a specialized SRF loss to learn to generate high-quality sparse voxel radiance fields.
arXiv Detail & Related papers (2022-12-18T14:56:22Z)
- AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
However, rendering images with this new paradigm is slow, because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes an orthogonal direction, learning how to best reduce the number of required sample points.
arXiv Detail & Related papers (2022-07-21T05:59:13Z)
- RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis [104.53930611219654]
We present a large-scale synthetic dataset for novel view synthesis consisting of 300k images rendered from nearly 2000 complex scenes.
The dataset is orders of magnitude larger than existing synthetic datasets for novel view synthesis.
Using 4 distinct sources of high-quality 3D meshes, the scenes of our dataset exhibit challenging variations in camera views, lighting, shape, materials, and textures.
arXiv Detail & Related papers (2022-05-14T13:15:32Z)
- MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo [52.329580781898116]
We present MVSNeRF, a novel neural rendering approach that can efficiently reconstruct neural radiance fields for view synthesis.
Unlike prior works on neural radiance fields that consider per-scene optimization on densely captured images, we propose a generic deep neural network that can reconstruct radiance fields from only three nearby input views via fast network inference.
arXiv Detail & Related papers (2021-03-29T13:15:23Z)
- DONeRF: Towards Real-Time Rendering of Neural Radiance Fields using Depth Oracle Networks [6.2444658061424665]
DONeRF is a dual-network design with a depth oracle network as a first step and a locally sampled shading network for ray accumulation (see the sketch after this list).
We are the first to render raymarching-based neural representations at interactive frame rates (15 frames per second at 800x800) on a single GPU.
arXiv Detail & Related papers (2021-03-04T18:55:09Z)
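The DONeRF entry above describes a dual-network design in which a depth oracle first proposes a handful of sample depths per ray and a shading network then evaluates only those points (AdaNeRF above pursues a related sample-reduction idea). The sketch below is a rough, hypothetical illustration of that pipeline; the module names, layer sizes, and normalized-depth convention are assumptions, not the published implementation.

```python
import torch
import torch.nn as nn

class DepthOracle(nn.Module):
    """Hypothetical depth oracle: proposes a few sample depths per ray."""
    def __init__(self, ray_dim=6, num_samples=8, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ray_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, num_samples), nn.Sigmoid(),  # depths normalized to [0, 1]
        )

    def forward(self, rays):
        return self.net(rays)                              # (N, num_samples)

class ShadingNetwork(nn.Module):
    """Hypothetical shading network: RGB + density at the proposed points."""
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                          # (r, g, b, sigma) per point
        )

    def forward(self, points):
        return self.net(points)

def render(rays_o, rays_d, near, far, oracle, shader):
    # Step 1: the oracle proposes a handful of depths along each ray.
    depths = oracle(torch.cat([rays_o, rays_d], dim=-1))                   # (N, S) in [0, 1]
    t, _ = torch.sort(near + (far - near) * depths, dim=-1)                # metric depths
    # Step 2: shade only those few points and alpha-composite along the ray.
    pts = rays_o[:, None, :] + t[..., None] * rays_d[:, None, :]           # (N, S, 3)
    out = shader(pts)
    rgb, sigma = out[..., :3].sigmoid(), out[..., 3].relu()
    delta = torch.diff(t, dim=-1, append=t[:, -1:] + 1e-2)                 # sample spacing
    alpha = 1.0 - torch.exp(-sigma * delta)
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], dim=-1),
        dim=-1)[:, :-1]
    weights = trans * alpha                                                # (N, S)
    return (weights[..., None] * rgb).sum(dim=1)                           # (N, 3)

# Usage: a few shading evaluations per ray instead of hundreds of quadrature samples.
oracle, shader = DepthOracle(), ShadingNetwork()
rays_o = torch.zeros(512, 3)
rays_d = torch.nn.functional.normalize(torch.rand(512, 3), dim=-1)
pixels = render(rays_o, rays_d, near=2.0, far=6.0, oracle=oracle, shader=shader)
```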