Progressive Multi-scale Light Field Networks
- URL: http://arxiv.org/abs/2208.06710v1
- Date: Sat, 13 Aug 2022 19:02:34 GMT
- Title: Progressive Multi-scale Light Field Networks
- Authors: David Li, Amitabh Varshney
- Abstract summary: We present a progressive multi-scale light field network that encodes a light field with multiple levels of detail.
Lower levels of detail are encoded using fewer neural network weights enabling progressive streaming and reducing rendering time.
- Score: 14.050802766699084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural representations have shown great promise in their ability to represent
radiance and light fields while being very compact compared to the image set
representation. However, current representations are not well suited for
streaming as decoding can only be done at a single level of detail and requires
downloading the entire neural network model. Furthermore, high-resolution light
field networks can exhibit flickering and aliasing as neural networks are
sampled without appropriate filtering. To resolve these issues, we present a
progressive multi-scale light field network that encodes a light field with
multiple levels of detail. Lower levels of detail are encoded using fewer
neural network weights enabling progressive streaming and reducing rendering
time. Our progressive multi-scale light field network addresses aliasing by
encoding smaller anti-aliased representations at its lower levels of detail.
Additionally, per-pixel level of detail enables our representation to support
dithered transitions and foveated rendering.
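To make the weight-subset idea concrete, here is a minimal sketch (not the authors' released code) of a light field MLP in which each lower level of detail (LOD) uses only a prefix of every hidden layer's neurons, so a partially downloaded model is already renderable. The widths, depth, LOD count, and per-pixel LOD selection below are illustrative assumptions, not the paper's actual configuration.

```python
# Minimal sketch, assuming lower LODs reuse a prefix subset of each hidden
# layer's neurons; widths, depth, and LOD count are illustrative only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProgressiveLightFieldMLP(nn.Module):
    def __init__(self, in_dim=4, out_dim=3, depth=4,
                 lod_widths=(32, 64, 128, 256)):
        super().__init__()
        max_width = lod_widths[-1]
        self.lod_widths = lod_widths
        self.hidden = nn.ModuleList(
            [nn.Linear(in_dim, max_width)] +
            [nn.Linear(max_width, max_width) for _ in range(depth - 1)])
        # One small output head per LOD so every truncated model maps to RGB.
        self.heads = nn.ModuleList([nn.Linear(w, out_dim) for w in lod_widths])

    def forward(self, rays, lod):
        """rays: (N, 4) ray coordinates (e.g. a two-plane parameterization);
        lod: index into lod_widths. Only the first `w` units of each hidden
        layer are used, so LOD k touches only a prefix of the full weights."""
        w = self.lod_widths[lod]
        x = rays
        for layer in self.hidden:
            weight = layer.weight[:w, :x.shape[-1]]  # prefix slice of weights
            x = F.relu(F.linear(x, weight, layer.bias[:w]))
        return torch.sigmoid(self.heads[lod](x))

# Per-pixel LOD, e.g. for foveated rendering: pick a level per ray from its
# angular distance to the gaze point, group rays by level, and query each
# group at its own LOD (optionally dithering between neighboring levels).
model = ProgressiveLightFieldMLP()
rays = torch.rand(1024, 4)
coarse_rgb = model(rays, lod=0)   # fewest weights, cheapest to stream/render
fine_rgb = model(rays, lod=3)     # full network
```

Under such a layout, streaming the next LOD only requires the additional rows and columns of each weight matrix plus its output head, which is one way "lower levels of detail are encoded using fewer neural network weights" can support progressive streaming.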
Related papers
- NeRF-Casting: Improved View-Dependent Appearance with Consistent Reflections [57.63028964831785]
Recent works have improved NeRF's ability to render detailed specular appearance of distant environment illumination, but are unable to synthesize consistent reflections of closer content.
We address these issues with an approach based on ray tracing.
Instead of querying an expensive neural network for the outgoing view-dependent radiance at points along each camera ray, our model casts rays from these points and traces them through the NeRF representation to render feature vectors.
arXiv Detail & Related papers (2024-05-23T17:59:57Z)
- You Only Need One Color Space: An Efficient Network for Low-light Image Enhancement [50.37253008333166]
The Low-Light Image Enhancement (LLIE) task aims to restore the details and visual information from corrupted low-light images.
We propose a novel trainable color space, named Horizontal/Vertical-Intensity (HVI).
It not only decouples brightness and color from RGB channels to mitigate the instability during enhancement but also adapts to low-light images in different illumination ranges due to the trainable parameters.
arXiv Detail & Related papers (2024-02-08T16:47:43Z)
- Continuous Levels of Detail for Light Field Networks [6.94680554206111]
We propose a method to encode light field networks with continuous LODs, allowing for finely tuned adaptations to rendering conditions.
Our training procedure uses summed-area table filtering, allowing efficient and continuous filtering at various LODs.
We also use saliency-based importance sampling, enabling our light field networks to better distribute their capacity, which is particularly limited at lower LODs.
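For context on the summed-area table filtering mentioned above, here is a minimal sketch of how an integral image gives constant-time box filtering, one way to produce anti-aliased (downfiltered) targets for coarser levels of detail. The array shapes and footprint size are illustrative assumptions, not details from that paper.

```python
# Minimal sketch of summed-area table (integral image) box filtering.
import numpy as np

def summed_area_table(img):
    """Cumulative sums over both spatial axes; img is (H, W) or (H, W, C)."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_mean(sat, y0, x0, y1, x1):
    """Mean of img[y0:y1, x0:x1] from four table lookups (O(1) per query)."""
    total = sat[y1 - 1, x1 - 1].copy()
    if y0 > 0:
        total = total - sat[y0 - 1, x1 - 1]
    if x0 > 0:
        total = total - sat[y1 - 1, x0 - 1]
    if y0 > 0 and x0 > 0:
        total = total + sat[y0 - 1, x0 - 1]
    return total / ((y1 - y0) * (x1 - x0))

img = np.random.rand(64, 64, 3)
sat = summed_area_table(img)
# Average a 4x4 pixel footprint, e.g. as an anti-aliased target for a coarse LOD.
print(box_mean(sat, 8, 8, 12, 12))
print(img[8:12, 8:12].mean(axis=(0, 1)))  # same values, computed directly
```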
arXiv Detail & Related papers (2023-09-20T19:02:20Z)
- Efficient View Synthesis with Neural Radiance Distribution Field [61.22920276806721]
We propose a new representation called Neural Radiance Distribution Field (NeRDF) that targets efficient view synthesis in real-time.
We use a small network similar to NeRF's, while preserving rendering speed with a single network forward pass per pixel, as in NeLF.
Experiments show that our proposed method offers a better trade-off among speed, quality, and network size than existing methods.
arXiv Detail & Related papers (2023-08-22T02:23:28Z)
- Volume Feature Rendering for Fast Neural Radiance Field Reconstruction [11.05302598034426]
Neural radiance fields (NeRFs) are able to synthesize realistic novel views from multi-view images captured from distinct positions and perspectives.
In NeRF's rendering pipeline, neural networks are used either to represent a scene independently or to transform the queried learnable feature vector of a point into the expected color or density.
We propose to render the queried feature vectors of a ray first and then transform the rendered feature vector to the final pixel color by a neural network.
arXiv Detail & Related papers (2023-05-29T06:58:27Z)
- Learning Generalizable Light Field Networks from Few Images [7.672380267651058]
We present a new strategy for few-shot novel view synthesis based on a neural light field representation.
We show that our method achieves competitive performance on synthetic and real MVS data with respect to state-of-the-art neural radiance field based methods.
arXiv Detail & Related papers (2022-07-24T14:47:11Z)
- Progressively-connected Light Field Network for Efficient View Synthesis [69.29043048775802]
We present a Progressively-connected Light Field network (ProLiF) for the novel view synthesis of complex forward-facing scenes.
ProLiF encodes a 4D light field, which allows rendering a large batch of rays in one training step for image- or patch-level losses.
arXiv Detail & Related papers (2022-07-10T13:47:20Z)
- PINs: Progressive Implicit Networks for Multi-Scale Neural Representations [68.73195473089324]
We propose a progressive positional encoding, exposing a hierarchical structure to incremental sets of frequency encodings.
Our model accurately reconstructs scenes with wide frequency bands and learns a scene representation at progressive levels of detail.
Experiments on several 2D and 3D datasets show improvements in reconstruction accuracy, representational capacity and training speed compared to baselines.
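As a rough illustration of the progressive positional encoding idea summarized above, the sketch below reveals sine/cosine frequency bands incrementally, so a coordinate network first fits low frequencies and gains detail later. The band count, schedule, and hard on/off mask are assumptions for illustration, not the PINs formulation.

```python
# Minimal sketch of a coarse-to-fine positional encoding schedule.
import numpy as np

def progressive_positional_encoding(x, num_bands=8, active_bands=3):
    """x: (N, D) coordinates in [0, 1]. Standard sin/cos encoding at 2^k
    frequencies, with bands beyond `active_bands` masked to zero so higher
    frequencies are introduced progressively during training."""
    feats = [x]
    for k in range(num_bands):
        mask = 1.0 if k < active_bands else 0.0   # hard on/off schedule
        feats.append(mask * np.sin((2.0 ** k) * np.pi * x))
        feats.append(mask * np.cos((2.0 ** k) * np.pi * x))
    return np.concatenate(feats, axis=-1)

pts = np.random.rand(4, 3)
coarse = progressive_positional_encoding(pts, active_bands=2)  # early training
fine = progressive_positional_encoding(pts, active_bands=8)    # late training
print(coarse.shape, fine.shape)  # both (4, 3 + 2 * 8 * 3) = (4, 51)
```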
arXiv Detail & Related papers (2022-02-09T20:33:37Z)
- Learning Neural Light Fields with Ray-Space Embedding Networks [51.88457861982689]
We propose a novel neural light field representation that is compact and directly predicts integrated radiance along rays.
Our method achieves state-of-the-art quality on dense forward-facing datasets such as the Stanford Light Field dataset.
arXiv Detail & Related papers (2021-12-02T18:59:51Z)
- Learning light field synthesis with Multi-Plane Images: scene encoding as a recurrent segmentation task [30.058283056074426]
This paper addresses the problem of view synthesis from large baseline light fields by turning a sparse set of input views into a Multi-plane Image (MPI).
Because available datasets are scarce, we propose a lightweight network that does not require extensive training.
Our model does not learn to estimate RGB layers but only encodes the scene geometry within MPI alpha layers, which comes down to a segmentation task.
arXiv Detail & Related papers (2020-02-12T14:35:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.