Continuous Levels of Detail for Light Field Networks
- URL: http://arxiv.org/abs/2309.11591v1
- Date: Wed, 20 Sep 2023 19:02:20 GMT
- Title: Continuous Levels of Detail for Light Field Networks
- Authors: David Li, Brandon Y. Feng, Amitabh Varshney
- Abstract summary: We propose a method to encode light field networks with continuous LODs, allowing for finely tuned adaptations to rendering conditions.
Our training procedure uses summed-area table filtering, allowing efficient and continuous filtering at various LODs.
We also use saliency-based importance sampling, which enables our light field networks to distribute their capacity, particularly limited at lower LODs, towards the details viewers are most likely to focus on.
- Score: 6.94680554206111
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, several approaches have emerged for generating neural
representations with multiple levels of detail (LODs). LODs can improve the
rendering by using lower resolutions and smaller model sizes when appropriate.
However, existing methods generally focus on a few discrete LODs which suffer
from aliasing and flicker artifacts as details are changed and limit their
granularity for adapting to resource limitations. In this paper, we propose a
method to encode light field networks with continuous LODs, allowing for finely
tuned adaptations to rendering conditions. Our training procedure uses
summed-area table filtering allowing efficient and continuous filtering at
various LODs. Furthermore, we use saliency-based importance sampling which
enables our light field networks to distribute their capacity, particularly
limited at lower LODs, towards representing the details viewers are most likely
to focus on. Incorporating continuous LODs into neural representations enables
progressive streaming of neural representations, decreasing the latency and
resource utilization for rendering.
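The abstract describes two training-time components: summed-area table (SAT) filtering, which yields box-filtered supervision at continuously varying filter widths, and saliency-based importance sampling of training pixels. The Python/NumPy sketch below is our own illustration of these two ideas under stated assumptions (the function names, the fractional-radius blending, and the uniform-mixing term are illustrative choices, not taken from the paper); it is not the authors' implementation.

```python
# Illustrative sketch only (assumed names and details, not the authors' code).
import numpy as np


def summed_area_table(img):
    """Cumulative sums over both spatial axes; img has shape (H, W, C)."""
    return img.cumsum(axis=0).cumsum(axis=1)


def box_filter_at(sat, y, x, radius):
    """Mean of the image over a (2r+1)^2 window centred at (y, x).

    `radius` may be fractional: here we simply blend the two nearest integer
    radii, one straightforward way to get a continuously varying filter width.
    """
    H, W, _ = sat.shape

    def box_mean(r):
        # Four SAT lookups give the sum over the clipped window in O(1).
        y0, y1 = max(y - r - 1, -1), min(y + r, H - 1)
        x0, x1 = max(x - r - 1, -1), min(x + r, W - 1)
        total = sat[y1, x1].astype(np.float64)
        if y0 >= 0:
            total -= sat[y0, x1]
        if x0 >= 0:
            total -= sat[y1, x0]
        if y0 >= 0 and x0 >= 0:
            total += sat[y0, x0]
        return total / ((y1 - y0) * (x1 - x0))

    r_lo, r_hi = int(np.floor(radius)), int(np.ceil(radius))
    if r_lo == r_hi:
        return box_mean(r_lo)
    t = radius - r_lo
    return (1.0 - t) * box_mean(r_lo) + t * box_mean(r_hi)


def sample_salient_pixels(saliency, n, uniform_mix=0.1, rng=None):
    """Draw n pixel coordinates, favouring high-saliency pixels.

    A small uniform component keeps every pixel reachable so low-saliency
    regions are not ignored entirely.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    p = saliency.ravel().astype(np.float64)
    p = (1.0 - uniform_mix) * p / p.sum() + uniform_mix / p.size
    p /= p.sum()
    idx = rng.choice(p.size, size=n, p=p)
    return np.stack(np.unravel_index(idx, saliency.shape), axis=-1)  # (n, 2) as (y, x)
```

Under this reading, supervision filtered with a larger radius would correspond to a lower LOD, while the saliency-weighted sampler concentrates training rays on the regions viewers are most likely to attend to.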
Related papers
- DINTR: Tracking via Diffusion-based Interpolation [12.130669304428565]
This work proposes a novel diffusion-based methodology to formulate the tracking task.
Our Diffusion-based INterpolation TrackeR (DINTR) presents a promising new paradigm and achieves superior results on seven benchmarks across five indicator representations.
arXiv Detail & Related papers (2024-10-14T00:41:58Z)
- Informative Rays Selection for Few-Shot Neural Radiance Fields [0.3599866690398789]
KeyNeRF is a simple yet effective method for training NeRF in few-shot scenarios by focusing on key informative rays.
Our approach performs favorably against state-of-the-art methods, while requiring minimal changes to existing NeRFs.
arXiv Detail & Related papers (2023-12-29T11:08:19Z)
- LDM-ISP: Enhancing Neural ISP for Low Light with Latent Diffusion Models [54.93010869546011]
We propose to leverage the pre-trained latent diffusion model to perform the neural ISP for enhancing extremely low-light images.
Specifically, to tailor the pre-trained latent diffusion model to operate on the RAW domain, we train a set of lightweight taming modules.
We observe different roles of UNet denoising and decoder reconstruction in the latent diffusion model, which inspires us to decompose the low-light image enhancement task into latent-space low-frequency content generation and decoding-phase high-frequency detail maintenance.
arXiv Detail & Related papers (2023-12-02T04:31:51Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Adaptive Multi-NeRF: Exploit Efficient Parallelism in Adaptive Multiple Scale Neural Radiance Field Rendering [3.8200916793910973]
Recent advances in Neural Radiance Fields (NeRF) have demonstrated significant potential for representing 3D scene appearances as implicit neural networks.
However, the lengthy training and rendering process hinders the widespread adoption of this promising technique for real-time rendering applications.
We present an effective adaptive multi-NeRF method designed to accelerate the neural rendering process for large scenes.
arXiv Detail & Related papers (2023-10-03T08:34:49Z)
- Grid-guided Neural Radiance Fields for Large Urban Scenes [146.06368329445857]
Recent approaches propose to geographically divide the scene and adopt multiple sub-NeRFs to model each region individually.
An alternative solution is to use a feature grid representation, which is computationally efficient and can naturally scale to a large scene.
We present a new framework that realizes high-fidelity rendering on large urban scenes while being computationally efficient.
arXiv Detail & Related papers (2023-03-24T13:56:45Z)
- Deep Dynamic Scene Deblurring from Optical Flow [53.625999196063574]
Deblurring can provide visually more pleasant pictures and make photography more convenient.
It is difficult to model the non-uniform blur mathematically.
We develop a convolutional neural network (CNN) to restore the sharp images from the deblurred features.
arXiv Detail & Related papers (2023-01-18T06:37:21Z)
- Progressive Multi-scale Light Field Networks [14.050802766699084]
We present a progressive multi-scale light field network that encodes a light field with multiple levels of detail.
Lower levels of detail are encoded using fewer neural network weights enabling progressive streaming and reducing rendering time.
arXiv Detail & Related papers (2022-08-13T19:02:34Z)
- InfoNeRF: Ray Entropy Minimization for Few-Shot Neural Volume Rendering [55.70938412352287]
We present an information-theoretic regularization technique for few-shot novel view synthesis based on neural implicit representation.
The proposed approach minimizes potential reconstruction inconsistency that happens due to insufficient viewpoints.
We achieve consistently improved performance compared to existing neural view synthesis methods by large margins on multiple standard benchmarks.
arXiv Detail & Related papers (2021-12-31T11:56:01Z)
- Interpretable Detail-Fidelity Attention Network for Single Image Super-Resolution [89.1947690981471]
We propose a purposeful and interpretable detail-fidelity attention network to progressively process smooth regions and details in a divide-and-conquer manner.
In particular, we propose Hessian filtering for interpretable feature representation, which is well suited to detail inference.
Experiments demonstrate that the proposed methods achieve superior performances over the state-of-the-art methods.
arXiv Detail & Related papers (2020-09-28T08:31:23Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently-used VGG feature matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.