Real-time Neural Radiance Caching for Path Tracing
- URL: http://arxiv.org/abs/2106.12372v2
- Date: Fri, 25 Jun 2021 08:09:48 GMT
- Title: Real-time Neural Radiance Caching for Path Tracing
- Authors: Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller
- Abstract summary: We present a real-time neural radiance caching method for path-traced global illumination.
Our system is designed to handle fully dynamic scenes, and makes no assumptions about lighting, geometry, or materials.
We demonstrate significant noise reduction at the cost of little induced bias, and report state-of-the-art, real-time performance on a number of challenging scenarios.
- Score: 67.46991813306708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a real-time neural radiance caching method for path-traced global
illumination. Our system is designed to handle fully dynamic scenes, and makes
no assumptions about lighting, geometry, or materials. The data-driven
nature of our approach sidesteps many difficulties of caching algorithms, such
as locating, interpolating, and updating cache points. Since pretraining neural
networks to handle novel, dynamic scenes is a formidable generalization
challenge, we do away with pretraining and instead achieve generalization via
adaptation, i.e. we opt for training the radiance cache while rendering. We
employ self-training to provide low-noise training targets and simulate
infinite-bounce transport by merely iterating few-bounce training updates. The
updates and cache queries incur a mild overhead -- about 2.6 ms at full HD
resolution -- thanks to a streaming implementation of the neural network that
fully exploits modern hardware. We demonstrate significant noise reduction at
the cost of little induced bias, and report state-of-the-art, real-time
performance on a number of challenging scenarios.
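For intuition, the training scheme described in the abstract can be sketched in a few lines. The following is a minimal PyTorch sketch, assuming a toy query layout (position, direction, and surface features flattened into a 9-D vector), a plain MLP, and an L2 loss; the paper's actual system is a fused, streaming GPU implementation, not stock PyTorch.

```python
# Minimal sketch of self-trained neural radiance caching. Assumptions:
# toy 9-D query vectors, a plain MLP, and an L2 loss; the paper uses a
# fused, streaming GPU network rather than stock PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RadianceCache(nn.Module):
    def __init__(self, in_dim=9, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB radiance
        )

    def forward(self, q):
        return self.net(q)

cache = RadianceCache()
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def training_update(query, few_bounce_radiance, terminal_query, throughput):
    # Self-training: the target is the radiance gathered along a short
    # path plus the cache's own (detached) prediction at the vertex where
    # the path was truncated, weighted by the path throughput.
    with torch.no_grad():
        target = few_bounce_radiance + throughput * cache(terminal_query)
    loss = F.mse_loss(cache(query), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with random stand-in data for one frame's training batch.
B = 4096
training_update(torch.rand(B, 9), torch.rand(B, 3),
                torch.rand(B, 9), torch.rand(B, 3))
```

Iterating such few-bounce updates frame after frame lets multi-bounce illumination propagate into the cache, which is how the method approximates infinite-bounce transport without ever tracing long training paths.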
Related papers
- Flash Cache: Reducing Bias in Radiance Cache Based Inverse Rendering [62.92985004295714]
We present a method that avoids approximations that introduce bias into the renderings and, more importantly, the gradients used for optimization.
We show that by removing these biases our approach improves the generality of radiance cache based inverse rendering, as well as increasing quality in the presence of challenging light transport effects such as specular reflections.
arXiv Detail & Related papers (2024-09-09T17:59:57Z)
- D-NPC: Dynamic Neural Point Clouds for Non-Rigid View Synthesis from Monocular Video [53.83936023443193]
This paper contributes a new method for dynamic novel view synthesis from monocular video, such as smartphone captures.
Our approach represents the scene as a dynamic neural point cloud, an implicit time-conditioned point cloud that encodes local geometry and appearance in separate hash-encoded neural feature grids.
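As a rough illustration of the hash-encoded feature grids mentioned above, here is a toy single-resolution sketch in PyTorch; the table size, hashing primes, and nearest-neighbor lookup are illustrative assumptions, and D-NPC's multiresolution grids and time conditioning are not reproduced.

```python
# Toy single-level hash-encoded feature grid (nearest-neighbor lookup).
# Table size, primes, and resolution are illustrative assumptions.
import torch
import torch.nn as nn

class HashGrid(nn.Module):
    def __init__(self, table_size=2**16, feat_dim=4, resolution=128):
        super().__init__()
        self.table = nn.Parameter(torch.empty(table_size, feat_dim))
        nn.init.uniform_(self.table, -1e-4, 1e-4)
        self.res = resolution
        # Large primes for spatial hashing, Instant-NGP style.
        self.register_buffer("primes", torch.tensor([1, 2654435761, 805459861]))

    def forward(self, xyz):               # xyz in [0, 1)^3, shape (N, 3)
        cell = (xyz * self.res).long()    # integer grid cell per point
        h = (cell * self.primes).sum(-1) % self.table.shape[0]
        return self.table[h]              # (N, feat_dim) learned features

# Separate grids for local geometry and appearance, as in the summary.
geometry_grid = HashGrid(feat_dim=4)
appearance_grid = HashGrid(feat_dim=8)
points = torch.rand(10, 3)
feats = torch.cat([geometry_grid(points), appearance_grid(points)], dim=-1)
```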
arXiv Detail & Related papers (2024-06-14T14:35:44Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
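The "reasoning routes" above refer to input-dependent computation. One common realization is early exiting, sketched below; the halting heads and threshold are hypothetical stand-ins for illustration, not DyTrack's actual routing mechanism.

```python
# Toy early-exit transformer: easy inputs leave the network after fewer
# layers. Halting heads and the threshold are hypothetical stand-ins.
import torch
import torch.nn as nn

class EarlyExitEncoder(nn.Module):
    def __init__(self, dim=256, depth=6, halt_threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, nhead=8, batch_first=True)
            for _ in range(depth))
        self.halt_heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(depth))
        self.threshold = halt_threshold

    def forward(self, tokens):  # tokens: (batch, seq, dim)
        for block, head in zip(self.blocks, self.halt_heads):
            tokens = block(tokens)
            # Confidence from the mean token; confident inputs exit early,
            # spending less of the available computational budget.
            if torch.sigmoid(head(tokens.mean(dim=1))).min() > self.threshold:
                break
        return tokens

out = EarlyExitEncoder()(torch.rand(2, 16, 256))
```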
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
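A minimal sketch of how such a three-network pipeline can be wired; the convolutional architectures and the path-tracer stub are placeholders, and the reinforcement-learning training loop is omitted.

```python
# Placeholder wiring of the three networks named above: a latent encoder,
# a sampling importance network, and a denoiser. Architectures are toys.
import torch
import torch.nn as nn

conv = lambda i, o: nn.Sequential(nn.Conv2d(i, o, 3, padding=1), nn.ReLU())

encoder = nn.Sequential(conv(3, 32), conv(32, 32))      # latent scene state
importance = nn.Sequential(conv(32, 32), nn.Conv2d(32, 1, 1), nn.Softplus())
denoiser = nn.Sequential(conv(3 + 32, 64), conv(64, 64), nn.Conv2d(64, 3, 1))

def render_frame(prev_frame, path_tracer):
    latent = encoder(prev_frame)      # stateful features from frame history
    sample_map = importance(latent)   # per-pixel sampling budget
    noisy = path_tracer(sample_map)   # renderer spends the budget adaptively
    return denoiser(torch.cat([noisy, latent], dim=1))

# Toy usage with a random stand-in for the path tracer.
frame = render_frame(torch.rand(1, 3, 64, 64),
                     lambda m: torch.rand(1, 3, 64, 64))
```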
arXiv Detail & Related papers (2023-10-05T12:39:27Z)
- Efficient Meta-Tuning for Content-aware Neural Video Delivery [40.3731358963689]
We present Efficient Meta-Tuning (EMT) to reduce the computational cost of adapting a model to each input video.
EMT adapts a meta-learned model to the first chunk of the input video.
We propose a novel sampling strategy to extract the most challenging patches from video frames.
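The patch selection can be illustrated with a short sketch: rank non-overlapping patches by the current model's reconstruction error and keep the hardest ones for fine-tuning. The patch size, selection count, and stand-in upscaler below are illustrative assumptions, not the paper's exact strategy.

```python
# Toy "hardest patches" selection: rank non-overlapping patches by the
# model's reconstruction error. Sizes and the model are stand-ins.
import torch
import torch.nn.functional as F

def hardest_patches(model, lr_frame, hr_frame, patch=16, k=4):
    err = F.mse_loss(model(lr_frame), hr_frame,
                     reduction="none").mean(1, keepdim=True)
    per_patch = F.avg_pool2d(err, patch)   # mean error per patch
    flat = per_patch.flatten()
    # Indices of the k hardest patches in the (H/patch, W/patch) grid.
    return flat.topk(min(k, flat.numel())).indices

# Toy usage: a nearest-neighbor 2x upscaler stands in for the model.
idx = hardest_patches(lambda x: F.interpolate(x, scale_factor=2),
                      torch.rand(1, 3, 32, 32), torch.rand(1, 3, 64, 64))
```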
arXiv Detail & Related papers (2022-07-20T06:47:10Z)
- UNeRF: Time and Memory Conscious U-Shaped Network for Training Neural Radiance Fields [16.826691448973367]
Neural Radiance Fields (NeRFs) increase reconstruction detail for novel view synthesis and scene reconstruction.
However, the increased resolution and model-free nature of such neural fields come at the cost of high training times and excessive memory requirements.
We propose a method to exploit the redundancy of NeRF's sample-based computations by partially sharing evaluations across neighboring sample points.
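A toy version of sharing evaluations across neighbors: run the expensive trunk on every second sample along a ray and interpolate hidden features for the skipped ones. This is a simplification for intuition only, not UNeRF's actual U-shaped architecture.

```python
# Toy shared evaluation along a ray: trunk on every second sample, hidden
# features of skipped samples interpolated from neighbors. Illustrative only.
import torch
import torch.nn as nn

trunk = nn.Sequential(nn.Linear(3, 128), nn.ReLU(),
                      nn.Linear(128, 128), nn.ReLU())
head = nn.Linear(128, 4)                 # density + RGB

def shared_eval(ray_samples):            # (N, 3), N even, ordered along ray
    coarse = trunk(ray_samples[::2])     # expensive trunk on half the samples
    mid = 0.5 * (coarse[:-1] + coarse[1:])  # neighbor average for skipped ones
    feats = torch.empty(ray_samples.shape[0], coarse.shape[1])
    feats[::2] = coarse
    feats[1::2][: mid.shape[0]] = mid
    feats[-1] = coarse[-1]               # final odd sample reuses the endpoint
    return head(feats)

out = shared_eval(torch.rand(64, 3))     # (64, 4) outputs at roughly half cost
```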
arXiv Detail & Related papers (2022-06-23T19:57:07Z)
- Physics Informed Neural Fields for Smoke Reconstruction with Sparse Data [73.8970871148949]
High-fidelity reconstruction of fluids from sparse multiview RGB videos remains a formidable challenge.
Existing solutions either assume knowledge of obstacles and lighting, or only focus on simple fluid scenes without obstacles or complex lighting.
We present the first method to reconstruct dynamic fluids by leveraging the governing physics (i.e., the Navier-Stokes equations) in an end-to-end optimization.
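One ingredient of such an end-to-end, physics-informed optimization is a PDE residual penalty computed with automatic differentiation. The sketch below penalizes violations of the incompressibility constraint (div u = 0) of the Navier-Stokes equations at sampled space-time points; the full method combines several such terms with rendering losses.

```python
# Toy physics-informed residual: penalize div u != 0 for a neural velocity
# field at sampled space-time points, via autograd. A single illustrative
# term, not the paper's full loss.
import torch
import torch.nn as nn

velocity = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 3))

def divergence_loss(xyzt):                # (N, 4) space-time samples
    xyzt = xyzt.requires_grad_(True)
    u = velocity(xyzt)                    # (N, 3) predicted velocity
    div = 0.0
    for i in range(3):                    # du_x/dx + du_y/dy + du_z/dz
        g = torch.autograd.grad(u[:, i].sum(), xyzt, create_graph=True)[0]
        div = div + g[:, i]
    return (div ** 2).mean()

loss = divergence_loss(torch.rand(256, 4))
loss.backward()                           # gradients flow to the network
```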
arXiv Detail & Related papers (2022-06-14T03:38:08Z)
- Fast Dynamic Radiance Fields with Time-Aware Neural Voxels [106.69049089979433]
We propose a radiance field framework, named TiNeuVox, that represents scenes with time-aware voxel features.
Our framework accelerates the optimization of dynamic radiance fields while maintaining high rendering quality.
Our TiNeuVox completes training in only 8 minutes with an 8 MB storage cost, while showing similar or even better rendering quality than previous dynamic NeRF methods.
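A toy sketch of querying time-aware voxel features: interpolate a learned 3D feature grid at a sample position and combine the result with a time embedding before a small MLP. Grid resolution, embedding, and head sizes are illustrative assumptions, not TiNeuVox's exact design.

```python
# Toy time-aware voxel query: interpolate a learned feature grid, append a
# time embedding, decode with a small MLP. Sizes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

voxels = nn.Parameter(torch.randn(1, 8, 32, 32, 32) * 0.01)  # (1, C, D, H, W)
time_mlp = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 16))
head = nn.Sequential(nn.Linear(8 + 16, 64), nn.ReLU(), nn.Linear(64, 4))

def query(xyz, t):                        # xyz in [-1, 1]^3: (N, 3); t: (N, 1)
    grid = xyz.view(1, -1, 1, 1, 3)       # grid_sample wants (1, D', H', W', 3)
    feats = F.grid_sample(voxels, grid, align_corners=True)
    feats = feats.view(8, -1).t()         # (N, 8) interpolated voxel features
    return head(torch.cat([feats, time_mlp(t)], dim=-1))  # density + RGB

out = query(torch.rand(100, 3) * 2 - 1, torch.rand(100, 1))
```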
arXiv Detail & Related papers (2022-05-30T17:47:31Z)
- Active Exploration for Neural Global Illumination of Variable Scenes [6.591705508311505]
We introduce a novel Active Exploration method using Markov Chain Monte Carlo.
We apply our approach on a neural generator that learns to render novel scene instances.
Our method allows interactive rendering of hard light transport paths.
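A generic Metropolis-style sketch of such exploration: propose perturbed scene instances and preferentially accept those the current network renders poorly, so training effort concentrates where it helps most. The error function, proposal scale, and acceptance rule are illustrative assumptions, not the paper's exact scheme.

```python
# Toy Metropolis exploration over scene parameters: target density rises
# with rendering error, so hard scene instances are visited more often.
import torch

def mcmc_explore(scene, render_error, steps=100, sigma=0.05):
    current, current_err = scene.clone(), render_error(scene)
    for _ in range(steps):
        proposal = current + sigma * torch.randn_like(current)
        err = render_error(proposal)
        # Accept with probability min(1, exp(err - current_err)).
        if torch.rand(()) < torch.exp(err - current_err).clamp(max=1.0):
            current, current_err = proposal, err
        yield current

# Toy usage: "scenes" are parameter vectors; the error is a stand-in.
for s in mcmc_explore(torch.zeros(4), lambda p: (p - 1).abs().sum(), steps=3):
    pass
```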
arXiv Detail & Related papers (2022-03-15T21:45:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.