Active Exploration for Neural Global Illumination of Variable Scenes
- URL: http://arxiv.org/abs/2203.08272v1
- Date: Tue, 15 Mar 2022 21:45:51 GMT
- Title: Active Exploration for Neural Global Illumination of Variable Scenes
- Authors: Stavros Diolatzis and Julien Philip and George Drettakis
- Abstract summary: We introduce a novel Active Exploration method using Markov Chain Monte Carlo.
We apply our approach on a neural generator that learns to render novel scene instances.
Our method allows interactive rendering of hard light transport paths.
- Score: 6.591705508311505
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neural rendering algorithms introduce a fundamentally new approach for
photorealistic rendering, typically by learning a neural representation of
illumination on large numbers of ground truth images. When training for a given
variable scene, i.e., changing objects, materials, lights and viewpoint, the
space D of possible training data instances quickly becomes unmanageable as the
dimensions of variable parameters increase. We introduce a novel Active
Exploration method using Markov Chain Monte Carlo, which explores D, generating
samples (i.e., ground truth renderings) that best help training and interleaving
training with on-the-fly sample data generation. We introduce a self-tuning
sample reuse strategy to minimize the expensive step of rendering training
samples. We apply our approach on a neural generator that learns to render
novel scene instances given an explicit parameterization of the scene
configuration. Our results show that Active Exploration trains our network much
more efficiently than uniformly sampling, and together with our resolution
enhancement approach, achieves better quality than uniform sampling at
convergence. Our method allows interactive rendering of hard light transport
paths (e.g., complex caustics) -- that require very high samples counts to be
captured -- and provides dynamic scene navigation and manipulation, after
training for 5-18 hours depending on required quality and variations.
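The exploration loop described in the abstract can be sketched as a Metropolis-Hastings walk over the scene-parameter space, with the network's current training loss playing the role of an unnormalized target density so that poorly learned configurations are sampled more often. The function names, the unit-hypercube parameterization, and the loss-as-density choice below are illustrative assumptions, not the paper's exact formulation.

```python
import random

def active_exploration(loss_fn, dim, n_steps=200, step=0.1, seed=0):
    """Metropolis-Hastings walk over scene parameters in [0, 1]^dim.

    loss_fn scores a scene configuration by the current training error;
    treating it as an unnormalized density biases sampling toward
    configurations the network still renders poorly (illustrative).
    """
    rng = random.Random(seed)
    x = [rng.random() for _ in range(dim)]
    fx = max(loss_fn(x), 1e-12)
    samples = []
    for _ in range(n_steps):
        # Gaussian proposal, clamped to the unit hypercube of parameters
        y = [min(1.0, max(0.0, xi + rng.gauss(0.0, step))) for xi in x]
        fy = max(loss_fn(y), 1e-12)
        # accept with probability min(1, fy / fx)
        if rng.random() < fy / fx:
            x, fx = y, fy
        samples.append(list(x))  # each sample would be rendered as ground truth
    return samples
```

In the actual method, each sampled configuration would be rendered as ground truth and fed to the generator, with training interleaved between exploration steps.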
Related papers
- Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling [70.34875558830241]
We present a way of learning a spatio-temporal (4D) embedding, based on semantic gears, to allow for stratified modeling of the dynamic regions of the scene.
At the same time, almost for free, our tracking approach enables free-viewpoint tracking of objects of interest - a functionality not yet achieved by existing NeRF-based methods.
arXiv Detail & Related papers (2024-06-06T03:37:39Z) - Fast LiDAR Upsampling using Conditional Diffusion Models [1.3709133749179265]
Existing approaches have shown the possibilities for using diffusion models to generate refined LiDAR data with high fidelity.
We introduce a novel approach based on conditional diffusion models for fast and high-quality sparse-to-dense upsampling of 3D scene point clouds.
Our method employs denoising diffusion probabilistic models trained with conditional inpainting masks, which have been shown to give high performance on image completion tasks.
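A mask-conditioned reverse-diffusion step of the kind this summary describes can be sketched as follows. Re-noising the known pixels and pasting them over the model's prediction mirrors the RePaint-style inpainting idea; `denoise_fn`, the noise schedule, and the array shapes are placeholder assumptions, not the paper's implementation.

```python
import numpy as np

def inpaint_step(x_t, known, mask, noise_level, denoise_fn, rng):
    """One reverse-diffusion step conditioned on an inpainting mask.

    known: sparse LiDAR returns (range-image pixels we trust);
    mask:  1 where a measurement exists, 0 where the model must fill in.
    Known pixels are re-noised to the current level and pasted over the
    model's prediction, keeping generation consistent with the input.
    """
    x_pred = denoise_fn(x_t)  # model's denoised estimate of the full image
    noisy_known = known + noise_level * rng.standard_normal(known.shape)
    return mask * noisy_known + (1.0 - mask) * x_pred
```

Iterating this step from pure noise down to noise level zero yields a dense range image that agrees with the sparse measurements at every masked pixel.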
arXiv Detail & Related papers (2024-05-08T08:38:28Z) - Multi-Level Neural Scene Graphs for Dynamic Urban Environments [64.26401304233843]
We present a novel, decomposable radiance field approach for dynamic urban environments.
We propose a multi-level neural scene graph representation that scales to thousands of images from dozens of sequences with hundreds of fast-moving objects.
arXiv Detail & Related papers (2024-03-29T21:52:01Z) - DNS SLAM: Dense Neural Semantic-Informed SLAM [92.39687553022605]
DNS SLAM is a novel neural RGB-D semantic SLAM approach featuring a hybrid representation.
Our method integrates multi-view geometry constraints with image-based feature extraction to improve appearance details.
Our method achieves state-of-the-art tracking performance on both synthetic and real-world data.
arXiv Detail & Related papers (2023-11-30T21:34:44Z) - RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time
Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis, but it suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z) - Fast Non-Rigid Radiance Fields from Monocularized Data [66.74229489512683]
This paper proposes a new method for full 360° inward-facing novel view synthesis of non-rigidly deforming scenes.
At the core of our method are 1) An efficient deformation module that decouples the processing of spatial and temporal information for accelerated training and inference; and 2) A static module representing the canonical scene as a fast hash-encoded neural radiance field.
In both cases, our method is significantly faster than previous methods, converging in less than 7 minutes and achieving real-time framerates at 1K resolution, while obtaining a higher visual accuracy for generated novel views.
arXiv Detail & Related papers (2022-12-02T18:51:10Z) - Neural-Sim: Learning to Generate Training Data with NeRF [31.81496344354997]
We present the first fully differentiable synthetic data pipeline that uses Neural Radiance Fields (NeRFs) in a closed-loop with a target application's loss function.
Our approach generates data on-demand, with no human labor, to maximize accuracy for a target task.
arXiv Detail & Related papers (2022-07-22T22:48:33Z) - AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance
Fields [8.214695794896127]
Novel view synthesis has recently been revolutionized by learning neural radiance fields directly from sparse observations.
Rendering images with this new paradigm is slow, however, because an accurate quadrature of the volume rendering equation requires a large number of samples for each ray.
We propose a novel dual-network architecture that takes a different direction, learning how best to reduce the number of required sample points.
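The idea of learning where samples matter can be illustrated with a simple budget allocator: a sampling network's per-ray importance scores decide how many shading samples each ray receives. The proportional scheme below is a hedged sketch of that idea, not AdaNeRF's actual architecture.

```python
import numpy as np

def allocate_samples(importance, budget):
    """Split a fixed per-batch sample budget across rays in proportion
    to a predicted importance score (higher score -> more samples)."""
    w = np.asarray(importance, dtype=float)
    w = w / w.sum()
    counts = np.floor(w * budget).astype(int)
    # hand leftover samples to the most important rays first
    leftover = budget - int(counts.sum())
    for i in np.argsort(-w)[:leftover]:
        counts[i] += 1
    return counts
```

Rays predicted to be empty get few or no samples, so the shading network evaluates the volume rendering quadrature only where it contributes.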
arXiv Detail & Related papers (2022-07-21T05:59:13Z) - Neural Point Light Fields [80.98651520818785]
We introduce Neural Point Light Fields that represent scenes implicitly with a light field living on a sparse point cloud.
These point light fields are a function of the ray direction and the local point feature neighborhood, allowing us to interpolate the light field conditioned on training images without dense object coverage and parallax.
arXiv Detail & Related papers (2021-12-02T18:20:10Z) - Photon-Driven Neural Path Guiding [102.12596782286607]
We present a novel neural path guiding approach that can reconstruct high-quality sampling distributions for path guiding from a sparse set of samples.
We leverage photons traced from light sources as the input for sampling density reconstruction, which is highly effective for challenging scenes with strong global illumination.
Our approach achieves significantly better rendering results of testing scenes than previous state-of-the-art path guiding methods.
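Reconstructing a sampling distribution from sparse photons can be sketched with a kernel density estimate: the density of traced photons near a query point indicates where light concentrates, and can serve as an unnormalized guiding density. The Gaussian KDE below stands in for the paper's learned neural reconstruction, and the bandwidth is an arbitrary assumption.

```python
import numpy as np

def photon_density(photons, queries, bandwidth=0.1):
    """Gaussian KDE over traced photon positions: the estimate at a
    query point can serve as an (unnormalized) guiding density for
    importance-sampling paths toward strong illumination."""
    # pairwise squared distances between queries and photons
    d2 = ((queries[:, None, :] - photons[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2.0 * bandwidth ** 2)).mean(axis=1)
```

A path guider would then draw next-event or direction samples proportionally to this density rather than uniformly.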
arXiv Detail & Related papers (2020-10-05T04:54:01Z) - Stillleben: Realistic Scene Synthesis for Deep Learning in Robotics [33.30312206728974]
We describe a synthesis pipeline capable of producing training data for cluttered scene perception tasks.
Our approach arranges object meshes in physically realistic, dense scenes using physics simulation.
Our pipeline can be run online during training of a deep neural network.
arXiv Detail & Related papers (2020-05-12T10:11:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.