Two-Stage Monte Carlo Denoising with Adaptive Sampling and Kernel Pool
- URL: http://arxiv.org/abs/2103.16115v1
- Date: Tue, 30 Mar 2021 07:05:55 GMT
- Title: Two-Stage Monte Carlo Denoising with Adaptive Sampling and Kernel Pool
- Authors: Tiange Xiang, Hongliang Yuan, Haozhi Huang, Yujin Shi
- Abstract summary: We tackle the problems in Monte Carlo rendering by proposing a two-stage denoiser based on an adaptive sampling strategy.
In the first stage, concurrently with adjusting samples per pixel (spp) on the fly, we reuse the computations to generate extra denoising kernels that are applied to the adaptively rendered image.
In the second stage, we design position-aware pooling and semantic alignment operators to improve spatial-temporal stability.
- Score: 4.194950860992213
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A Monte Carlo path tracer renders noisy image sequences at low sample counts.
Although great progress has been made on denoising such sequences, existing
methods still suffer from spatial and temporal artifacts. In this paper, we
tackle the problems in Monte Carlo rendering by proposing a two-stage denoiser
based on an adaptive sampling strategy. In the first stage, concurrently with
adjusting samples per pixel (spp) on the fly, we reuse the computations to
generate extra denoising kernels that are applied to the adaptively rendered image.
Rather than directly predicting pixel-wise kernels, we reduce the overhead by
interpolating such kernels from a public kernel pool, which can be dynamically
updated to fit the input signals. In the second stage, we design
position-aware pooling and semantic alignment operators to improve
spatial-temporal stability. Our method was first benchmarked on 10 synthesized
scenes rendered with the Mitsuba renderer and then validated on 3 additional
scenes rendered with our self-built RTX-based renderer. Our method outperforms
state-of-the-art counterparts in terms of both numerical error and visual
quality.
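To make the kernel-pool idea concrete, here is a minimal NumPy sketch of the second part of stage one: each pixel's denoising kernel is interpolated as a weighted combination of a small shared pool of kernels and then applied to the rendered image. All names are hypothetical, and in the paper both the pool and the per-pixel interpolation weights are predicted by a network rather than given as inputs.

```python
import numpy as np

def denoise_with_kernel_pool(image, weights, kernel_pool):
    """Apply per-pixel denoising kernels interpolated from a shared pool.

    image:       (H, W) noisy image (single channel for simplicity)
    weights:     (H, W, P) per-pixel interpolation weights over the P pool kernels
    kernel_pool: (P, k, k) pool of k x k denoising kernels
    """
    P, k, _ = kernel_pool.shape
    r = k // 2
    H, W = image.shape
    padded = np.pad(image, r, mode="edge")
    out = np.empty((H, W), dtype=np.float64)
    for y in range(H):
        for x in range(W):
            # Interpolate this pixel's kernel from the pool, then normalize it.
            kern = np.tensordot(weights[y, x], kernel_pool, axes=1)  # (k, k)
            kern /= kern.sum()
            patch = padded[y:y + k, x:x + k]
            out[y, x] = (kern * patch).sum()
    return out
```

The point of the pool is that only P small kernels plus (H, W, P) weights need to be produced, instead of a full k x k kernel per pixel, which is where the claimed overhead saving comes from.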
Related papers
- GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views [67.34073368933814]
We propose a generalizable Gaussian Splatting approach for high-resolution image rendering under a sparse-view camera setting.
We train our Gaussian parameter regression module on human-only data or human-scene data, jointly with a depth estimation module to lift 2D parameter maps to 3D space.
Experiments on several datasets demonstrate that our method outperforms state-of-the-art methods while achieving superior rendering speed.
arXiv Detail & Related papers (2024-11-18T08:18:44Z) - bit2bit: 1-bit quanta video reconstruction via self-supervised photon prediction [57.199618102578576]
We propose bit2bit, a new method for reconstructing high-quality image stacks at the original spatiotemporal resolution from sparse binary quanta image data.
Inspired by recent work on Poisson denoising, we developed an algorithm that creates a dense image sequence from sparse binary photon data.
We present a novel dataset containing a wide range of real SPAD high-speed videos under various challenging imaging conditions.
arXiv Detail & Related papers (2024-10-30T17:30:35Z) - VastGaussian: Vast 3D Gaussians for Large Scene Reconstruction [59.40711222096875]
We present VastGaussian, the first method for high-quality reconstruction and real-time rendering on large scenes based on 3D Gaussian Splatting.
Our approach outperforms existing NeRF-based methods and achieves state-of-the-art results on multiple large scene datasets.
arXiv Detail & Related papers (2024-02-27T11:40:50Z) - GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis [70.24111297192057]
We present a new approach, termed GPS-Gaussian, for synthesizing novel views of a character in a real-time manner.
The proposed method enables 2K-resolution rendering under a sparse-view camera setting.
arXiv Detail & Related papers (2023-12-04T18:59:55Z) - RL-based Stateful Neural Adaptive Sampling and Denoising for Real-Time Path Tracing [1.534667887016089]
Monte Carlo path tracing is a powerful technique for realistic image synthesis but suffers from high levels of noise at low sample counts.
We propose a framework with end-to-end training of a sampling importance network, a latent space encoder network, and a denoiser network.
arXiv Detail & Related papers (2023-10-05T12:39:27Z) - HiFA: High-fidelity Text-to-3D Generation with Advanced Diffusion Guidance [19.252300247300145]
This work proposes holistic sampling and smoothing approaches to achieve high-quality text-to-3D generation.
We compute denoising scores in the text-to-image diffusion model's latent and image spaces.
To generate high-quality renderings in a single-stage optimization, we propose regularization for the variance of z-coordinates along NeRF rays.
arXiv Detail & Related papers (2023-05-30T05:56:58Z) - Event-based Camera Simulation using Monte Carlo Path Tracing with Adaptive Denoising [10.712584582512811]
Event-based video can be viewed as a process of detecting the changes from noisy brightness values.
We extend a denoising method based on a weighted local regression to detect the brightness changes.
arXiv Detail & Related papers (2023-03-05T08:44:01Z) - Shape, Light & Material Decomposition from Images using Monte Carlo Rendering and Denoising [0.7366405857677225]
We show that a more realistic shading model, incorporating ray tracing and Monte Carlo integration, substantially improves decomposition into shape, materials & lighting.
We incorporate multiple importance sampling and denoising in a novel inverse rendering pipeline.
This substantially improves convergence and enables gradient-based optimization at low sample counts.
arXiv Detail & Related papers (2022-06-07T15:19:18Z) - Differentiable Point-Based Radiance Fields for Efficient View Synthesis [57.56579501055479]
We propose a differentiable rendering algorithm for efficient novel view synthesis.
Our method is up to 300x faster than NeRF in both training and inference.
For dynamic scenes, our method trains two orders of magnitude faster than STNeRF and renders at near interactive rate.
arXiv Detail & Related papers (2022-05-28T04:36:13Z) - Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
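The pixel-aggregation idea in the last entry, where each output pixel is a weighted average of pixels sampled at learned offsets, can be sketched as follows. This is a hypothetical NumPy illustration, not the paper's network: in the actual method both the offsets and the weights are predicted per pixel by a CNN.

```python
import numpy as np

def pixel_aggregation(image, offsets, weights):
    """Aggregate each output pixel from K sampled neighbour pixels.

    image:   (H, W) input frame
    offsets: (H, W, K, 2) integer (dy, dx) sampling offsets per pixel
    weights: (H, W, K) raw aggregation scores, softmax-normalized per pixel
    """
    H, W = image.shape
    K = weights.shape[2]
    # Softmax so each pixel's K sample weights sum to one.
    w = np.exp(weights - weights.max(axis=2, keepdims=True))
    w /= w.sum(axis=2, keepdims=True)
    out = np.zeros((H, W), dtype=np.float64)
    for k in range(K):
        # Clamp sampled coordinates to the image bounds.
        ys = np.clip(np.arange(H)[:, None] + offsets[..., k, 0], 0, H - 1)
        xs = np.clip(np.arange(W)[None, :] + offsets[..., k, 1], 0, W - 1)
        out += w[..., k] * image[ys, xs]
    return out
```

Because the offsets are free to reach far from the pixel, such an aggregation can follow large motion between frames, which is how the misalignment issue mentioned above is addressed.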
This list is automatically generated from the titles and abstracts of the papers on this site.