Event-based Camera Simulation using Monte Carlo Path Tracing with
Adaptive Denoising
- URL: http://arxiv.org/abs/2303.02608v2
- Date: Tue, 22 Aug 2023 06:19:48 GMT
- Title: Event-based Camera Simulation using Monte Carlo Path Tracing with
Adaptive Denoising
- Authors: Yuta Tsuji, Tatsuya Yatagawa, Hiroyuki Kubo, Shigeo Morishima
- Abstract summary: Event-based video can be viewed as a process of detecting the changes from noisy brightness values.
We extend a denoising method based on a weighted local regression to detect the brightness changes.
- Score: 10.712584582512811
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents an algorithm to obtain an event-based video from noisy
frames given by physics-based Monte Carlo path tracing over a synthetic 3D
scene. Given the nature of dynamic vision sensor (DVS), rendering event-based
video can be viewed as a process of detecting the changes from noisy brightness
values. We extend a denoising method based on a weighted local regression (WLR)
to detect the brightness changes rather than applying denoising to every pixel.
Specifically, we derive a threshold to determine the likelihood of event
occurrence and reduce the number of times to perform the regression. Our method
is robust to noisy video frames obtained from a few path-traced samples.
Despite its efficiency, our method performs comparably to or even better than
an approach that exhaustively denoises every frame.
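The per-pixel event model underlying the abstract can be sketched with the standard DVS generation rule: a pixel fires an event when its log-brightness drifts from the reference value by more than a contrast threshold. The sketch below is a naive baseline under that standard model, not the paper's method (which instead applies a weighted local regression to the noisy path-traced brightness before thresholding); the threshold value is an illustrative assumption.

```python
import numpy as np

def detect_events(frames, threshold=0.2, eps=1e-6):
    """Naive DVS event simulation over a stack of brightness frames.

    frames: (T, H, W) array of linear brightness values.
    Returns a list of (t, y, x, polarity) tuples.
    """
    # Reference log-brightness per pixel, initialized from the first frame.
    log_ref = np.log(frames[0] + eps)
    events = []
    for t in range(1, len(frames)):
        log_cur = np.log(frames[t] + eps)
        diff = log_cur - log_ref
        # Positive events: brightness rose past the contrast threshold.
        for y, x in zip(*np.where(diff >= threshold)):
            events.append((t, y, x, +1))
        # Negative events: brightness fell past the contrast threshold.
        for y, x in zip(*np.where(diff <= -threshold)):
            events.append((t, y, x, -1))
        # Reset the reference only at pixels that fired.
        fired = np.abs(diff) >= threshold
        log_ref[fired] = log_cur[fired]
    return events
```

On frames rendered with only a few path-traced samples, this raw comparison fires many spurious events from Monte Carlo noise, which is why the paper filters the brightness signal (and derives a likelihood threshold) before deciding whether an event occurred.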
Related papers
- Temporal As a Plugin: Unsupervised Video Denoising with Pre-Trained Image Denoisers [30.965705043127144]
In this paper, we propose a novel unsupervised video denoising framework, named `Temporal As a Plugin' (TAP).
By incorporating temporal modules, our method can harness temporal information across noisy frames, complementing its spatial denoising power.
Compared to other unsupervised video denoising methods, our framework demonstrates superior performance on both sRGB and raw video denoising datasets.
arXiv Detail & Related papers (2024-09-17T15:05:33Z)
- FaVoR: Features via Voxel Rendering for Camera Relocalization [23.7893950095252]
Camera relocalization methods range from dense image alignment to direct camera pose regression from a query image.
We propose a novel approach that leverages a globally sparse yet locally dense 3D representation of 2D features.
By tracking and triangulating landmarks over a sequence of frames, we construct a sparse voxel map optimized to render image patch descriptors observed during tracking.
arXiv Detail & Related papers (2024-09-11T18:58:16Z)
- AdaDiff: Adaptive Step Selection for Fast Diffusion [88.8198344514677]
We introduce AdaDiff, a framework designed to learn instance-specific step usage policies.
AdaDiff is optimized using a policy gradient method to maximize a carefully designed reward function.
Our approach achieves similar results in terms of visual quality compared to the baseline using a fixed 50 denoising steps.
arXiv Detail & Related papers (2023-11-24T11:20:38Z)
- DiffSED: Sound Event Detection with Denoising Diffusion [70.18051526555512]
We reformulate the SED problem by taking a generative learning perspective.
Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process.
During training, our model learns to reverse the noising process by converting noisy latent queries to the ground-truth versions.
arXiv Detail & Related papers (2023-08-14T17:29:41Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for quickly and accurately estimating the noise level in low-light images.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- RViDeformer: Efficient Raw Video Denoising Transformer with a Larger Benchmark Dataset [16.131438855407175]
There is no large dataset with realistic motions for supervised raw video denoising.
We construct a video denoising dataset (named ReCRVD) with 120 groups of noisy-clean videos.
We propose an efficient raw video denoising transformer network (RViDeformer) that explores both short and long-distance correlations.
arXiv Detail & Related papers (2023-05-01T11:06:58Z)
- Event-aided Direct Sparse Odometry [54.602311491827805]
We introduce EDS, a direct monocular visual odometry using events and frames.
Our algorithm leverages the event generation model to track the camera motion in the blind time between frames.
EDS is the first method to perform 6-DOF VO using events and frames with a direct approach.
arXiv Detail & Related papers (2022-04-15T20:40:29Z) - IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Two-Stage Monte Carlo Denoising with Adaptive Sampling and Kernel Pool [4.194950860992213]
We tackle the problems in Monte Carlo rendering by proposing a two-stage denoiser based on the adaptive sampling strategy.
In the first stage, while adjusting samples per pixel (spp) on the fly, we reuse the computations to generate extra denoising kernels applied to the adaptively rendered image.
In the second stage, we design the position-aware pooling and semantic alignment operators to improve spatial-temporal stability.
arXiv Detail & Related papers (2021-03-30T07:05:55Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.