Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras
- URL: http://arxiv.org/abs/2404.01298v2
- Date: Thu, 05 Dec 2024 19:21:33 GMT
- Title: Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras
- Authors: Ruiming Cao, Dekel Galor, Amit Kohli, Jacob L Yates, Laura Waller
- Abstract summary: Event cameras capture changes of log-intensity over time as a stream of 'events'. Fluctuations due to random photon arrival inevitably trigger noise events, even for static scenes. We propose Noise2Image to leverage the illuminance-dependent noise characteristics to recover the static parts of a scene.
- Score: 2.630755581216441
- Abstract: Event cameras, also known as dynamic vision sensors, are an emerging modality for measuring fast dynamics asynchronously. Event cameras capture changes of log-intensity over time as a stream of 'events' and generally cannot measure intensity itself; hence, they are only used for imaging dynamic scenes. However, fluctuations due to random photon arrival inevitably trigger noise events, even for static scenes. While previous efforts have been focused on filtering out these undesirable noise events to improve signal quality, we find that, in the photon-noise regime, these noise events are correlated with the static scene intensity. We analyze the noise event generation and model its relationship to illuminance. Based on this understanding, we propose a method, called Noise2Image, to leverage the illuminance-dependent noise characteristics to recover the static parts of a scene, which are otherwise invisible to event cameras. We experimentally collect a dataset of noise events on static scenes to train and validate Noise2Image. Our results provide a novel approach for capturing static scenes in event cameras, solely from noise events, without additional hardware.
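As an illustrative sketch of the abstract's core claim, the toy model below simulates a single static pixel whose photon counts are Poisson-distributed and counts threshold crossings in log-intensity. This is an assumption-laden simplification for illustration, not the paper's actual noise-event model or the Noise2Image network; the function name and parameters are hypothetical. In the photon-noise regime, relative fluctuations scale roughly as 1/sqrt(illuminance), so brighter static pixels should trigger fewer noise events.

```python
import numpy as np

def noise_event_rate(illuminance, n_steps=20000, threshold=0.25, seed=0):
    """Estimate the noise-event rate of one static event-camera pixel.

    Photon arrivals per integration step are Poisson(illuminance); an
    event fires when log-intensity drifts past +/- threshold relative to
    the reference level set at the last event (illustrative toy model).
    """
    rng = np.random.default_rng(seed)
    counts = rng.poisson(illuminance, size=n_steps)
    log_i = np.log(counts + 1.0)  # +1 avoids log(0) on zero-photon frames
    ref = log_i[0]
    events = 0
    for x in log_i[1:]:
        if abs(x - ref) > threshold:
            events += 1
            ref = x  # reset the reference after each event
    return events / n_steps

# Dimmer static pixels fluctuate more in log-intensity, so they emit
# noise events at a higher rate than bright ones:
rates = {lam: noise_event_rate(lam) for lam in (5, 50, 500)}
```

Because the noise-event rate is a deterministic-enough function of illuminance, inverting this relationship (which Noise2Image does with a learned model) lets the static scene intensity be estimated from noise events alone.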
Related papers
- A Label-Free and Non-Monotonic Metric for Evaluating Denoising in Event Cameras [17.559229117246666]
Event cameras are renowned for their high efficiency, outputting a sparse, asynchronous stream of events.
Denoising is an essential task for event cameras, but evaluating denoising performance is challenging.
We propose the first label-free and non-monotonic evaluation metric, the area of the continuous contrast curve (AOCC)
arXiv Detail & Related papers (2024-06-13T08:12:48Z)
- LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising [19.51468512911655]
Event camera has significant advantages in capturing dynamic scene information while being prone to noise interference.
We construct LED, a new paired real-world event denoising dataset, including 3K sequences with 18K seconds of high-resolution (1200×680) event streams.
We propose DED, a novel and effective denoising framework using homogeneous dual events to generate ground truth that better separates noise from the raw stream.
arXiv Detail & Related papers (2024-05-30T06:02:35Z)
- Spike Stream Denoising via Spike Camera Simulation [64.11994763727631]
We propose a systematic noise model for spike camera based on its unique circuit.
The first benchmark for spike stream denoising is proposed, which includes clean and noisy spike streams.
Experiments show that DnSS has promising performance on the proposed benchmark.
arXiv Detail & Related papers (2023-04-06T14:59:48Z)
- E-MLB: Multilevel Benchmark for Event-Based Camera Denoising [12.698543500397275]
Event cameras are more sensitive to junction leakage current and photocurrent noise because they output differential signals.
We construct a large-scale event denoising dataset (multilevel benchmark for event denoising, E-MLB) for the first time.
We also propose the first nonreference event denoising metric, the event structural ratio (ESR), which measures the structural intensity of given events.
arXiv Detail & Related papers (2023-03-21T16:31:53Z)
- Blind2Sound: Self-Supervised Image Denoising without Residual Noise [5.192255321684027]
Self-supervised blind denoising for Poisson-Gaussian noise remains a challenging task.
We propose Blind2Sound, a simple yet effective approach to overcome residual noise in denoised images.
arXiv Detail & Related papers (2023-03-09T11:21:59Z)
- Noise2NoiseFlow: Realistic Camera Noise Modeling without Clean Images [35.29066692454865]
This paper proposes a framework for training a noise model and a denoiser simultaneously.
It relies on pairs of noisy images rather than noisy/clean paired image data.
The trained denoiser is shown to significantly improve upon both supervised and weakly supervised baseline denoising approaches.
arXiv Detail & Related papers (2022-06-02T15:31:40Z)
- C2N: Practical Generative Noise Modeling for Real-World Denoising [53.96391787869974]
We introduce a Clean-to-Noisy image generation framework, namely C2N, to imitate complex real-world noise without using paired examples.
We construct the noise generator in C2N to match each component of real-world noise characteristics, so that it can accurately express a wide range of noise.
arXiv Detail & Related papers (2022-02-19T05:53:46Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Rethinking Noise Synthesis and Modeling in Raw Denoising [75.55136662685341]
We introduce a new perspective to synthesize noise by directly sampling from the sensor's real noise.
It inherently generates accurate raw image noise for different camera sensors.
arXiv Detail & Related papers (2021-10-10T10:45:24Z)
- Physics-based Noise Modeling for Extreme Low-light Photography [63.65570751728917]
We study the noise statistics in the imaging pipeline of CMOS photosensors.
We formulate a comprehensive noise model that can accurately characterize the real noise structures.
Our noise model can be used to synthesize realistic training data for learning-based low-light denoising algorithms.
arXiv Detail & Related papers (2021-08-04T16:36:29Z)
- Adaptive noise imitation for image denoising [58.21456707617451]
We develop a new adaptive noise imitation (ADANI) algorithm that can synthesize noisy data from naturally noisy images.
To produce realistic noise, a noise generator takes unpaired noisy/clean images as input, where the noisy image is a guide for noise generation.
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
arXiv Detail & Related papers (2020-11-30T02:49:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.