A Label-Free and Non-Monotonic Metric for Evaluating Denoising in Event Cameras
- URL: http://arxiv.org/abs/2406.08909v1
- Date: Thu, 13 Jun 2024 08:12:48 GMT
- Title: A Label-Free and Non-Monotonic Metric for Evaluating Denoising in Event Cameras
- Authors: Chenyang Shi, Shasha Guo, Boyi Wei, Hanxiao Liu, Yibo Zhang, Ningfang Song, Jing Jin
- Abstract summary: Event cameras are renowned for their high efficiency due to outputting a sparse, asynchronous stream of events.
Denoising is an essential task for event cameras, but evaluating denoising performance is challenging.
We propose the first label-free and non-monotonic evaluation metric, the area of the continuous contrast curve (AOCC).
- Score: 17.559229117246666
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are renowned for their high efficiency due to outputting a sparse, asynchronous stream of events. However, they are plagued by noisy events, especially in low light conditions. Denoising is an essential task for event cameras, but evaluating denoising performance is challenging. Label-dependent denoising metrics involve artificially adding noise to clean sequences, complicating evaluations. Moreover, the majority of these metrics are monotonic, which can inflate scores by removing substantial noise and valid events. To overcome these limitations, we propose the first label-free and non-monotonic evaluation metric, the area of the continuous contrast curve (AOCC), which utilizes the area enclosed by event frame contrast curves across different time intervals. This metric is inspired by how events capture the edge contours of scenes or objects with high temporal resolution. An effective denoising method removes noise without eliminating these edge-contour events, thus preserving the contrast of event frames. Consequently, contrast across various time ranges serves as a metric to assess denoising effectiveness. As the time interval lengthens, the curve will initially rise and then fall. The proposed metric is validated through both theoretical and experimental evidence.
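The AOCC idea described above can be illustrated with a minimal sketch: accumulate events into frames over increasingly long time intervals, measure each frame's contrast, and integrate the resulting curve. This is an illustrative approximation only; the standard deviation of the event-count frame is assumed here as the contrast measure, and the function names (`event_frame`, `aocc`) are hypothetical, not from the paper.

```python
import numpy as np

def event_frame(events, t0, dt, shape):
    """Accumulate events with timestamps in [t0, t0 + dt) into a count frame.

    events: iterable of (t, x, y) tuples; shape: (height, width).
    """
    frame = np.zeros(shape, dtype=np.float64)
    for t, x, y in events:
        if t0 <= t < t0 + dt:
            frame[y, x] += 1
    return frame

def aocc(events, shape, intervals, t0=0.0):
    """Approximate the area under the contrast-vs-interval curve.

    Contrast is taken as the standard deviation of the event frame
    (an assumption; the paper's exact contrast measure may differ).
    The area is computed with the trapezoidal rule over the given
    accumulation intervals.
    """
    contrasts = [event_frame(events, t0, dt, shape).std() for dt in intervals]
    area = 0.0
    for i in range(1, len(intervals)):
        area += 0.5 * (contrasts[i] + contrasts[i - 1]) * (intervals[i] - intervals[i - 1])
    return area
```

Under this sketch, a denoiser that removes noise while preserving edge-contour events keeps the frame contrast high across intervals, yielding a larger enclosed area; one that also deletes valid events flattens the curve and shrinks the area, which is what makes the metric non-monotonic.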
Related papers
- LED: A Large-scale Real-world Paired Dataset for Event Camera Denoising [19.51468512911655]
Event camera has significant advantages in capturing dynamic scene information while being prone to noise interference.
We construct a new paired real-world event denoising dataset (LED), including 3K sequences with 18K seconds of high-resolution (1200*680) event streams.
We propose a novel and effective denoising framework (DED) that uses homogeneous dual events to generate the ground truth, better separating noise from the raw events.
arXiv Detail & Related papers (2024-05-30T06:02:35Z)
- Noise2Image: Noise-Enabled Static Scene Recovery for Event Cameras [2.630755581216441]
Event cameras capture changes of intensity over time as a stream of 'events'.
We propose a method, called Noise2Image, to leverage the illuminance-dependent noise characteristics to recover the static parts of a scene.
Our results show that Noise2Image can robustly recover intensity images solely from noise events.
arXiv Detail & Related papers (2024-04-01T17:59:53Z)
- Advancing Unsupervised Low-light Image Enhancement: Noise Estimation, Illumination Interpolation, and Self-Regulation [55.07472635587852]
Low-Light Image Enhancement (LLIE) techniques have made notable advancements in preserving image details and enhancing contrast.
These approaches encounter persistent challenges in efficiently mitigating dynamic noise and accommodating diverse low-light scenarios.
We first propose a method for estimating the noise level in low light images in a quick and accurate way.
We then devise a Learnable Illumination Interpolator (LII) to satisfy general constraints between illumination and input.
arXiv Detail & Related papers (2023-05-17T13:56:48Z)
- Recovering Continuous Scene Dynamics from A Single Blurry Image with Events [58.7185835546638]
An Implicit Video Function (IVF) is learned to represent a single motion blurred image with concurrent events.
A dual attention transformer is proposed to efficiently leverage merits from both modalities.
The proposed network is trained only with the supervision of ground-truth images of limited referenced timestamps.
arXiv Detail & Related papers (2023-04-05T18:44:17Z)
- E-MLB: Multilevel Benchmark for Event-Based Camera Denoising [12.698543500397275]
Event cameras are more sensitive to junction leakage current and photocurrent as they output differential signals.
We construct a large-scale event denoising dataset (multilevel benchmark for event denoising, E-MLB) for the first time.
We also propose the first nonreference event denoising metric, the event structural ratio (ESR), which measures the structural intensity of given events.
arXiv Detail & Related papers (2023-03-21T16:31:53Z)
- IDR: Self-Supervised Image Denoising via Iterative Data Refinement [66.5510583957863]
We present a practical unsupervised image denoising method to achieve state-of-the-art denoising performance.
Our method only requires single noisy images and a noise model, which is easily accessible in practical raw image denoising.
To evaluate raw image denoising performance in real-world applications, we build a high-quality raw image dataset SenseNoise-500 that contains 500 real-life scenes.
arXiv Detail & Related papers (2021-11-29T07:22:53Z)
- Noise2Kernel: Adaptive Self-Supervised Blind Denoising using a Dilated Convolutional Kernel Architecture [3.796436257221662]
We propose a dilated convolutional network that satisfies an invariant property, allowing efficient kernel-based training without random masking.
We also propose an adaptive self-supervision loss that circumvents the zero-mean constraint and is particularly effective in removing salt-and-pepper or hybrid noise.
arXiv Detail & Related papers (2020-12-07T12:13:17Z)
- Adaptive noise imitation for image denoising [58.21456707617451]
We develop a new adaptive noise imitation (ADANI) algorithm that can synthesize noisy data from naturally noisy images.
To produce realistic noise, a noise generator takes unpaired noisy/clean images as input, where the noisy image is a guide for noise generation.
Coupling the noisy data output from ADANI with the corresponding ground-truth, a denoising CNN is then trained in a fully-supervised manner.
arXiv Detail & Related papers (2020-11-30T02:49:36Z)
- Improving Blind Spot Denoising for Microscopy [73.94017852757413]
We present a novel way to improve the quality of self-supervised denoising.
We assume the clean image to be the result of a convolution with a point spread function (PSF) and explicitly include this operation at the end of our neural network.
arXiv Detail & Related papers (2020-08-19T13:06:24Z)
- Learning Model-Blind Temporal Denoisers without Ground Truths [46.778450578529814]
Denoisers trained with synthetic data often fail to cope with the diversity of unknown noises.
Previous image-based methods lead to noise overfitting when directly applied to video denoising.
We propose a general framework for video denoising networks that successfully addresses these challenges.
arXiv Detail & Related papers (2020-07-07T07:19:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.