Simultaneous Motion And Noise Estimation with Event Cameras
- URL: http://arxiv.org/abs/2504.04029v1
- Date: Sat, 05 Apr 2025 02:47:40 GMT
- Title: Simultaneous Motion And Noise Estimation with Event Cameras
- Authors: Shintaro Shiba, Yoshimitsu Aoki, Guillermo Gallego
- Abstract summary: Event cameras are emerging vision sensors, whose noise is challenging to characterize. Existing denoising methods for event cameras consider other tasks such as motion estimation separately. This work proposes, to the best of our knowledge, the first method that simultaneously estimates motion in its various forms and noise.
- Score: 18.2247510082534
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event cameras are emerging vision sensors, whose noise is challenging to characterize. Existing denoising methods for event cameras consider other tasks such as motion estimation separately (i.e., sequentially after denoising). However, motion is an intrinsic part of event data, since scene edges cannot be sensed without motion. This work proposes, to the best of our knowledge, the first method that simultaneously estimates motion in its various forms (e.g., ego-motion, optical flow) and noise. The method is flexible, as it allows replacing the 1-step motion estimation of the widely-used Contrast Maximization framework with any other motion estimator, such as deep neural networks. The experiments show that the proposed method achieves state-of-the-art results on the E-MLB denoising benchmark and competitive results on the DND21 benchmark, while showing its efficacy on motion estimation and intensity reconstruction tasks. We believe that the proposed approach contributes to strengthening the theory of event-data denoising, as well as impacting practical denoising use-cases, as we release the code upon acceptance. Project page: https://github.com/tub-rip/ESMD
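The abstract builds on the 1-step motion estimation of the Contrast Maximization framework: warp events by a candidate motion, accumulate them into an image of warped events (IWE), and score the candidate by how sharp the IWE is. The sketch below illustrates that idea under simplifying assumptions; the function names, the variance objective, and the grid search over candidates are illustrative choices, not the paper's actual implementation, which uses more sophisticated optimization.

```python
import numpy as np

def contrast(events, v, height, width):
    # Score a candidate flow v = (vx, vy) by the variance of the
    # image of warped events (IWE): sharper IWE -> higher contrast.
    x, y, t = events[:, 0], events[:, 1], events[:, 2]
    # Warp each event back to the reference time t = 0 along the flow.
    xw = np.round(x - v[0] * t).astype(int)
    yw = np.round(y - v[1] * t).astype(int)
    # Keep only events that land inside the sensor array.
    ok = (xw >= 0) & (xw < width) & (yw >= 0) & (yw < height)
    iwe = np.zeros((height, width))
    np.add.at(iwe, (yw[ok], xw[ok]), 1.0)  # unbuffered accumulation
    return iwe.var()

def estimate_flow(events, height, width, candidates):
    # Grid search over candidate flows; in practice gradient-based
    # optimization of the contrast objective is used instead.
    return max(candidates, key=lambda v: contrast(events, v, height, width))
```

With synthetic events generated by an edge moving at 2 px per unit time, the candidate (2, 0) concentrates all warped events on one column and wins the contrast comparison.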
Related papers
- Iterative Event-based Motion Segmentation by Variational Contrast Maximization [16.68279129685]
Event cameras provide rich signals that are suitable for motion estimation since they respond to changes in the scene.
We propose an iterative motion segmentation method that classifies events into background (e.g., dominant motion hypothesis) and foreground (independent motion residuals).
Experimental results demonstrate that the proposed method successfully classifies event clusters both for public and self-recorded datasets.
arXiv Detail & Related papers (2025-04-25T16:00:23Z) - Combining Pre- and Post-Demosaicking Noise Removal for RAW Video [2.772895608190934]
Denoising is one of the fundamental steps of the processing pipeline that converts data captured by a camera sensor into a display-ready image or video. We propose a self-similarity-based denoising scheme that weights both a pre- and a post-demosaicking denoiser for Bayer-patterned CFA video data. We show that a balance between the two leads to better image quality, and we empirically find that higher noise levels benefit from a higher influence pre-demosaicking.
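The weighting idea in this abstract reduces to blending two branches: denoise-then-demosaick and demosaick-then-denoise. The following sketch uses a simple box filter as a stand-in denoiser and a caller-supplied demosaicking function; both are illustrative assumptions, since the paper's actual denoiser is self-similarity-based.

```python
import numpy as np

def box_denoise(img, k=3):
    # Stand-in denoiser: k x k box filter (the paper uses a
    # self-similarity-based denoiser instead).
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def combined_denoise(raw, demosaick, w):
    # w in [0, 1] weights the pre-demosaicking branch; per the abstract,
    # higher noise levels favor a larger w.
    pre = demosaick(box_denoise(raw))
    post = box_denoise(demosaick(raw))
    return w * pre + (1 - w) * post
```

Passing the identity function as `demosaick` makes the two branches coincide, which is a convenient sanity check that the blend is a convex combination.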
arXiv Detail & Related papers (2024-10-03T15:20:19Z) - A Label-Free and Non-Monotonic Metric for Evaluating Denoising in Event Cameras [17.559229117246666]
Event cameras are renowned for their high efficiency due to outputting a sparse, asynchronous stream of events.
Denoising is an essential task for event cameras, but evaluating denoising performance is challenging.
We propose the first label-free and non-monotonic evaluation metric, the area of the continuous contrast curve (AOCC).
arXiv Detail & Related papers (2024-06-13T08:12:48Z) - DeNoising-MOT: Towards Multiple Object Tracking with Severe Occlusions [52.63323657077447]
We propose DNMOT, an end-to-end trainable DeNoising Transformer for multiple object tracking.
Specifically, we augment the trajectory with noises during training and make our model learn the denoising process in an encoder-decoder architecture.
We conduct extensive experiments on the MOT17, MOT20, and DanceTrack datasets, and the experimental results show that our method outperforms previous state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2023-09-09T04:40:01Z) - DiffTAD: Temporal Action Detection with Proposal Denoising Diffusion [137.8749239614528]
We propose a new formulation of temporal action detection (TAD) with denoising diffusion, DiffTAD.
Taking as input random temporal proposals, it can yield action proposals accurately given an untrimmed long video.
arXiv Detail & Related papers (2023-03-27T00:40:52Z) - E-MLB: Multilevel Benchmark for Event-Based Camera Denoising [12.698543500397275]
Event cameras are more sensitive to junction leakage current and photocurrent as they output differential signals.
We construct a large-scale event denoising dataset (multilevel benchmark for event denoising, E-MLB) for the first time.
We also propose the first nonreference event denoising metric, the event structural ratio (ESR), which measures the structural intensity of given events.
arXiv Detail & Related papers (2023-03-21T16:31:53Z) - Event-based Camera Simulation using Monte Carlo Path Tracing with
Adaptive Denoising [10.712584582512811]
Event-based video can be viewed as a process of detecting the changes from noisy brightness values.
We extend a denoising method based on a weighted local regression to detect the brightness changes.
arXiv Detail & Related papers (2023-03-05T08:44:01Z) - HumanMAC: Masked Motion Completion for Human Motion Prediction [62.279925754717674]
Human motion prediction is a classical problem in computer vision and computer graphics.
Previous efforts achieve strong empirical performance based on an encoding-decoding style.
In this paper, we propose a novel framework from a new perspective.
arXiv Detail & Related papers (2023-02-07T18:34:59Z) - ProgressiveMotionSeg: Mutually Reinforced Framework for Event-Based Motion Segmentation [101.19290845597918]
This paper presents a Motion Estimation (ME) module and an Event Denoising (ED) module jointly optimized in a mutually reinforced manner.
Taking temporal correlation as guidance, the ED module calculates the confidence that each event belongs to real activity events, and transmits it to the ME module to update the energy function of motion segmentation for noise suppression.
arXiv Detail & Related papers (2022-03-22T13:40:26Z) - Rethinking Noise Synthesis and Modeling in Raw Denoising [75.55136662685341]
We introduce a new perspective to synthesize noise by directly sampling from the sensor's real noise.
It inherently generates accurate raw image noise for different camera sensors.
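Sampling noise directly from a sensor's real captures, as this abstract describes, can be sketched as building a bank of observed residuals bucketed by signal intensity and drawing from it. The bucketing scheme, function names, and bin count below are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_noise_bank(noisy_frames, clean_frames, n_bins=16):
    # Group observed residuals (noisy - clean) by clean intensity in [0, 1],
    # so synthesis respects signal-dependent noise.
    residuals = (noisy_frames - clean_frames).ravel()
    levels = clean_frames.ravel()
    bins = np.clip((levels * n_bins).astype(int), 0, n_bins - 1)
    return [residuals[bins == b] for b in range(n_bins)]

def synthesize_noise(clean, bank):
    # Add noise to a clean image by sampling real residuals from the
    # bucket matching each pixel's intensity.
    n_bins = len(bank)
    bins = np.clip((clean * n_bins).astype(int), 0, n_bins - 1)
    noise = np.zeros_like(clean, dtype=float)
    for b in range(n_bins):
        mask = bins == b
        if mask.any() and len(bank[b]) > 0:
            noise[mask] = rng.choice(bank[b], size=mask.sum())
    return clean + noise
```

Because the bank holds real residuals, the synthesized noise statistics track the source sensor without fitting a parametric model.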
arXiv Detail & Related papers (2021-10-10T10:45:24Z) - Noise2Same: Optimizing A Self-Supervised Bound for Image Denoising [54.730707387866076]
We introduce Noise2Same, a novel self-supervised denoising framework.
In particular, Noise2Same requires neither J-invariance nor extra information about the noise model.
Our results show that our Noise2Same remarkably outperforms previous self-supervised denoising methods.
arXiv Detail & Related papers (2020-10-22T18:12:26Z) - Motion-Excited Sampler: Video Adversarial Attack with Sparked Prior [63.11478060678794]
We propose an effective motion-excited sampler to obtain motion-aware noise prior.
By using the sparked prior in gradient estimation, we can successfully attack a variety of video classification models with fewer queries.
arXiv Detail & Related papers (2020-03-17T10:54:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.