PRE-Mamba: A 4D State Space Model for Ultra-High-Frequent Event Camera Deraining
- URL: http://arxiv.org/abs/2505.05307v2
- Date: Tue, 05 Aug 2025 07:20:24 GMT
- Title: PRE-Mamba: A 4D State Space Model for Ultra-High-Frequent Event Camera Deraining
- Authors: Ciyu Ruan, Ruishan Guo, Zihang Gong, Jingao Xu, Wenhan Yang, Xinlei Chen
- Abstract summary: Event cameras excel in high temporal resolution and dynamic range but suffer from dense noise in rainy conditions. We propose PRE-Mamba, a novel point-based event camera deraining framework.
- Score: 47.81253972389206
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras excel in high temporal resolution and dynamic range but suffer from dense noise in rainy conditions. Existing event deraining methods face trade-offs between temporal precision, deraining effectiveness, and computational efficiency. In this paper, we propose PRE-Mamba, a novel point-based event camera deraining framework that fully exploits the spatiotemporal characteristics of raw events and rain. Our framework introduces a 4D event cloud representation that integrates dual temporal scales to preserve high temporal precision, a Spatio-Temporal Decoupling and Fusion module (STDF) that enhances deraining capability by enabling shallow decoupling and interaction of temporal and spatial information, and a Multi-Scale State Space Model (MS3M) that captures deeper rain dynamics across dual-temporal and multi-spatial scales with linear computational complexity. Enhanced by frequency-domain regularization, PRE-Mamba achieves superior performance (0.95 SR, 0.91 NR, and 0.4s/M events) with only 0.26M parameters on EventRain-27K, a comprehensive dataset with labeled synthetic and real-world sequences. Moreover, our method generalizes well across varying rain intensities, viewpoints, and even snowy conditions.
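For intuition, here is a minimal sketch of what a dual-temporal-scale 4D event cloud could look like in code. The windowing scheme, array layout, and function name are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def build_4d_event_cloud(events, num_windows=8):
    """Organize raw events into a point cloud with dual temporal scales.

    `events` is an (N, 4) array of (x, y, t, p) rows. Each point keeps a
    coarse window index plus a fine intra-window offset, one plausible
    reading of the paper's dual-temporal-scale design (assumes t spans a
    non-degenerate interval).
    """
    x, y, t, p = events[:, 0], events[:, 1], events[:, 2], events[:, 3]

    # Coarse scale: which temporal window the event falls into.
    t_min, t_max = t.min(), t.max()
    window_len = (t_max - t_min) / num_windows
    coarse = np.minimum((t - t_min) // window_len, num_windows - 1)

    # Fine scale: normalized offset within the window, preserving the
    # sensor's microsecond-level timing instead of quantizing it away.
    fine = (t - t_min - coarse * window_len) / window_len

    # Each point: spatial coords, fine time, coarse time, polarity.
    return np.stack([x, y, fine, coarse, p], axis=1)
```

Keeping both the coarse window index and the fine offset lets downstream layers reason over long-range rain dynamics and microsecond-scale structure at the same time.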
Related papers
- Semi-Supervised State-Space Model with Dynamic Stacking Filter for Real-World Video Deraining [73.5575992346396]
We propose a dual-branch spatio-temporal state-space model to enhance rain streak removal in video sequences. To improve multi-frame feature fusion, we derive a dynamic stacking filter, which adaptively approximates statistical filters for pixel-wise feature refinement. To further explore the capacity of deraining models in supporting other vision-based tasks in rainy environments, we introduce a novel real-world benchmark.
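The "dynamic stacking filter" can be pictured as predicting per-pixel blending weights over stacked frame features. The module below is a hedged guess at that mechanism; the 1x1 weight head, softmax blend, and all names are assumptions rather than the paper's code:

```python
import torch
import torch.nn as nn

class DynamicStackingFilter(nn.Module):
    """Pixel-wise adaptive fusion of K stacked frame features: a small
    head predicts K weights per pixel, approximating a statistical
    filter whose coefficients vary across the image."""

    def __init__(self, channels, num_frames):
        super().__init__()
        self.weight_head = nn.Conv2d(channels * num_frames, num_frames, 1)

    def forward(self, stacked):  # stacked: (B, K, C, H, W)
        b, k, c, h, w = stacked.shape
        flat = stacked.reshape(b, k * c, h, w)
        weights = self.weight_head(flat).softmax(dim=1)      # (B, K, H, W)
        # Weighted per-pixel average across the K stacked frames.
        return (stacked * weights.unsqueeze(2)).sum(dim=1)   # (B, C, H, W)
```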
arXiv Detail & Related papers (2025-05-22T15:50:00Z)
- EDmamba: A Simple yet Effective Event Denoising Method with State Space Model
Event cameras excel in high-speed vision due to their high temporal resolution, high dynamic range, and low power consumption. As dynamic vision sensors, their output is inherently noisy, making efficient denoising essential to preserve their ultra-low latency and real-time processing capabilities. We propose a novel event denoising framework based on State Space Models (SSMs).
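The linear-complexity claim behind SSM-based denoisers comes from a simple recurrence. The toy diagonal scan below only illustrates the O(L) structure; it is not EDmamba's implementation:

```python
import numpy as np

def ssm_scan(u, A, B, C):
    """Toy diagonal state-space model: h_t = A*h_{t-1} + B*u_t, y_t = C.h_t.

    u: (L,) input sequence; A, B, C: (D,) per-channel parameters.
    One pass over the sequence costs O(L*D) time with O(D) memory.
    """
    h = np.zeros_like(A)            # hidden state, one entry per channel
    y = np.empty(len(u))
    for t in range(len(u)):
        h = A * h + B * u[t]        # elementwise recurrence (diagonal A)
        y[t] = np.dot(C, h)         # linear readout
    return y
```

Because the state is updated once per input, processing L events scales linearly rather than quadratically as attention would.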
arXiv Detail & Related papers (2025-05-08T16:27:27Z)
- SpikeDerain: Unveiling Clear Videos from Rainy Sequences Using Color Spike Streams
Restoring clear frames from rainy videos presents a significant challenge due to the rapid motion of rain streaks. Traditional frame-based visual sensors, which capture scene content synchronously, struggle to capture the fast-moving details of rain accurately. We propose a Color Spike Stream Deraining Network (SpikeDerain), capable of reconstructing spike streams of dynamic scenes and accurately removing rain streaks.
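As background, spike cameras encode brightness as firing rate, so a textbook intensity estimate simply averages recent spikes per pixel. This generic baseline (not SpikeDerain's network) shows the idea:

```python
import numpy as np

def intensity_from_spikes(spikes, window=32):
    """Crude spike-camera reconstruction: the firing rate over the last
    `window` timesteps approximates brightness.

    `spikes` is a binary array of shape (T, H, W).
    """
    return spikes[-window:].astype(float).mean(axis=0)
```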
arXiv Detail & Related papers (2025-03-26T08:28:28Z)
- A Prototype Unit for Image De-raining using Time-Lapse Data
We address the challenge of single-image de-raining, a task that involves recovering rain-free background information from a single rain image. We introduce a novel solution: the Rain Streak Prototype Unit (RsPU). The RsPU efficiently encodes rain streak-relevant features as real-time prototypes derived from time-lapse data, eliminating the need for excessive memory resources.
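One common way to distill features into a small set of prototypes is attention pooling. The sketch below is that generic pattern, not the RsPU's actual design; the function and argument names are made up:

```python
import torch

def pool_prototypes(feats, queries):
    """Generic attention pooling: each learnable query attends over all
    pixel features and aggregates them into one prototype vector.

    feats: (N, C) pixel features; queries: (K, C) learnable queries.
    Returns (K, C) prototypes.
    """
    attn = (queries @ feats.T).softmax(dim=-1)   # (K, N) soft assignment
    return attn @ feats                          # (K, C) prototypes
```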
arXiv Detail & Related papers (2024-12-27T05:04:56Z)
- Two-stage Rainfall-Forecasting Diffusion Model
TRDM (Two-stage Rainfall-forecasting Diffusion Model) is a two-stage method for rainfall prediction tasks.
The first stage captures robust temporal information while preserving spatial information under low-resolution conditions.
The second stage reconstructs the low-resolution images generated in the first stage into high-resolution images.
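Read as a pipeline, the two stages might compose as below; `temporal_model` and `sr_model` are placeholder samplers standing in for the paper's diffusion models, and the average-pool downsampling is an assumption:

```python
import numpy as np

def downsample(frame, factor=4):
    """Average-pool a (H, W) rainfall frame by `factor` per dimension."""
    h, w = frame.shape
    return frame[: h - h % factor, : w - w % factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def two_stage_forecast(past_frames, temporal_model, sr_model, factor=4):
    # Stage 1: predict future frames at low resolution, where the model
    # can focus on temporal dynamics cheaply.
    low_res_past = [downsample(f, factor) for f in past_frames]
    low_res_future = temporal_model.sample(low_res_past)
    # Stage 2: super-resolve each predicted frame back to full size.
    return [sr_model.sample(f) for f in low_res_future]
```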
arXiv Detail & Related papers (2024-02-20T07:37:32Z)
- An Event-Oriented Diffusion-Refinement Method for Sparse Events Completion [36.64856578682197]
Event cameras or dynamic vision sensors (DVS) record asynchronous response to brightness changes instead of conventional intensity frames.
We propose an inventive event sequence completion approach conforming to the unique characteristics of event data in both the processing stage and the output form.
Specifically, we treat event streams as 3D event clouds in the spatio-temporal domain, develop a diffusion-based generative model to generate dense clouds in a coarse-to-fine manner, and recover exact timestamps to maintain the temporal resolution of raw data successfully.
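The "recover exact timestamps" step could be as simple as remembering the normalization used to build the cloud. This is a guess at that bookkeeping, not the authors' code:

```python
import numpy as np

def events_to_cloud(events):
    """Normalize (x, y, t) rows into a cloud with t in [0, 1], keeping
    the original time range so timestamps can be recovered exactly."""
    t = events[:, 2].astype(float)
    t_min, t_max = t.min(), t.max()
    cloud = events.astype(float).copy()
    cloud[:, 2] = (t - t_min) / (t_max - t_min)
    return cloud, (t_min, t_max)

def recover_timestamps(cloud, t_range):
    """Map normalized temporal coordinates back to raw timestamps."""
    t_min, t_max = t_range
    return cloud[:, 2] * (t_max - t_min) + t_min
```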
arXiv Detail & Related papers (2024-01-06T08:09:54Z)
- Learning Robust Precipitation Forecaster by Temporal Frame Interpolation [65.5045412005064]
We develop a robust precipitation forecasting model that demonstrates resilience against spatial-temporal discrepancies.
Our approach has led to significant improvements in forecasting precision, culminating in our model securing 1st place in the transfer learning leaderboard of the Weather4cast'23 competition.
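Temporal frame interpolation as a robustness tool can be as simple as synthesizing in-between training frames; this one-liner shows the generic operation only, not the competition pipeline:

```python
def interpolate_frame(frame_a, frame_b, alpha=0.5):
    """Blend two consecutive frames to synthesize an intermediate one,
    exposing the model to temporal states it never observed directly."""
    return (1.0 - alpha) * frame_a + alpha * frame_b
```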
arXiv Detail & Related papers (2023-11-30T08:22:08Z)
- Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
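Event supervision for NeRFs generally ties the rendered log-radiance change between two timestamps to accumulated event polarities via the contrast threshold. The loss below states that standard event-generation model, not Robust e-NeRF's exact objective:

```python
import torch.nn.functional as F

def event_supervision_loss(log_rad_t0, log_rad_t1, polarity_sum, contrast=0.25):
    """Standard event-generation model: each event signals a log-radiance
    change of +/- `contrast`, so the rendered change between two times
    should match `contrast * polarity_sum` accumulated from the stream."""
    return F.mse_loss(log_rad_t1 - log_rad_t0, contrast * polarity_sum)
```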
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
- Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, such a dynamic generator consists of one emission model and one transition model to simultaneously encode the spatial structure and temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
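The emission/transition split reads as a classic state-space generator: a latent state evolves over time and is decoded into a rain layer per frame. The sketch below captures that structure with placeholder callables standing in for the learned networks:

```python
def generate_rain_sequence(transition, emission, s0, num_frames):
    """Roll out a dynamic rain generator: `transition` advances the
    latent state (temporal continuity), `emission` decodes each state
    into a rain layer (spatial structure)."""
    state, rain_layers = s0, []
    for _ in range(num_frames):
        state = transition(state)            # s_t = f(s_{t-1})
        rain_layers.append(emission(state))  # r_t = g(s_t)
    return rain_layers
```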
arXiv Detail & Related papers (2021-03-14T14:28:57Z)
- From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images, where the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
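For a Bayesian model with latent background B and rain layer R explaining an observed rainy image O, variational inference maximizes an evidence lower bound of the following generic shape (the paper's exact factorization may differ):

```latex
\log p(O) \;\ge\;
\mathbb{E}_{q(B,R \mid O)}\!\left[\log p(O \mid B, R)\right]
\;-\; \mathrm{KL}\!\left(q(B,R \mid O)\,\middle\|\,p(B)\,p(R)\right)
```

Maximizing the first term fits the reconstruction, while the KL term keeps the approximate posterior close to the background and rain priors.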
arXiv Detail & Related papers (2020-08-08T18:56:51Z)