Semi-Supervised State-Space Model with Dynamic Stacking Filter for Real-World Video Deraining
- URL: http://arxiv.org/abs/2505.16811v1
- Date: Thu, 22 May 2025 15:50:00 GMT
- Title: Semi-Supervised State-Space Model with Dynamic Stacking Filter for Real-World Video Deraining
- Authors: Shangquan Sun, Wenqi Ren, Juxiang Zhou, Shu Wang, Jianhou Gan, Xiaochun Cao,
- Abstract summary: We propose a dual-branch spatio-temporal state-space model to enhance rain streak removal in video sequences. To improve multi-frame feature fusion, we derive a dynamic stacking filter, which adaptively approximates statistical filters for pixel-wise feature refinement. To further explore the capacity of deraining models in supporting other vision-based tasks in rainy environments, we introduce a novel real-world benchmark.
- Score: 73.5575992346396
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Significant progress has been made in video restoration under rainy conditions over the past decade, largely propelled by advancements in deep learning. Nevertheless, existing methods that depend on paired data struggle to generalize effectively to real-world scenarios, primarily due to the disparity between synthetic and authentic rain effects. To address these limitations, we propose a dual-branch spatio-temporal state-space model to enhance rain streak removal in video sequences. Specifically, we design spatial and temporal state-space model layers to extract spatial features and incorporate temporal dependencies across frames, respectively. To improve multi-frame feature fusion, we derive a dynamic stacking filter, which adaptively approximates statistical filters for superior pixel-wise feature refinement. Moreover, we develop a median stacking loss to enable semi-supervised learning by generating pseudo-clean patches based on the sparsity prior of rain. To further explore the capacity of deraining models in supporting other vision-based tasks in rainy environments, we introduce a novel real-world benchmark focused on object detection and tracking in rainy conditions. Our method is extensively evaluated across multiple benchmarks containing numerous synthetic and real-world rainy videos, consistently demonstrating its superiority in quantitative metrics, visual quality, efficiency, and its utility for downstream tasks.
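The median stacking loss described above rests on the sparsity prior of rain: at any given pixel, most aligned frames observe the background rather than a fast-moving streak, so a per-pixel temporal median over the frame stack yields a pseudo-clean patch. A minimal NumPy sketch of this idea (the function name and toy data are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def pseudo_clean_median(frames):
    """Approximate a clean patch via the per-pixel temporal median of
    aligned rainy frames, shape (T, H, W). Because rain is sparse in
    time, most frames see the background at each pixel, so the median
    suppresses the streaks."""
    return np.median(frames, axis=0)

# Toy example: a 0.5-valued background; each frame carries a bright
# streak (value 1.0) in a different column, so every pixel is rainy in
# at most one of the five frames and the median recovers the background.
frames = np.full((5, 4, 4), 0.5)
for t in range(4):
    frames[t, :, t] = 1.0  # streak in column t of frame t
clean = pseudo_clean_median(frames)
print(np.allclose(clean, 0.5))  # True: streaks are suppressed
```

In the semi-supervised setting, such median-stacked patches can serve as supervision targets for unlabeled real-world videos.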
Related papers
- SpikeDerain: Unveiling Clear Videos from Rainy Sequences Using Color Spike Streams [49.34425133546994]
Restoring clear frames from rainy videos presents a significant challenge due to the rapid motion of rain streaks.
Traditional frame-based visual sensors, which capture scene content synchronously, struggle to capture the fast-moving details of rain accurately.
We propose a Color Spike Stream Deraining Network (SpikeDerain), capable of reconstructing spike streams of dynamic scenes and accurately removing rain streaks.
arXiv Detail & Related papers (2025-03-26T08:28:28Z) - RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining [14.025870185802463]
We present an improved SSMs-based video deraining network (RainMamba) with a novel Hilbert mechanism to better capture sequence-level local information.
We also introduce a difference-guided dynamic contrastive locality learning strategy to enhance the patch-level self-similarity learning ability of the proposed network.
arXiv Detail & Related papers (2024-07-31T17:48:22Z) - RainyScape: Unsupervised Rainy Scene Reconstruction using Decoupled Neural Rendering [50.14860376758962]
We propose RainyScape, an unsupervised framework for reconstructing clean scenes from a collection of multi-view rainy images.
Based on the spectral bias property of neural networks, we first optimize the neural rendering pipeline to obtain a low-frequency scene representation.
We jointly optimize the two modules, driven by the proposed adaptive direction-sensitive gradient-based reconstruction loss.
arXiv Detail & Related papers (2024-04-17T14:07:22Z) - Rethinking Real-world Image Deraining via An Unpaired Degradation-Conditioned Diffusion Model [51.49854435403139]
We propose RainDiff, the first real-world image deraining paradigm based on diffusion models.
We introduce a stable and non-adversarial unpaired cycle-consistent architecture that can be trained, end-to-end, with only unpaired data for supervision.
We also propose a degradation-conditioned diffusion model that refines the desired output via a diffusive generative process conditioned by learned priors of multiple rain degradations.
arXiv Detail & Related papers (2023-01-23T13:34:01Z) - Uncertainty-Aware Cascaded Dilation Filtering for High-Efficiency Deraining [25.669665033163497]
Deraining is a significant and fundamental computer vision task, aiming to remove rain streaks and accumulations from an image or video captured on a rainy day.
Existing deraining methods usually make assumptions about the rain model, which compels them to employ complex optimization or iterative refinement for high recovery quality.
We propose a simple yet efficient deraining method by formulating deraining as a predictive filtering problem without complex rain model assumptions.
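The predictive filtering formulation replaces explicit rain-model assumptions with per-pixel filter kernels that a network predicts and then applies to the rainy input. A minimal NumPy sketch of applying such per-pixel (optionally dilated) kernels; the function name, kernel source, and toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def apply_predictive_filters(image, kernels, dilation=1):
    """Apply a distinct (k x k) filter at each pixel.

    image:    (H, W) rainy image, zero-padded at the borders.
    kernels:  (H, W, k*k) per-pixel filter weights; in the actual method
              these would be predicted by a network, here they are inputs.
    dilation: spacing between sampled neighbours (dilated filtering).
    """
    H, W, kk = kernels.shape
    k = int(np.sqrt(kk))
    r = (k // 2) * dilation
    padded = np.pad(image, r)
    out = np.zeros((H, W), dtype=float)
    for dy in range(k):
        for dx in range(k):
            idx = dy * k + dx
            shifted = padded[dy * dilation : dy * dilation + H,
                             dx * dilation : dx * dilation + W]
            out += kernels[:, :, idx] * shifted
    return out

# Sanity check: uniform averaging kernels reduce to a 3x3 box blur.
img = np.arange(16, dtype=float).reshape(4, 4)
box = np.full((4, 4, 9), 1 / 9)
smoothed = apply_predictive_filters(img, box)
# smoothed[1, 1] == 5.0, the mean of the top-left 3x3 window of img
```

Cascading several such filtering stages with increasing dilation, as the paper's title suggests, enlarges the receptive field without heavier kernels.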
arXiv Detail & Related papers (2022-01-07T08:31:57Z) - Semi-Supervised Video Deraining with Dynamic Rain Generator [59.71640025072209]
This paper proposes a new semi-supervised video deraining method, in which a dynamic rain generator is employed to fit the rain layer.
Specifically, this dynamic generator consists of one emission model and one transition model to simultaneously encode the spatially physical structure and temporally continuous changes of rain streaks.
Various prior formats are designed for the labeled synthetic and unlabeled real data, so as to fully exploit the common knowledge underlying them.
arXiv Detail & Related papers (2021-03-14T14:28:57Z) - From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images, in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z) - Rain Streak Removal in a Video to Improve Visibility by TAWL Algorithm [12.056495277232118]
We propose a novel method combining three extracted features that focus on the temporal appearance, wide shape, and relative location of rain streaks.
The proposed TAWL method adaptively uses features from different resolutions and frame rates to remove rain in real time.
The experiments have been conducted using video sequences with both real rains and synthetic rains to compare the performance of the proposed method against the relevant state-of-the-art methods.
arXiv Detail & Related papers (2020-07-10T05:07:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.