EGVD: Event-Guided Video Deraining
- URL: http://arxiv.org/abs/2309.17239v1
- Date: Fri, 29 Sep 2023 13:47:53 GMT
- Title: EGVD: Event-Guided Video Deraining
- Authors: Yueyi Zhang, Jin Wang, Wenming Weng, Xiaoyan Sun, Zhiwei Xiong
- Abstract summary: We propose an end-to-end learning-based network to unlock the potential of the event camera for video deraining.
We build a real-world dataset consisting of rainy videos and temporally synchronized event streams.
- Score: 57.59935209162314
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid development of deep learning, video deraining has experienced
significant progress. However, existing video deraining pipelines cannot
achieve satisfactory performance on scenes with rain layers of complex
spatio-temporal distribution. In this paper, we approach video deraining by
employing an event camera. As a neuromorphic sensor, the event camera is well
suited to scenes with non-uniform motion and dynamic lighting conditions. We propose an
end-to-end learning-based network to unlock the potential of the event camera
for video deraining. First, we devise an event-aware motion detection module to
adaptively aggregate multi-frame motion contexts using event-aware masks.
Second, we design a pyramidal adaptive selection module for reliably separating
the background and rain layers by incorporating multi-modal contextualized
priors. In addition, we build a real-world dataset consisting of rainy videos
and temporally synchronized event streams. We extensively compare our method with
state-of-the-art methods on synthetic and self-collected real-world datasets,
demonstrating the clear superiority of our method. The code and dataset are
available at https://github.com/booker-max/EGVD.
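The abstract names its two modules only at a high level. As a rough, non-authoritative sketch of the first idea, the snippet below shows one plausible way an event-derived mask could weight the aggregation of motion features from neighboring frames; the module name EventAwareAggregation, the 5-frame window, and the voxel-grid event representation are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class EventAwareAggregation(nn.Module):
    """Illustrative sketch: gate multi-frame features with an event-derived mask.

    Assumes frame features of shape (B, T, C, H, W) and event voxel grids of
    shape (B, T, Bins, H, W); none of these choices come from the paper.
    """

    def __init__(self, feat_channels: int = 64, event_bins: int = 5):
        super().__init__()
        # Predict a per-pixel, per-frame mask from the event stream.
        self.mask_head = nn.Sequential(
            nn.Conv2d(event_bins, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )
        self.fuse = nn.Conv2d(feat_channels, feat_channels, 3, padding=1)

    def forward(self, frame_feats: torch.Tensor, event_voxels: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = frame_feats.shape
        # One mask per neighboring frame, driven by where events (i.e., motion) occur.
        masks = self.mask_head(event_voxels.flatten(0, 1)).view(b, t, 1, h, w)
        # Weighted aggregation of the temporal context, normalized over frames.
        weights = masks / (masks.sum(dim=1, keepdim=True) + 1e-6)
        aggregated = (weights * frame_feats).sum(dim=1)
        return self.fuse(aggregated)

if __name__ == "__main__":
    feats = torch.randn(2, 5, 64, 128, 128)   # 5-frame feature window (assumed)
    events = torch.randn(2, 5, 5, 128, 128)   # per-frame event voxel grids (assumed)
    print(EventAwareAggregation()(feats, events).shape)  # torch.Size([2, 64, 128, 128])
```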
Related papers
- EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting [76.02450110026747]
Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution.
We propose Event-Aided Free-Trajectory 3DGS, which seamlessly integrates the advantages of event cameras into 3DGS.
We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS.
arXiv Detail & Related papers (2024-10-20T13:44:24Z)
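As background for the event-camera entries in this list: an event camera fires an event at a pixel whenever the log intensity there changes by more than a contrast threshold. The sketch below simulates this standard generation model from two ordinary frames; the threshold value and function name are illustrative, not taken from any of the listed papers.

```python
import numpy as np

def simulate_events(frame_prev: np.ndarray, frame_next: np.ndarray,
                    threshold: float = 0.2) -> np.ndarray:
    """Toy event simulation: emit +/-1 where the log-intensity change exceeds the threshold.

    frame_prev, frame_next: grayscale images in [0, 1].
    Returns a signed event map (+1 ON events, -1 OFF events, 0 no event).
    A real sensor emits these asynchronously with microsecond timestamps;
    here they are collapsed into a single map for illustration.
    """
    eps = 1e-3  # avoid log(0)
    delta_log = np.log(frame_next + eps) - np.log(frame_prev + eps)
    events = np.zeros_like(delta_log, dtype=np.int8)
    events[delta_log >= threshold] = 1    # brightness increased: ON event
    events[delta_log <= -threshold] = -1  # brightness decreased: OFF event
    return events
```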
- CMTA: Cross-Modal Temporal Alignment for Event-guided Video Deblurring [44.30048301161034]
Video deblurring aims to enhance the quality of restored results in motion-blurred videos by gathering information from adjacent video frames.
We propose two modules: 1) intra-frame feature enhancement, which operates within the exposure time of a single blurred frame, and 2) inter-frame temporal feature alignment, which gathers valuable long-range temporal information for the target frame.
We demonstrate that our proposed methods outperform state-of-the-art frame-based and event-based motion deblurring methods through extensive experiments conducted on both synthetic and real-world deblurring datasets.
arXiv Detail & Related papers (2024-08-27T10:09:17Z)
- RainMamba: Enhanced Locality Learning with State Space Models for Video Deraining [14.025870185802463]
We present an improved SSM-based video deraining network (RainMamba) with a novel Hilbert scanning mechanism to better capture sequence-level local information.
We also introduce a difference-guided dynamic contrastive locality learning strategy to enhance the patch-level self-similarity learning ability of the proposed network.
arXiv Detail & Related papers (2024-07-31T17:48:22Z)
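RainMamba's Hilbert scanning is mentioned only by name above. The snippet below shows the generic idea of ordering spatial positions along a Hilbert curve so that 2-D neighbors stay close in the 1-D token sequence fed to a state space model; the helper follows the standard xy-to-Hilbert-index algorithm and is not RainMamba's actual scan.

```python
def xy_to_hilbert(n: int, x: int, y: int) -> int:
    """Index of (x, y) on the Hilbert curve covering an n x n grid (n a power of two)."""
    d, s = 0, n // 2
    while s > 0:
        rx = 1 if x & s else 0
        ry = 1 if y & s else 0
        d += s * s * ((3 * rx) ^ ry)
        if ry == 0:  # rotate the quadrant to keep the curve continuous
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        s //= 2
    return d

def hilbert_order(side: int) -> list[int]:
    """Permutation that re-orders a row-major side x side grid along the Hilbert curve."""
    return sorted(range(side * side),
                  key=lambda i: xy_to_hilbert(side, i % side, i // side))

# Example: re-order 16x16 patch tokens of shape (B, 256, C) before a sequence model.
# tokens_hilbert = tokens[:, hilbert_order(16), :]
```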
- Event-based Continuous Color Video Decompression from Single Frames [38.59798259847563]
We present ContinuityCam, a novel approach to generate a continuous video from a single static RGB image, using an event camera.
Our approach combines continuous long-range motion modeling with a feature-plane-based neural integration model, enabling frame prediction at arbitrary times within the event stream.
arXiv Detail & Related papers (2023-11-30T18:59:23Z)
- DropMAE: Masked Autoencoders with Spatial-Attention Dropout for Tracking Tasks [76.24996889649744]
We study masked autoencoder (MAE) pretraining on videos for matching-based downstream tasks, including visual object tracking (VOT) and video object segmentation (VOS).
We propose DropMAE, which adaptively performs spatial-attention dropout in the frame reconstruction to facilitate temporal correspondence learning in videos.
Our model sets new state-of-the-art performance on 8 out of 9 highly competitive video tracking and segmentation datasets.
arXiv Detail & Related papers (2023-04-02T16:40:42Z)
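The spatial-attention dropout mentioned for DropMAE is summarized in one line above. The sketch below illustrates the general idea of randomly suppressing within-frame (spatial) attention weights during reconstruction so the decoder must rely on tokens from the other frame, encouraging temporal matching; tensor shapes, the drop rate, and the masking scheme are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def spatial_attention_dropout(attn_logits: torch.Tensor,
                              same_frame: torch.Tensor,
                              drop_prob: float = 0.3) -> torch.Tensor:
    """Randomly mask within-frame attention so queries attend to the other frame.

    attn_logits: (B, heads, Q, K) pre-softmax attention scores over tokens from a frame pair.
    same_frame:  (Q, K) boolean mask, True where query and key come from the same frame.
    """
    if drop_prob == 0.0:
        return attn_logits.softmax(dim=-1)
    # Sample which within-frame links to drop for this forward pass.
    drop = (torch.rand_like(attn_logits) < drop_prob) & same_frame
    masked = attn_logits.masked_fill(drop, float("-inf"))
    return masked.softmax(dim=-1)
```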
- Feature-Aligned Video Raindrop Removal with Temporal Constraints [68.49161092870224]
Raindrop removal is challenging for both single images and videos.
Unlike rain streaks, adherent raindrops tend to cover the same area in several frames.
We employ a two-stage video-based raindrop removal method.
arXiv Detail & Related papers (2022-05-29T05:42:14Z)
- Event-guided Deblurring of Unknown Exposure Time Videos [31.992673443516235]
Event cameras can capture apparent motion with a high temporal resolution.
We propose a novel Exposure Time-based Event Selection module to selectively use event features.
Our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-12-13T19:46:17Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
Due to the lack of a realistic and large-scale dataset for this task, we propose a large-scale Visible-Event benchmark (termed VisEvent).
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
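The "event images" used to build the VisEvent baselines are a common way to make event streams digestible to frame-based trackers. A minimal version simply accumulates polarity counts per pixel over a time window, as sketched below; the two-channel layout and function name are assumptions, not the benchmark's exact representation.

```python
import numpy as np

def events_to_image(xs, ys, ps, height: int, width: int) -> np.ndarray:
    """Accumulate a window of events into a 2-channel event image.

    xs, ys: pixel coordinates of the events in the window.
    ps:     polarities (+1 / -1).
    Returns an array of shape (2, height, width): channel 0 counts positive
    events, channel 1 counts negative events.
    """
    img = np.zeros((2, height, width), dtype=np.float32)
    xs, ys, ps = map(np.asarray, (xs, ys, ps))
    np.add.at(img[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(img[1], (ys[ps < 0], xs[ps < 0]), 1.0)
    return img
```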
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
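The recurrent architecture in the last entry is only named above. As a rough illustration, the sketch below runs a ConvGRU-style update over successive event representations so depth predictions can integrate evidence over time; the layer sizes and the voxel-grid input are assumptions for illustration and do not reproduce the paper's network.

```python
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Minimal convolutional GRU cell holding a spatial hidden state."""
    def __init__(self, in_ch: int, hid_ch: int):
        super().__init__()
        self.gates = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, 3, padding=1)
        self.cand = nn.Conv2d(in_ch + hid_ch, hid_ch, 3, padding=1)

    def forward(self, x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde

class RecurrentDepth(nn.Module):
    """Sketch: encode each event voxel grid, update a recurrent state, decode depth."""
    def __init__(self, event_bins: int = 5, hid_ch: int = 32):
        super().__init__()
        self.hid_ch = hid_ch
        self.encoder = nn.Conv2d(event_bins, hid_ch, 3, padding=1)
        self.gru = ConvGRUCell(hid_ch, hid_ch)
        self.decoder = nn.Conv2d(hid_ch, 1, 3, padding=1)

    def forward(self, voxel_seq: torch.Tensor) -> torch.Tensor:  # (B, T, Bins, H, W)
        b, t, _, h, w = voxel_seq.shape
        state = voxel_seq.new_zeros(b, self.hid_ch, h, w)
        depths = []
        for i in range(t):
            feat = torch.relu(self.encoder(voxel_seq[:, i]))
            state = self.gru(feat, state)       # carry evidence across time
            depths.append(self.decoder(state))  # one dense depth map per step
        return torch.stack(depths, dim=1)       # (B, T, 1, H, W)
```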
This list is automatically generated from the titles and abstracts of the papers on this site.