Memory-guided Image De-raining Using Time-Lapse Data
- URL: http://arxiv.org/abs/2201.01883v1
- Date: Thu, 6 Jan 2022 01:36:59 GMT
- Title: Memory-guided Image De-raining Using Time-Lapse Data
- Authors: Jaehoon Cho, Seungryong Kim, Kwanghoon Sohn
- Abstract summary: We address the problem of single image de-raining, that is, the task of recovering clean and rain-free background scenes from a single image obscured by a rainy artifact.
We propose a novel network architecture based on a memory network that explicitly helps to capture long-term rain streak information in the time-lapse data.
- Score: 83.12497916664904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper addresses the problem of single image de-raining, that is, the
task of recovering clean and rain-free background scenes from a single image
obscured by a rainy artifact. Although recent advances adopt real-world
time-lapse data to overcome the need for paired rain-clean images, they fail
to fully exploit it: lacking memory components, their network architectures
cannot capture long-term rain streak information in the time-lapse data during
training. To address this problem, we propose a novel network architecture
based on a memory network that explicitly helps to capture long-term rain
streak information in the time-lapse data. Our network comprises the
encoder-decoder networks and a memory network. The features extracted from the
encoder are read and updated in the memory network that contains several memory
items to store rain streak-aware feature representations. With the read/update
operation, the memory network retrieves relevant memory items in terms of the
queries, enabling the memory items to represent the various rain streaks
included in the time-lapse data. To boost the discriminative power of memory
features, we also present a novel background selective whitening (BSW) loss for
capturing only rain streak information in the memory network by erasing the
background information. Experimental results on standard benchmarks demonstrate
the effectiveness and superiority of our approach.
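The read/update operation described above can be sketched as attention-style addressing over a small set of memory items. The following is a minimal NumPy sketch, assuming cosine-similarity addressing, softmax read weights, and an L2-normalized additive update; the paper's exact read/update rules and the `memory_read_update` helper are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def memory_read_update(queries, memory):
    """Sketch of a memory read/update step (illustrative, not the paper's code).

    queries: (N, C) encoder features used as queries.
    memory:  (M, C) memory items storing rain streak-aware features.
    Returns the read features (N, C) and the updated memory (M, C).
    """
    # Cosine similarity between every query and every memory item.
    q = queries / (np.linalg.norm(queries, axis=1, keepdims=True) + 1e-8)
    m = memory / (np.linalg.norm(memory, axis=1, keepdims=True) + 1e-8)
    sim = q @ m.T                      # (N, M)

    # Read: each query retrieves a soft combination of relevant memory items.
    read_w = softmax(sim, axis=1)      # address over memory items
    read = read_w @ memory             # (N, C)

    # Update: each memory item aggregates the queries that addressed it,
    # so the items come to represent the rain streaks seen in training.
    upd_w = softmax(sim, axis=0)       # address over queries
    new_memory = memory + upd_w.T @ queries
    new_memory /= np.linalg.norm(new_memory, axis=1, keepdims=True) + 1e-8
    return read, new_memory
```

In this sketch the read path is differentiable, so the memory items can be trained end-to-end with the encoder-decoder; the BSW loss would additionally constrain these features to encode rain streaks rather than background content.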
Related papers
- A Prototype Unit for Image De-raining using Time-Lapse Data [9.37072441362836]
We address the challenge of single-image de-raining, a task that involves recovering rain-free background information from a single rain image.
We introduce a novel solution: the Rain Streak Prototype Unit (RsPU)
The RsPU efficiently encodes rain streak-relevant features as real-time prototypes derived from time-lapse data, eliminating the need for excessive memory resources.
arXiv Detail & Related papers (2024-12-27T05:04:56Z) - ReWind: Understanding Long Videos with Instructed Learnable Memory [8.002949551539297]
Vision-Language Models (VLMs) are crucial for applications requiring integrated understanding of textual and visual information.
We introduce ReWind, a novel memory-based VLM designed for efficient long video understanding while preserving temporal fidelity.
We empirically demonstrate ReWind's superior performance in visual question answering (VQA) and temporal grounding tasks, surpassing previous methods on long video benchmarks.
arXiv Detail & Related papers (2024-11-23T13:23:22Z) - TASeg: Temporal Aggregation Network for LiDAR Semantic Segmentation [80.13343299606146]
We propose a Temporal LiDAR Aggregation and Distillation (TLAD) algorithm, which leverages historical priors to assign different aggregation steps for different classes.
To make full use of temporal images, we design a Temporal Image Aggregation and Fusion (TIAF) module, which can greatly expand the camera FOV.
We also develop a Static-Moving Switch Augmentation (SMSA) algorithm, which utilizes sufficient temporal information to enable objects to switch their motion states freely.
arXiv Detail & Related papers (2024-07-13T03:00:16Z) - Recurrent Dynamic Embedding for Video Object Segmentation [54.52527157232795]
We propose a Recurrent Dynamic Embedding (RDE) to build a memory bank of constant size.
We propose an unbiased guidance loss during the training stage, which makes SAM more robust in long videos.
We also design a novel self-correction strategy so that the network can repair the embeddings of masks with different qualities in the memory bank.
arXiv Detail & Related papers (2022-05-08T02:24:43Z) - Semi-DRDNet Semi-supervised Detail-recovery Image Deraining Network via
Unpaired Contrastive Learning [59.22620253308322]
We propose a semi-supervised detail-recovery image deraining network (termed as Semi-DRDNet)
As a semi-supervised learning paradigm, Semi-DRDNet operates smoothly on both synthetic and real-world rainy data in terms of deraining robustness and detail accuracy.
arXiv Detail & Related papers (2022-04-06T12:35:27Z) - Structure-Preserving Deraining with Residue Channel Prior Guidance [33.41254475191555]
Single image deraining is important for many high-level computer vision tasks.
We propose a Structure-Preserving Deraining Network (SPDNet) with RCP guidance.
SPDNet directly generates high-quality rain-free images with clear and accurate structures under RCP guidance.
arXiv Detail & Related papers (2021-08-20T09:09:56Z) - RCDNet: An Interpretable Rain Convolutional Dictionary Network for
Single Image Deraining [49.99207211126791]
We specifically build a novel deep architecture, called rain convolutional dictionary network (RCDNet)
RCDNet embeds the intrinsic priors of rain streaks and has clear interpretability.
By end-to-end training such an interpretable network, all involved rain kernels and proximal operators can be automatically extracted.
arXiv Detail & Related papers (2021-07-14T16:08:11Z) - Beyond Monocular Deraining: Parallel Stereo Deraining Network Via
Semantic Prior [103.49307603952144]
Most existing de-rain algorithms use only a single input image and aim to recover a clean image.
We present a Paired Rain Removal Network (PRRNet), which exploits both stereo images and semantic information.
Experiments on both monocular and the newly proposed stereo rainy datasets demonstrate that the proposed method achieves the state-of-the-art performance.
arXiv Detail & Related papers (2021-05-09T04:15:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.