Towards Better De-raining Generalization via Rainy Characteristics Memorization and Replay
- URL: http://arxiv.org/abs/2506.02477v1
- Date: Tue, 03 Jun 2025 05:50:00 GMT
- Title: Towards Better De-raining Generalization via Rainy Characteristics Memorization and Replay
- Authors: Kunyu Wang, Xueyang Fu, Chengzhi Cao, Chengjie Ge, Wei Zhai, Zheng-Jun Zha,
- Abstract summary: Current image de-raining methods primarily learn from a limited dataset. We introduce a new framework that enables networks to progressively expand their de-raining knowledge base.
- Score: 74.54047495424618
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current image de-raining methods primarily learn from a limited dataset, leading to inadequate performance in varied real-world rainy conditions. To tackle this, we introduce a new framework that enables networks to progressively expand their de-raining knowledge base by tapping into a growing pool of datasets, significantly boosting their adaptability. Drawing inspiration from the human brain's ability to continuously absorb and generalize from ongoing experiences, our approach borrows the mechanism of the complementary learning system. Specifically, we first deploy Generative Adversarial Networks (GANs) to capture and retain the unique features of new data, mirroring the hippocampus's role in learning and memory. Then, the de-raining network is trained with both existing and GAN-synthesized data, mimicking the process of hippocampal replay and interleaved learning. Furthermore, we employ knowledge distillation with the replayed data to replicate the synergy between the neocortex's activity patterns triggered by hippocampal replays and the pre-existing neocortical knowledge. This comprehensive framework empowers the de-raining network to amass knowledge from various datasets, continually enhancing its performance on previously unseen rainy scenes. Our testing on three benchmark de-raining networks confirms the framework's effectiveness. It not only facilitates continuous knowledge accumulation across six datasets but also surpasses state-of-the-art methods in generalizing to new real-world scenarios.
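The replay-plus-distillation objective described in the abstract can be sketched as a combined loss: a supervised de-raining term on current data, plus a distillation term that ties the student's outputs on GAN-replayed samples to the previous network's outputs. This is a minimal illustration under stated assumptions, not the paper's implementation; the function names, the `lambda_kd` weighting, and the use of mean-squared error for both terms are assumptions.

```python
def mse(a, b):
    """Mean-squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def replay_distill_loss(student_pred_real, target_real,
                        student_pred_replay, teacher_pred_replay,
                        lambda_kd=0.5):
    """Hypothetical combined objective for continual de-raining.

    - task term: fit the ground-truth clean images of the new dataset
      (interleaved learning on real data).
    - distillation term: keep the student close to the previous network
      (the 'teacher') on GAN-replayed samples, preserving old knowledge.
    """
    task = mse(student_pred_real, target_real)
    kd = mse(student_pred_replay, teacher_pred_replay)
    return task + lambda_kd * kd

# Toy usage: scalar values stand in for de-rained image tensors.
loss = replay_distill_loss([1.0, 2.0], [1.0, 0.0], [0.5], [0.0])
print(loss)  # task = 2.0, kd = 0.25, total = 2.0 + 0.5 * 0.25 = 2.125
```

In practice the teacher would be a frozen snapshot of the de-raining network from before the new dataset was introduced, and the replayed inputs would come from the GAN trained on earlier data.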
Related papers
- Self-Regulated Neurogenesis for Online Data-Incremental Learning [9.254419196812233]
SERENA encodes each concept in a specialized network path called a 'concept cell'. Once a concept is learned, its corresponding concept cell is frozen, effectively preventing the forgetting of previously acquired information. Experimental results show that our method not only establishes new state-of-the-art results across ten benchmarks but also remarkably surpasses offline supervised batch learning performance.
arXiv Detail & Related papers (2024-03-13T13:51:12Z)
- Continual All-in-One Adverse Weather Removal with Knowledge Replay on a Unified Network Structure [92.8834309803903]
In real-world applications, image degradation caused by adverse weather is complex and varies with weather conditions across days and seasons.
We develop a novel continual learning framework with effective knowledge replay (KR) on a unified network structure.
It considers the characteristics of the image restoration task with multiple degenerations in continual learning, and the knowledge for different degenerations can be shared and accumulated.
arXiv Detail & Related papers (2024-03-12T03:50:57Z)
- Removing Rain Streaks via Task Transfer Learning [39.511454098771026]
We first statistically explore why the supervised deraining models cannot generalize well to real rainy cases.
In connected tasks, the label for real data can be easily obtained.
Our core idea is to learn representations from real data through task transfer to improve deraining generalization.
arXiv Detail & Related papers (2022-08-28T03:32:17Z)
- Transfer Learning with Deep Tabular Models [66.67017691983182]
We show that upstream data gives tabular neural networks a decisive advantage over GBDT models.
We propose a realistic medical diagnosis benchmark for tabular transfer learning.
We propose a pseudo-feature method for cases where the upstream and downstream feature sets differ.
arXiv Detail & Related papers (2022-06-30T14:24:32Z)
- PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z)
- A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning [8.188575923130662]
We argue that notable lessons from open set recognition, the identification of statistically deviating data outside of the observed dataset, and the adjacent field of active learning, are frequently overlooked in the deep learning era.
Our results show that this not only benefits each individual paradigm, but highlights the natural synergies in a common framework.
arXiv Detail & Related papers (2020-09-03T16:56:36Z)
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
- Automatic Recall Machines: Internal Replay, Continual Learning and the Brain [104.38824285741248]
Replay in neural networks involves training on sequential data with memorized samples, which counteracts forgetting of previous behavior caused by non-stationarity.
We present a method where these auxiliary samples are generated on the fly, given only the model that is being trained for the assessed objective. Instead of storing past samples, the implicit memory of learned samples within the assessed model itself is exploited.
arXiv Detail & Related papers (2020-06-22T15:07:06Z)
This list is automatically generated from the titles and abstracts of the papers in this site.