Gradient-Guided Parameter Mask for Multi-Scenario Image Restoration Under Adverse Weather
- URL: http://arxiv.org/abs/2411.16739v1
- Date: Sat, 23 Nov 2024 16:16:27 GMT
- Title: Gradient-Guided Parameter Mask for Multi-Scenario Image Restoration Under Adverse Weather
- Authors: Jilong Guo, Haobo Yang, Mo Zhou, Xinyu Zhang
- Abstract summary: We propose a novel Gradient-Guided Parameter Mask for Multi-Scenario Image Restoration under adverse weather.
Our method segments model parameters into common and specific components by evaluating the gradient variation intensity.
This enables the model to precisely and adaptively learn relevant features for each weather scenario, improving both efficiency and effectiveness without compromising on performance.
- Score: 16.956703689174198
- License:
- Abstract: Removing adverse weather conditions such as rain, raindrop, and snow from images is critical for various real-world applications, including autonomous driving, surveillance, and remote sensing. However, existing multi-task approaches typically rely on augmenting the model with additional parameters to handle multiple scenarios. While this enables the model to address diverse tasks, the introduction of extra parameters significantly complicates its practical deployment. In this paper, we propose a novel Gradient-Guided Parameter Mask for Multi-Scenario Image Restoration under adverse weather, designed to effectively handle image degradation under diverse weather conditions without additional parameters. Our method segments model parameters into common and specific components by evaluating the gradient variation intensity during training for each specific weather condition. This enables the model to precisely and adaptively learn relevant features for each weather scenario, improving both efficiency and effectiveness without compromising on performance. This method constructs specific masks based on gradient fluctuations to isolate parameters influenced by other tasks, ensuring that the model achieves strong performance across all scenarios without adding extra parameters. We demonstrate the state-of-the-art performance of our framework through extensive experiments on multiple benchmark datasets. Specifically, our method achieves PSNR scores of 29.22 on the Raindrop dataset, 30.76 on the Rain dataset, and 29.56 on the Snow100K dataset. Code is available at: \href{https://github.com/AierLab/MultiTask}{https://github.com/AierLab/MultiTask}.
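The abstract describes the mask construction only at a high level. The snippet below is a minimal sketch, assuming a generic PyTorch restoration model and one dataloader per weather scenario; the statistic (std of per-parameter gradient magnitudes over sampled batches) and the quantile threshold are illustrative choices, not necessarily the authors' exact recipe (see the linked repository for the reference implementation).

```python
# Hedged sketch of a gradient-guided parameter mask: estimate per-parameter
# gradient variation for one weather scenario, threshold it into a binary
# task-specific mask, and gate updates so only masked entries change.
import torch


def estimate_gradient_variation(model, loader, loss_fn, num_batches=50, device="cuda"):
    """Return a per-parameter measure of gradient variation intensity for one scenario."""
    model.to(device).train()
    params = {n: p for n, p in model.named_parameters() if p.requires_grad}
    grad_sum = {n: torch.zeros_like(p) for n, p in params.items()}
    grad_sq_sum = {n: torch.zeros_like(p) for n, p in params.items()}
    seen = 0
    for i, (degraded, clean) in enumerate(loader):
        if i >= num_batches:
            break
        model.zero_grad()
        loss_fn(model(degraded.to(device)), clean.to(device)).backward()
        for n, p in params.items():
            if p.grad is not None:
                g = p.grad.detach().abs()
                grad_sum[n] += g
                grad_sq_sum[n] += g * g
        seen += 1
    # Std of |grad| across the sampled batches as the "variation intensity".
    return {
        n: (grad_sq_sum[n] / seen - (grad_sum[n] / seen) ** 2).clamp_min(0).sqrt()
        for n in params
    }


def build_task_mask(variation, quantile=0.9):
    """Mark the most gradient-volatile parameters as task-specific (1); the rest stay common (0)."""
    flat = torch.cat([v.flatten() for v in variation.values()])
    threshold = torch.quantile(flat, quantile)  # illustrative global threshold
    return {n: (v > threshold).float() for n, v in variation.items()}


def restrict_updates_to_mask(model, mask):
    """Gate gradients so only the masked (task-specific) entries receive updates."""
    handles = []
    for n, p in model.named_parameters():
        if n in mask:
            m = mask[n].to(p.device)
            handles.append(p.register_hook(lambda grad, m=m: grad * m))
    return handles
```

In multi-scenario training, one such mask would be built per weather condition; batches from that condition would then update only its masked parameters, while the unmasked (common) parameters remain shared across all scenarios, so no extra parameters are introduced.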
Related papers
- Multi-Weather Image Restoration via Histogram-Based Transformer Feature Enhancement [14.986500375481546]
In adverse weather conditions, single-weather restoration models struggle to meet practical demands.
There is an urgent need for a model capable of effectively handling mixed weather conditions and enhancing image quality in an automated manner.
We propose a Task Sequence Generator module that, in conjunction with the Task Intra-patch Block, effectively extracts task-specific features embedded in degraded images.
arXiv Detail & Related papers (2024-09-10T08:47:03Z)
- Multiple weather images restoration using the task transformer and adaptive mixup strategy [14.986500375481546]
We introduce a novel multi-task severe weather removal model that can effectively handle complex weather conditions in an adaptive manner.
Our model incorporates a weather task sequence generator, enabling the self-attention mechanism to selectively focus on features specific to different weather types.
Our proposed model has achieved state-of-the-art performance on the publicly available dataset.
arXiv Detail & Related papers (2024-09-05T04:55:40Z)
- Dynamic Adapter Meets Prompt Tuning: Parameter-Efficient Transfer Learning for Point Cloud Analysis [51.14136878142034]
Point cloud analysis has achieved outstanding performance by transferring point cloud pre-trained models.
Existing methods for model adaptation usually update all model parameters, which is inefficient due to the high computational cost.
In this paper, we aim to study parameter-efficient transfer learning for point cloud analysis with an ideal trade-off between task performance and parameter efficiency.
arXiv Detail & Related papers (2024-03-03T08:25:04Z)
- Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal [97.53040662243768]
We propose a CLIP embedding module to make the network handle different weather conditions adaptively.
This module integrates the sample-specific weather prior extracted by the CLIP image encoder with the distribution-specific information learned by a set of parameters.
arXiv Detail & Related papers (2023-06-15T10:06:13Z)
- Counting Crowds in Bad Weather [68.50690406143173]
We propose a method for robust crowd counting in adverse weather scenarios.
Our model learns effective features and adaptive queries to account for large appearance variations.
Experimental results show that the proposed algorithm is effective in counting crowds under different weather types on benchmark datasets.
arXiv Detail & Related papers (2023-06-02T00:00:09Z)
- Sit Back and Relax: Learning to Drive Incrementally in All Weather Conditions [16.014293219912]
In autonomous driving scenarios, current object detection models show strong performance when tested in clear weather.
We propose Domain-Incremental Learning through Activation Matching (DILAM) to adapt only the affine parameters of a clear weather pre-trained network to different weather conditions.
Our memory bank is extremely lightweight, since affine parameters account for less than 2% of a typical object detector; a minimal sketch of this affine-parameter swap appears after this list.
arXiv Detail & Related papers (2023-05-30T11:37:41Z)
- An Efficient Domain-Incremental Learning Approach to Drive in All Weather Conditions [8.436505917796174]
Deep neural networks enable impressive visual perception performance for autonomous driving.
They are prone to forgetting previously learned information when adapting to different weather conditions.
We propose DISC -- Domain Incremental through Statistical Correction -- a simple zero-forgetting approach which can incrementally learn new tasks.
arXiv Detail & Related papers (2022-04-19T11:39:20Z)
- From Rain Generation to Rain Removal [67.71728610434698]
We build a full Bayesian generative model for rainy images in which the rain layer is parameterized as a generator.
We employ the variational inference framework to approximate the expected statistical distribution of rainy images.
Comprehensive experiments substantiate that the proposed model can faithfully extract the complex rain distribution.
arXiv Detail & Related papers (2020-08-08T18:56:51Z)
- Conditional Variational Image Deraining [158.76814157115223]
We propose a Conditional Variational Image Deraining (CVID) network for better deraining performance.
We propose a spatial density estimation (SDE) module to estimate a rain density map for each image.
Experiments on synthesized and real-world datasets show that the proposed CVID network achieves much better performance than previous deterministic methods on image deraining.
arXiv Detail & Related papers (2020-04-23T11:51:38Z)
- Multi-Scale Progressive Fusion Network for Single Image Deraining [84.0466298828417]
Rain streaks in the air appear with various degrees of blurring and at various resolutions due to their different distances from the camera.
Similar rain patterns are visible in a rain image as well as its multi-scale (or multi-resolution) versions.
In this work, we explore the multi-scale collaborative representation for rain streaks from the perspective of input image scales and hierarchical deep features.
arXiv Detail & Related papers (2020-03-24T17:22:37Z)
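For the DILAM entry above ("Sit Back and Relax"), the following is a minimal PyTorch sketch, not the authors' code, of what a per-domain affine-parameter memory bank can look like: only the gamma/beta terms of normalization layers are stored and swapped per weather domain. The layer types, naming, and the domain-detection step are assumptions.

```python
# Hedged sketch of an affine-parameter memory bank: snapshot and restore only
# the affine (gamma/beta) parameters of normalization layers per weather domain.
import torch
import torch.nn as nn


def collect_affine(model):
    """Snapshot the affine (gamma/beta) parameters of every normalization layer."""
    bank = {}
    for name, m in model.named_modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)) and m.weight is not None:
            bank[name] = {
                "weight": m.weight.detach().cpu().clone(),
                "bias": m.bias.detach().cpu().clone(),
            }
    return bank


def load_affine(model, bank):
    """Write a stored affine set back into the matching normalization layers."""
    with torch.no_grad():
        for name, m in model.named_modules():
            if name in bank:
                m.weight.copy_(bank[name]["weight"].to(m.weight.device))
                m.bias.copy_(bank[name]["bias"].to(m.bias.device))


# Usage sketch: after fine-tuning only these affine parameters on each weather
# domain, keep one lightweight entry per domain and swap it in at test time.
# memory_bank = {"clear": collect_affine(model), "fog": ..., "rain": ..., "snow": ...}
# load_affine(model, memory_bank[detected_domain])
```

Because only normalization affine terms are stored, each memory-bank entry stays well under the full model size, which matches the "less than 2% of a typical object detector" claim in the summary.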
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.