IRConStyle: Image Restoration Framework Using Contrastive Learning and
Style Transfer
- URL: http://arxiv.org/abs/2402.15784v3
- Date: Thu, 7 Mar 2024 11:00:02 GMT
- Title: IRConStyle: Image Restoration Framework Using Contrastive Learning and
Style Transfer
- Authors: Dongqi Fan, Xin Zhao, Liang Chang
- Abstract summary: We propose a novel module for image restoration called ConStyle, which can be efficiently integrated into any U-Net structure network.
We perform extensive experiments on various image restoration tasks, including denoising, deblurring, deraining, and dehazing.
The results on 19 benchmarks demonstrate that ConStyle can be integrated with any U-Net-based network and significantly enhance performance.
- Score: 5.361977985410345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the contrastive learning paradigm has achieved remarkable success
in high-level tasks such as classification, detection, and segmentation.
However, contrastive learning applied in low-level tasks, like image
restoration, is limited, and its effectiveness is uncertain. This raises a
question: Why does the contrastive learning paradigm not yield satisfactory
results in image restoration? In this paper, we conduct in-depth analyses and
propose three guidelines to address the above question. In addition, inspired
by style transfer and based on contrastive learning, we propose a novel module
for image restoration called ConStyle, which can be efficiently
integrated into any U-Net structure network. By leveraging the flexibility of
ConStyle, we develop a general restoration network for image restoration.
ConStyle and the general restoration network together form an image
restoration framework, namely IRConStyle. To demonstrate the
capability and compatibility of ConStyle, we replace the general restoration
network with transformer-based, CNN-based, and MLP-based networks,
respectively. We perform extensive experiments on various image restoration
tasks, including denoising, deblurring, deraining, and dehazing. The results on
19 benchmarks demonstrate that ConStyle can be integrated with any U-Net-based
network and significantly enhance performance. For instance, ConStyle NAFNet
significantly outperforms the original NAFNet on SOTS outdoor (dehazing) and
Rain100H (deraining) datasets, with PSNR improvements of 4.16 dB and 3.58 dB,
respectively, while using 85% fewer parameters.
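
For a concrete picture of the plug-in idea described above, here is a minimal PyTorch sketch of how a ConStyle-like auxiliary encoder could attach a contrastive (InfoNCE-style) objective to an arbitrary U-Net restoration backbone. This is not the authors' implementation: the names AuxEncoder, TinyUNet, and info_nce, the loss weighting, and the choice of positives and negatives are illustrative assumptions made only for this sketch.

# Hypothetical sketch, not the paper's code: a small auxiliary encoder supplies
# a contrastive loss alongside the usual restoration loss of a U-Net backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AuxEncoder(nn.Module):
    """Maps an image to a normalized latent vector used by the contrastive term."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

class TinyUNet(nn.Module):
    """Stand-in for any U-Net-style restoration network."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Conv2d(3, 32, 3, padding=1)
        self.dec = nn.Conv2d(32, 3, 3, padding=1)

    def forward(self, x):
        return self.dec(F.relu(self.enc(x))) + x  # residual restoration

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE loss: pull anchor toward its positive, push it from negatives."""
    pos = (anchor * positive).sum(dim=1, keepdim=True) / tau   # (B, 1)
    neg = anchor @ negatives.t() / tau                          # (B, N)
    logits = torch.cat([pos, neg], dim=1)
    targets = torch.zeros(len(anchor), dtype=torch.long)        # class 0 = positive
    return F.cross_entropy(logits, targets)

# One illustrative training step: restoration (L1) loss plus a contrastive term
# that treats the clean target as the positive and other degraded images as negatives.
aux, unet = AuxEncoder(), TinyUNet()
degraded = torch.rand(4, 3, 64, 64)
clean = torch.rand(4, 3, 64, 64)
other_degraded = torch.rand(8, 3, 64, 64)

restored = unet(degraded)
loss_rest = F.l1_loss(restored, clean)
loss_con = info_nce(aux(restored), aux(clean), aux(other_degraded))
loss = loss_rest + 0.1 * loss_con   # weighting chosen arbitrarily for the sketch
loss.backward()

Treating the clean target as the positive and unrelated degraded images as negatives is only one possible choice; the paper's three guidelines concern exactly how such choices should be made for low-level tasks, so consult the full text for the actual design.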
Related papers
- Cat-AIR: Content and Task-Aware All-in-One Image Restoration [50.46278224313221]
Cat-AIR is a novel Content and Task-aware framework for All-in-one Image Restoration.
Cat-AIR incorporates an alternating spatial-channel attention mechanism that adaptively balances the local and global information for different tasks.
Experiments demonstrate that Cat-AIR achieves state-of-the-art results across a wide range of restoration tasks, requiring fewer FLOPs than previous methods.
arXiv Detail & Related papers (2025-03-23T03:25:52Z)
- ConStyle v2: A Strong Prompter for All-in-One Image Restoration [5.693207891187567]
This paper introduces ConStyle v2, a strong plug-and-play prompter for U-Net Image Restoration models.
Experiments show that ConStyle v2 can turn any U-Net-style Image Restoration model into an all-in-one Image Restoration model.
arXiv Detail & Related papers (2024-06-26T10:46:44Z)
- Unified-Width Adaptive Dynamic Network for All-In-One Image Restoration [50.81374327480445]
We introduce a novel concept positing that intricate image degradation can be represented in terms of elementary degradations.
We propose the Unified-Width Adaptive Dynamic Network (U-WADN), consisting of two pivotal components: a Width Adaptive Backbone (WAB) and a Width Selector (WS).
The proposed U-WADN achieves better performance while simultaneously reducing up to 32.3% of FLOPs and providing approximately 15.7% real-time acceleration.
arXiv Detail & Related papers (2024-01-24T04:25:12Z)
- SPIRE: Semantic Prompt-Driven Image Restoration [66.26165625929747]
We develop SPIRE, a Semantic and restoration Prompt-driven Image Restoration framework.
Our approach is the first framework that supports fine-level instruction through language-based quantitative specification of the restoration strength.
Our experiments demonstrate the superior restoration performance of SPIRE compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-12-18T17:02:30Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process, and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- Learning from History: Task-agnostic Model Contrastive Learning for Image Restoration [79.04007257606862]
This paper introduces an innovative method termed 'learning from history', which dynamically generates negative samples from the target model itself.
Our approach, named Model Contrastive Learning for Image Restoration (MCLIR), rejuvenates latency models as negative models, making it compatible with diverse image restoration tasks.
arXiv Detail & Related papers (2023-09-12T07:50:54Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet).
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- PromptIR: Prompting for All-in-One Blind Image Restoration [64.02374293256001]
We present a prompt-based learning approach, PromptIR, for All-In-One image restoration.
Our method uses prompts to encode degradation-specific information, which is then used to dynamically guide the restoration network.
PromptIR offers a generic and efficient plugin module with few lightweight prompts.
arXiv Detail & Related papers (2023-06-22T17:59:52Z)
- ClassPruning: Speed Up Image Restoration Networks by Dynamic N:M Pruning [25.371802581339576]
ClassPruning can help existing methods save approximately 40% FLOPs while maintaining performance.
We propose a novel training strategy along with two additional loss terms to stabilize training and improve performance.
arXiv Detail & Related papers (2022-11-10T11:14:15Z)
- SwinIR: Image Restoration Using Swin Transformer [124.8794221439392]
We propose a strong baseline model SwinIR for image restoration based on the Swin Transformer.
SwinIR consists of three parts: shallow feature extraction, deep feature extraction and high-quality image reconstruction.
We conduct experiments on three representative tasks: image super-resolution, image denoising and JPEG compression artifact reduction.
arXiv Detail & Related papers (2021-08-23T15:55:32Z)
- Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images [42.26124628784883]
We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
arXiv Detail & Related papers (2021-08-12T05:54:09Z)