All-in-one Multi-degradation Image Restoration Network via Hierarchical
Degradation Representation
- URL: http://arxiv.org/abs/2308.03021v1
- Date: Sun, 6 Aug 2023 04:51:41 GMT
- Title: All-in-one Multi-degradation Image Restoration Network via Hierarchical
Degradation Representation
- Authors: Cheng Zhang, Yu Zhu, Qingsen Yan, Jinqiu Sun, Yanning Zhang
- Abstract summary: We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
- Score: 47.00239809958627
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The aim of image restoration is to recover high-quality images from distorted
ones. However, current methods usually focus on a single task (e.g.,
denoising, deblurring or super-resolution) which cannot address the needs of
real-world multi-task processing, especially on mobile devices. Thus,
developing an all-in-one method that can restore images from various unknown
distortions is a significant challenge. Previous works have employed
contrastive learning to learn the degradation representation from observed
images, but this often leads to representation drift caused by deficient
positive and negative pairs. To address this issue, we propose a novel
All-in-one Multi-degradation Image Restoration Network (AMIRNet) that can
effectively capture and utilize accurate degradation representation for image
restoration. AMIRNet learns a degradation representation for unknown degraded
images by progressively constructing a tree structure through clustering,
without any prior knowledge of degradation information. This tree-structured
representation explicitly reflects the consistency and discrepancy of various
distortions, providing a specific clue for image restoration. To further
enhance the performance of the image restoration network and overcome domain
gaps caused by unknown distortions, we design a feature transform block (FTB)
that aligns domains and refines features with the guidance of the degradation
representation. We conduct extensive experiments on multiple distorted
datasets, demonstrating the effectiveness of our method and its advantages over
state-of-the-art restoration methods both qualitatively and quantitatively.
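The abstract describes the mechanism only at a high level, so the following is a minimal sketch, under stated assumptions, of the two ideas it names: progressively clustering degradation embeddings into a tree, and using the resulting representation to guide restoration features. The k-means splitting, the FiLM-style scale-and-shift modulation, and all module names and sizes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of the two ideas the abstract describes:
# (1) progressively clustering degradation embeddings into a tree, and
# (2) a feature transform block (FTB) that modulates restoration features with the
#     resulting degradation representation. All names and sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn
from sklearn.cluster import KMeans


def build_degradation_tree(embeddings, depth=2, branching=2):
    """Recursively split degradation embeddings with k-means, returning a
    per-sample path code (one cluster index per tree level)."""
    n = embeddings.shape[0]
    codes = np.zeros((n, depth), dtype=np.int64)
    groups = [(np.arange(n), 0)]                 # (sample indices, tree level)
    while groups:
        idx, level = groups.pop()
        if level >= depth or len(idx) < branching:
            continue
        labels = KMeans(n_clusters=branching, n_init=10).fit_predict(embeddings[idx])
        codes[idx, level] = labels
        for c in range(branching):
            groups.append((idx[labels == c], level + 1))
    return codes


class FeatureTransformBlock(nn.Module):
    """Hypothetical FTB: predicts per-channel scale and shift from the
    degradation representation and applies them to restoration features."""
    def __init__(self, feat_channels=64, degra_dim=128):
        super().__init__()
        self.to_scale_shift = nn.Sequential(
            nn.Linear(degra_dim, feat_channels * 2), nn.ReLU(inplace=True),
            nn.Linear(feat_channels * 2, feat_channels * 2),
        )

    def forward(self, feat, degra_repr):
        scale, shift = self.to_scale_shift(degra_repr).chunk(2, dim=1)
        return feat * (1 + scale[..., None, None]) + shift[..., None, None]


if __name__ == "__main__":
    # Toy run: 64 fake degradation embeddings -> 2-level tree codes,
    # then modulate a feature map with a random degradation vector.
    codes = build_degradation_tree(np.random.randn(64, 128).astype(np.float32))
    print("tree codes for first 4 samples:\n", codes[:4])
    ftb = FeatureTransformBlock()
    out = ftb(torch.randn(2, 64, 32, 32), torch.randn(2, 128))
    print("modulated feature shape:", out.shape)
```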
Related papers
- Multi-Scale Representation Learning for Image Restoration with State-Space Model [13.622411683295686]
We propose a novel Multi-Scale State-Space Model-based network (MS-Mamba) for efficient image restoration.
Our proposed method achieves new state-of-the-art performance while maintaining low computational complexity.
arXiv Detail & Related papers (2024-08-19T16:42:58Z)
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize a pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- Prompt-based Ingredient-Oriented All-in-One Image Restoration [0.0]
We propose a novel data ingredient-oriented approach to tackle multiple image degradation tasks.
Specifically, we utilize an encoder to capture features and introduce prompts with degradation-specific information to guide the decoder.
Our method performs competitively with the state-of-the-art.
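The summary above names the mechanism without detail, so here is a minimal sketch, assuming a PromptIR-style design, of how learnable degradation prompts could guide a decoder stage; the prompt bank, pooling, and fusion convolution are illustrative assumptions, not this paper's actual architecture.

```python
# Hedged sketch of a prompt-guided decoder stage (assumed PromptIR-style design,
# not this paper's code). A bank of learnable prompts is weighted by the incoming
# features and fused back into them to carry degradation-specific cues.
import torch
import torch.nn as nn


class DegradationPromptBlock(nn.Module):
    def __init__(self, channels=64, num_prompts=5, prompt_size=16):
        super().__init__()
        # Learnable prompt bank: num_prompts feature maps of shape (channels, H, W).
        self.prompts = nn.Parameter(torch.randn(num_prompts, channels, prompt_size, prompt_size))
        self.to_weights = nn.Linear(channels, num_prompts)   # predicts prompt mixing weights
        self.fuse = nn.Conv2d(channels * 2, channels, kernel_size=3, padding=1)

    def forward(self, feat):
        b, c, h, w = feat.shape
        # Global statistics of the degraded features choose a prompt mixture.
        weights = torch.softmax(self.to_weights(feat.mean(dim=(2, 3))), dim=1)  # (b, num_prompts)
        prompt = torch.einsum("bn,nchw->bchw", weights, self.prompts)
        prompt = nn.functional.interpolate(prompt, size=(h, w), mode="bilinear", align_corners=False)
        # Concatenate the selected prompt with the features and fuse.
        return self.fuse(torch.cat([feat, prompt], dim=1))


if __name__ == "__main__":
    block = DegradationPromptBlock()
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```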
arXiv Detail & Related papers (2023-09-06T15:05:04Z)
- Wide & deep learning for spatial & intensity adaptive image restoration [16.340992967330603]
We propose an ingenious and efficient multi-frame image restoration network (DparNet) with a wide & deep architecture.
The degradation prior is learned directly from degraded images in the form of a key degradation parameter matrix.
The wide & deep architecture in DparNet enables the learned parameters to directly modulate the final restoration results.
arXiv Detail & Related papers (2023-05-30T03:24:09Z)
- Invertible Rescaling Network and Its Extensions [118.72015270085535]
In this work, we propose a novel invertible framework to model the bidirectional degradation and restoration from a new perspective.
We develop invertible models to generate valid degraded images and transform the distribution of lost contents.
Then restoration is made tractable by applying the inverse transformation on the generated degraded image together with a randomly-drawn latent variable.
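As a toy illustration of the stated recipe (run an invertible map forward to obtain the degraded observation plus a latent, then apply the inverse with a freshly sampled latent to restore), the sketch below uses a single additive coupling layer; the half-and-half split and the tiny coupling network are assumptions for illustration only, not the paper's model.

```python
# Toy illustration (not the paper's model) of restoring with an invertible map:
# forward() splits an input into a kept part and a transformed "lost" part; for
# restoration we run inverse() on the kept part paired with a randomly drawn latent.
import torch
import torch.nn as nn


class AdditiveCoupling(nn.Module):
    """Invertible additive coupling layer: y = [a, b + f(a)], exactly invertible."""
    def __init__(self, dim=16):
        super().__init__()
        self.f = nn.Sequential(nn.Linear(dim // 2, dim), nn.ReLU(), nn.Linear(dim, dim // 2))

    def forward(self, x):
        a, b = x.chunk(2, dim=-1)
        return torch.cat([a, b + self.f(a)], dim=-1)

    def inverse(self, y):
        a, b = y.chunk(2, dim=-1)
        return torch.cat([a, b - self.f(a)], dim=-1)


if __name__ == "__main__":
    layer = AdditiveCoupling(dim=16)
    x = torch.randn(4, 16)
    y = layer(x)
    kept, lost = y.chunk(2, dim=-1)                         # "degraded" part and lost content
    print(torch.allclose(layer.inverse(y), x, atol=1e-6))   # exact invertibility
    # Restoration with the lost half replaced by a random latent, as the abstract describes.
    restored = layer.inverse(torch.cat([kept, torch.randn_like(lost)], dim=-1))
    print(restored.shape)
```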
arXiv Detail & Related papers (2022-10-09T06:58:58Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper pursues the holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN [13.335546116599494]
This paper proposes Dual Perceptual Loss (DP Loss), which replaces the original perceptual loss for single image super-resolution reconstruction.
Because the VGG features and the ResNet features are complementary, the proposed DP Loss exploits the advantages of learning both features simultaneously.
The qualitative and quantitative analysis on benchmark datasets demonstrates the superiority of our proposed method over state-of-the-art super-resolution methods.
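A minimal sketch of the stated idea follows: a perceptual loss that sums feature distances from both a VGG and a ResNet backbone. The layer cut-offs, the L1 criterion, and the loss weights are assumptions rather than the paper's exact formulation; untrained weights keep the example self-contained, whereas pretrained ImageNet weights would normally be used.

```python
# Hedged sketch of a dual perceptual loss: distances between deep features of the
# restored and reference images are taken from both a VGG and a ResNet backbone and
# summed. Layer choices and weights are illustrative; the paper's recipe may differ.
import torch
import torch.nn as nn
import torchvision.models as models


class DualPerceptualLoss(nn.Module):
    def __init__(self):
        super().__init__()
        # weights=None keeps the example self-contained; pretrained weights
        # (e.g. VGG19_Weights.IMAGENET1K_V1) would normally be used.
        vgg = models.vgg19(weights=None).features[:18]       # early/mid VGG conv features
        resnet = models.resnet50(weights=None)
        self.vgg = vgg.eval()
        self.resnet = nn.Sequential(                         # ResNet conv features up to layer2
            resnet.conv1, resnet.bn1, resnet.relu, resnet.maxpool,
            resnet.layer1, resnet.layer2,
        ).eval()
        for p in self.parameters():
            p.requires_grad_(False)
        self.criterion = nn.L1Loss()

    def forward(self, restored, reference, vgg_weight=1.0, resnet_weight=1.0):
        loss_vgg = self.criterion(self.vgg(restored), self.vgg(reference))
        loss_res = self.criterion(self.resnet(restored), self.resnet(reference))
        return vgg_weight * loss_vgg + resnet_weight * loss_res


if __name__ == "__main__":
    loss_fn = DualPerceptualLoss()
    sr, hr = torch.rand(1, 3, 128, 128), torch.rand(1, 3, 128, 128)
    print(loss_fn(sr, hr).item())
```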
arXiv Detail & Related papers (2022-01-17T12:42:56Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
- Learning Enriched Features for Real Image Restoration and Enhancement [166.17296369600774]
Convolutional neural networks (CNNs) have achieved dramatic improvements over conventional approaches for the image restoration task.
We present a novel architecture with the collective goals of maintaining spatially-precise high-resolution representations through the entire network.
Our approach learns an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
arXiv Detail & Related papers (2020-03-15T11:04:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.