Spatially-Adaptive Image Restoration using Distortion-Guided Networks
- URL: http://arxiv.org/abs/2108.08617v1
- Date: Thu, 19 Aug 2021 11:02:25 GMT
- Title: Spatially-Adaptive Image Restoration using Distortion-Guided Networks
- Authors: Kuldeep Purohit, Maitreya Suin, A. N. Rajagopalan, Vishnu Naresh Boddeti
- Abstract summary: We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
- Score: 51.89245800461537
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present a general learning-based solution for restoring images suffering
from spatially-varying degradations. Prior approaches are typically
degradation-specific and employ the same processing across different images and
different pixels within. However, we hypothesize that such spatially rigid
processing is suboptimal for simultaneously restoring the degraded pixels as
well as reconstructing the clean regions of the image. To overcome this
limitation, we propose SPAIR, a network design that harnesses
distortion-localization information and dynamically adjusts computation to
difficult regions in the image. SPAIR comprises two components: (1) a
localization network that identifies degraded pixels, and (2) a restoration
network that exploits knowledge from the localization network in filter and
feature domain to selectively and adaptively restore degraded pixels. Our key
idea is to exploit the non-uniformity of heavy degradations in the spatial
domain and suitably embed this knowledge within distortion-guided modules
performing sparse normalization, feature extraction, and attention. Our
architecture is agnostic to the physical formation model and generalizes across
several types of
spatially-varying degradations. We demonstrate the efficacy of SPAIR
individually on four restoration tasks: removal of rain streaks, raindrops,
shadows and motion blur. Extensive qualitative and quantitative comparisons
with prior art on 11 benchmark datasets demonstrate that our
degradation-agnostic network design offers significant performance gains over
state-of-the-art degradation-specific architectures. Code available at
https://github.com/human-analysis/spatially-adaptive-image-restoration.
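The two-component design described in the abstract (a localization network producing a degradation map, and a restoration network that uses that map for sparse normalization and selective processing of degraded pixels) can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the probability threshold, the per-channel clean-pixel statistics, and the stand-in restoration branch are all assumptions made purely for illustration:

```python
import numpy as np

def sparse_normalize(features, clean_mask, eps=1e-5):
    """Normalize each channel using statistics computed from clean pixels
    only, so degraded regions do not corrupt the normalization statistics."""
    # features: (C, H, W); clean_mask: (H, W) boolean, True = clean pixel
    out = np.empty_like(features)
    for c in range(features.shape[0]):
        clean_vals = features[c][clean_mask]
        mu, sigma = clean_vals.mean(), clean_vals.std() + eps
        out[c] = (features[c] - mu) / sigma
    return out

def distortion_guided_restore(image_feats, degradation_prob, threshold=0.5):
    """Toy SPAIR-style step: a localization map decides which pixels receive
    extra restoration processing, while clean pixels pass through unchanged."""
    clean_mask = degradation_prob < threshold
    normed = sparse_normalize(image_feats, clean_mask)
    # Spend "extra computation" only on degraded pixels (here: a simple
    # rescaling stands in for a learned restoration branch).
    restored = normed.copy()
    degraded = ~clean_mask
    restored[:, degraded] = 0.5 * normed[:, degraded]
    return restored, clean_mask

# Usage: 4-channel feature map with a per-pixel degradation probability.
np.random.seed(0)
feats = np.random.randn(4, 8, 8)
prob = np.random.rand(8, 8)
out, mask = distortion_guided_restore(feats, prob)
```

The key point the sketch captures is the decoupling: the localization output gates both the normalization statistics and where restoration effort is applied, which is what makes the processing spatially adaptive rather than uniform.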
Related papers
- Realistic Extreme Image Rescaling via Generative Latent Space Learning [51.85790402171696]
We propose a novel framework called Latent Space Based Image Rescaling (LSBIR) for extreme image rescaling tasks.
LSBIR effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model to generate realistic HR images.
In the first stage, a pseudo-invertible encoder-decoder models the bidirectional mapping between the latent features of the HR image and the target-sized LR image.
In the second stage, the reconstructed features from the first stage are refined by a pre-trained diffusion model to generate more faithful and visually pleasing details.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- Research on Image Super-Resolution Reconstruction Mechanism based on Convolutional Neural Network [8.739451985459638]
Super-resolution algorithms transform one or more sets of low-resolution images captured from the same scene into high-resolution images.
The extraction of image features and nonlinear mapping methods in the reconstruction process remain challenging for existing algorithms.
The objective is to recover high-quality, high-resolution images from low-resolution images.
arXiv Detail & Related papers (2024-07-18T06:50:39Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- Wide & deep learning for spatial & intensity adaptive image restoration [16.340992967330603]
We propose an efficient multi-frame image restoration network (DparNet) with a wide & deep architecture.
The degradation prior is directly learned from degraded images in form of key degradation parameter matrix.
The wide & deep architecture in DparNet enables the learned parameters to directly modulate the final restoring results.
arXiv Detail & Related papers (2023-05-30T03:24:09Z)
- Learning Enriched Features for Fast Image Restoration and Enhancement [166.17296369600774]
This paper presents a holistic goal of maintaining spatially-precise high-resolution representations through the entire network.
We learn an enriched set of features that combines contextual information from multiple scales, while simultaneously preserving the high-resolution spatial details.
Our approach achieves state-of-the-art results for a variety of image processing tasks, including defocus deblurring, image denoising, super-resolution, and image enhancement.
arXiv Detail & Related papers (2022-04-19T17:59:45Z)
- Dual Perceptual Loss for Single Image Super-Resolution Using ESRGAN [13.335546116599494]
This paper proposes a method called Dual Perceptual Loss (DP Loss) to replace the original perceptual loss to solve the problem of single image super-resolution reconstruction.
Due to the complementary property between the VGG features and the ResNet features, the proposed DP Loss considers the advantages of learning two features simultaneously.
The qualitative and quantitative analysis on benchmark datasets demonstrates the superiority of our proposed method over state-of-the-art super-resolution methods.
arXiv Detail & Related papers (2022-01-17T12:42:56Z)
- Gated Fusion Network for Degraded Image Super Resolution [78.67168802945069]
We propose a dual-branch convolutional neural network to extract base features and recovered features separately.
By decomposing the feature extraction step into two task-independent streams, the dual-branch model can facilitate the training process.
arXiv Detail & Related papers (2020-03-02T13:28:32Z)
- Image Fine-grained Inpainting [89.17316318927621]
We present a one-stage model that utilizes dense combinations of dilated convolutions to obtain larger and more effective receptive fields.
To better train this efficient generator, in addition to the frequently used VGG feature-matching loss, we design a novel self-guided regression loss.
We also employ a discriminator with local and global branches to ensure local-global contents consistency.
arXiv Detail & Related papers (2020-02-07T03:45:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.