Wide & deep learning for spatial & intensity adaptive image restoration
- URL: http://arxiv.org/abs/2305.18708v1
- Date: Tue, 30 May 2023 03:24:09 GMT
- Title: Wide & deep learning for spatial & intensity adaptive image restoration
- Authors: Yadong Wang and Xiangzhi Bai
- Abstract summary: We propose an ingenious and efficient multi-frame image restoration network (DparNet) with wide & deep architecture.
The degradation prior is directly learned from degraded images in the form of a key degradation parameter matrix.
The wide & deep architecture in DparNet enables the learned parameters to directly modulate the final restoration results.
- Score: 16.340992967330603
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Most existing deep learning-based image restoration methods usually aim to
remove degradation with uniform spatial distribution and constant intensity,
making insufficient use of degradation prior knowledge. Here we bootstrap deep
neural networks to suppress complex image degradation whose intensity is
spatially variable, by utilizing prior knowledge from degraded images.
Specifically, we propose an ingenious and efficient multi-frame image
restoration network (DparNet) with wide & deep architecture, which integrates
degraded images and prior knowledge of degradation to reconstruct images with
ideal clarity and stability. The degradation prior is directly learned from
degraded images in the form of a key degradation parameter matrix, with no
requirement of any external knowledge. The wide & deep architecture in DparNet
enables the learned parameters to directly modulate the final restoration
results, boosting spatial & intensity adaptive image restoration. We
demonstrate the proposed method on two representative image restoration
applications: image denoising and suppression of atmospheric turbulence effects
in images. Two large datasets, containing 109,536 and 49,744 images
respectively, were constructed to support our experiments. The experimental
results show that our DparNet significantly outperforms SoTA methods in
restoration performance and network efficiency. More importantly, by utilizing
the learned degradation parameters via wide & deep learning, we can improve the
PSNR of image restoration by 0.6~1.1 dB with less than a 2% increase in model
parameters and computational complexity. Our work suggests that degraded
images may hide key information of the degradation process, which can be
utilized to boost spatial & intensity adaptive image restoration.
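The abstract does not give DparNet's exact layer configuration, but the wide & deep idea it describes can be sketched: a shallow "wide" branch estimates a per-pixel degradation-parameter matrix directly from the degraded frames, and that matrix modulates the output of a "deep" restoration branch. The PyTorch sketch below is a minimal illustration under these assumptions; the class names (ParamEstimator, DeepBranch, WideDeepRestorer), layer sizes, and the gating-style fusion are hypothetical and not taken from the paper.

```python
# Hypothetical sketch (not the authors' code): a wide & deep restoration network in
# which a degradation-parameter matrix learned from the degraded frames modulates
# the output of a deep restoration branch. All layer choices are illustrative.
import torch
import torch.nn as nn


class ParamEstimator(nn.Module):
    """Shallow 'wide' branch: estimates a per-pixel degradation-parameter map
    (e.g. local noise level or turbulence strength) from the degraded frames."""

    def __init__(self, in_frames: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_frames, 16, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),  # parameter map in [0, 1]
        )

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)  # (B, 1, H, W)


class DeepBranch(nn.Module):
    """Deep branch: a small conv stack standing in for the main restoration network."""

    def __init__(self, in_frames: int, width: int = 32, depth: int = 4):
        super().__init__()
        layers = [nn.Conv2d(in_frames, width, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(width, 1, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        return self.net(frames)  # (B, 1, H, W) restored estimate


class WideDeepRestorer(nn.Module):
    """Combines both branches: the parameter map gates how strongly the deep
    branch's correction is applied at each pixel (spatial & intensity adaptivity)."""

    def __init__(self, in_frames: int = 5):
        super().__init__()
        self.params = ParamEstimator(in_frames)
        self.deep = DeepBranch(in_frames)
        self.fuse = nn.Conv2d(2, 1, 3, padding=1)  # merge modulated output and reference frame

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        p = self.params(frames)        # learned degradation-parameter matrix
        restored = self.deep(frames)   # deep restoration estimate
        mid = frames.shape[1] // 2
        reference = frames[:, mid:mid + 1]               # central frame of the burst
        modulated = p * restored + (1 - p) * reference   # stronger correction where degradation is strong
        return self.fuse(torch.cat([modulated, reference], dim=1))


if __name__ == "__main__":
    model = WideDeepRestorer(in_frames=5)
    burst = torch.randn(2, 5, 64, 64)  # batch of 2 five-frame grayscale bursts
    print(model(burst).shape)          # torch.Size([2, 1, 64, 64])
```

Because the parameter map is estimated from the degraded input itself, the modulation can vary per pixel and per degradation intensity, which is the spatial & intensity adaptivity the abstract refers to. The wide branch adds only a tiny parameter count, consistent in spirit with the reported sub-2% overhead, though that figure applies to the authors' architecture, not this sketch.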
Related papers
- Multi-Scale Representation Learning for Image Restoration with State-Space Model [13.622411683295686]
We propose a novel Multi-Scale State-Space Model-based network (MS-Mamba) for efficient image restoration.
Our proposed method achieves new state-of-the-art performance while maintaining low computational complexity.
arXiv Detail & Related papers (2024-08-19T16:42:58Z)
- Realistic Extreme Image Rescaling via Generative Latent Space Learning [51.85790402171696]
We propose a novel framework called Latent Space Based Image Rescaling (LSBIR) for extreme image rescaling tasks.
LSBIR effectively leverages powerful natural image priors learned by a pre-trained text-to-image diffusion model to generate realistic HR images.
In the first stage, a pseudo-invertible encoder-decoder models the bidirectional mapping between the latent features of the HR image and the target-sized LR image.
In the second stage, the reconstructed features from the first stage are refined by a pre-trained diffusion model to generate more faithful and visually pleasing details.
arXiv Detail & Related papers (2024-08-17T09:51:42Z)
- Diff-Restorer: Unleashing Visual Prompts for Diffusion-based Universal Image Restoration [19.87693298262894]
We propose Diff-Restorer, a universal image restoration method based on the diffusion model.
We utilize a pre-trained visual language model to extract visual prompts from degraded images.
We also design a Degradation-aware Decoder to perform structural correction and convert the latent code to the pixel domain.
arXiv Detail & Related papers (2024-07-04T05:01:10Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet).
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- Is Deep Image Prior in Need of a Good Education? [57.3399060347311]
Deep image prior was introduced as an effective prior for image reconstruction.
Despite its impressive reconstructive properties, the approach is slow when compared to learned or traditional reconstruction techniques.
We develop a two-stage learning paradigm to address the computational challenge.
arXiv Detail & Related papers (2021-11-23T15:08:26Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Deep Amended Gradient Descent for Efficient Spectral Reconstruction from Single RGB Images [42.26124628784883]
We propose a compact, efficient, and end-to-end learning-based framework, namely AGD-Net.
We first formulate the problem explicitly based on the classic gradient descent algorithm.
AGD-Net can improve the reconstruction quality by more than 1.0 dB on average.
arXiv Detail & Related papers (2021-08-12T05:54:09Z)
- Underwater Image Restoration via Contrastive Learning and a Real-world Dataset [59.35766392100753]
We present a novel method for underwater image restoration based on an unsupervised image-to-image translation framework.
Our proposed method leverages contrastive learning and generative adversarial networks to maximize the mutual information between raw and restored images.
arXiv Detail & Related papers (2021-06-20T16:06:26Z)