Learning Degradation Representations for Image Deblurring
- URL: http://arxiv.org/abs/2208.05244v1
- Date: Wed, 10 Aug 2022 09:53:16 GMT
- Title: Learning Degradation Representations for Image Deblurring
- Authors: Dasong Li, Yi Zhang, Ka Chun Cheung, Xiaogang Wang, Hongwei Qin,
Hongsheng Li
- Abstract summary: We propose a framework to learn spatially adaptive degradation representations of blurry images.
A novel joint image reblurring and deblurring learning process is presented to improve the expressiveness of degradation representations.
Experiments on the GoPro and RealBlur datasets demonstrate that our proposed deblurring framework with the learned degradation representations outperforms state-of-the-art methods.
- Score: 37.80709422920307
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In various learning-based image restoration tasks, such as image denoising
and image super-resolution, degradation representations have been widely used to
model the degradation process and handle complicated degradation patterns.
However, they are less explored in learning-based image deblurring, because blur
kernel estimation does not perform well in challenging real-world cases. We argue
that modeling degradation representations is particularly necessary for image
deblurring, since blurry patterns typically show much larger variations than
noisy patterns or high-frequency textures. In this paper, we propose a
framework to learn spatially adaptive degradation representations of blurry
images. A novel joint image reblurring and deblurring learning process is
presented to improve the expressiveness of the degradation representations. To make
the learned degradation representations effective in reblurring and deblurring, we
propose a Multi-Scale Degradation Injection Network (MSDI-Net) to integrate
them into the neural networks. With this integration, MSDI-Net can adaptively
handle diverse and complicated blur patterns. Experiments on the GoPro
and RealBlur datasets demonstrate that our deblurring framework with
the learned degradation representations outperforms state-of-the-art methods
by appealing margins. The code is released at
https://github.com/dasongli1/Learning_degradation.
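The abstract does not spell out the joint reblurring-deblurring objective, but its structure can be illustrated with a toy NumPy sketch. Everything below is an assumption for illustration only: the encoder, reblurring, and deblurring operators are simple hand-written placeholders (a gradient-based blur-strength map, a box-filter mix, and unsharp masking), not the paper's learned networks. The point is only the shape of the loss: one spatially adaptive degradation map drives both a reblurring branch (sharp -> blurry) and a deblurring branch (blurry -> sharp).

```python
import numpy as np

def _box_mean(img):
    # 3x3 local mean with edge padding; a stand-in for a blur operator.
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    return sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def degradation_encoder(blurry):
    # Toy stand-in for the representation encoder: a per-pixel blur-strength
    # map in (0, 1], higher where the image has weaker gradients.
    gy, gx = np.gradient(blurry)
    sharpness = np.abs(gx) + np.abs(gy)
    return 1.0 / (1.0 + sharpness)

def reblur(sharp, degradation):
    # Toy reblurring branch: per-pixel mix of the sharp image and its local
    # mean, weighted by the degradation map (stronger degradation -> more blur).
    return (1.0 - degradation) * sharp + degradation * _box_mean(sharp)

def deblur(blurry, degradation):
    # Toy deblurring branch: unsharp masking whose strength follows the
    # same spatially adaptive degradation map.
    return blurry + degradation * (blurry - _box_mean(blurry))

def joint_loss(sharp, blurry):
    # Joint objective: the shared degradation representation must explain
    # both how to reblur the sharp image and how to deblur the blurry one.
    d = degradation_encoder(blurry)
    reblur_loss = np.mean((reblur(sharp, d) - blurry) ** 2)
    deblur_loss = np.mean((deblur(blurry, d) - sharp) ** 2)
    return reblur_loss + deblur_loss
```

In the paper's framework both branches are trained networks and the representation is multi-scale; this sketch only mirrors the coupling, in which a single degradation map is penalized from both directions, which is what pushes it to stay expressive.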
Related papers
- Multi-Scale Representation Learning for Image Restoration with State-Space Model [13.622411683295686]
We propose a novel Multi-Scale State-Space Model-based approach (MS-Mamba) for efficient image restoration.
Our proposed method achieves new state-of-the-art performance while maintaining low computational complexity.
arXiv Detail & Related papers (2024-08-19T16:42:58Z)
- GAURA: Generalizable Approach for Unified Restoration and Rendering of Arbitrary Views [28.47730275628715]
We propose a generalizable neural rendering method that can perform high-fidelity novel view synthesis under several degradations.
Our method, GAURA, is learning-based and does not require any test-time scene-specific optimization.
arXiv Detail & Related papers (2024-07-11T06:44:37Z)
- DGNet: Dynamic Gradient-Guided Network for Water-Related Optics Image Enhancement [77.0360085530701]
Underwater image enhancement (UIE) is a challenging task due to the complex degradation caused by underwater environments.
Previous methods often idealize the degradation process and neglect the impact of medium noise and object motion on the distribution of image features.
Our approach utilizes predicted images to dynamically update pseudo-labels, adding a dynamic gradient to optimize the network's gradient space.
arXiv Detail & Related papers (2023-12-12T06:07:21Z)
- All-in-one Multi-degradation Image Restoration Network via Hierarchical Degradation Representation [47.00239809958627]
We propose a novel All-in-one Multi-degradation Image Restoration Network (AMIRNet)
AMIRNet learns a degradation representation for unknown degraded images by progressively constructing a tree structure through clustering.
This tree-structured representation explicitly reflects the consistency and discrepancy of various distortions, providing a specific clue for image restoration.
arXiv Detail & Related papers (2023-08-06T04:51:41Z)
- LLDiffusion: Learning Degradation Representations in Diffusion Models for Low-Light Image Enhancement [118.83316133601319]
Current deep learning methods for low-light image enhancement (LLIE) typically rely on pixel-wise mapping learned from paired data.
We propose a degradation-aware learning scheme for LLIE using diffusion models, which effectively integrates degradation and image priors into the diffusion process.
arXiv Detail & Related papers (2023-07-27T07:22:51Z)
- SDWNet: A Straight Dilated Network with Wavelet Transformation for Image Deblurring [23.86692375792203]
Image deblurring is a computer vision problem that aims to recover a sharp image from a blurred image.
Our model uses dilated convolutions to obtain a large receptive field while preserving high spatial resolution.
We propose a novel module using the wavelet transform, which effectively helps the network to recover clear high-frequency texture details.
arXiv Detail & Related papers (2021-10-12T07:58:10Z)
- Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z)
- Learning Spatial and Spatio-Temporal Pixel Aggregations for Image and Video Denoising [104.59305271099967]
We present a pixel aggregation network and learn the pixel sampling and averaging strategies for image denoising.
We develop a pixel aggregation network for video denoising to sample pixels across the spatial-temporal space.
Our method is able to solve the misalignment issues caused by large motion in dynamic scenes.
arXiv Detail & Related papers (2021-01-26T13:00:46Z)
- Invertible Image Rescaling [118.2653765756915]
We develop an Invertible Rescaling Net (IRN) to produce visually-pleasing low-resolution images.
We capture the distribution of the lost information using a latent variable following a specified distribution in the downscaling process.
arXiv Detail & Related papers (2020-05-12T09:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.