Clean Images are Hard to Reblur: A New Clue for Deblurring
- URL: http://arxiv.org/abs/2104.12665v1
- Date: Mon, 26 Apr 2021 15:49:21 GMT
- Title: Clean Images are Hard to Reblur: A New Clue for Deblurring
- Authors: Seungjun Nah, Sanghyun Son, Jaerin Lee, Kyoung Mu Lee
- Abstract summary: We propose a novel low-level perceptual loss to make images sharper.
To better focus on image blurriness, we train a reblurring module that amplifies the unremoved motion blur.
The supervised reblurring loss at the training stage compares the amplified blur between the deblurred image and the reference sharp image.
The self-supervised reblurring loss at the inference stage inspects whether the deblurred image still contains noticeable blur to be amplified.
- Score: 56.28655168605079
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of dynamic scene deblurring is to remove the motion blur present in
a given image. Most learning-based approaches implement their solutions by
minimizing the L1 or L2 distance between the output and reference sharp image.
Recent attempts improve the perceptual quality of the deblurred image by using
features learned from visual recognition tasks. However, those features are
originally designed to capture the high-level contexts rather than the
low-level structures of the given image, such as blurriness. We propose a novel
low-level perceptual loss to make images sharper. To better focus on image
blurriness, we train a reblurring module that amplifies the unremoved motion
blur. Motivated by the observation that a well-deblurred clean image should
contain zero-magnitude motion blur that is hard to amplify, we design two types
of reblurring loss functions. The supervised reblurring loss at the training
stage compares the amplified blur between the deblurred image and the reference
sharp image. The self-supervised reblurring loss at the inference stage inspects
whether the deblurred image still contains noticeable blur to be amplified. Our
experimental results demonstrate that the proposed reblurring losses improve the
perceptual quality of
the deblurred images in terms of NIQE and LPIPS scores as well as visual
sharpness.
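The two losses described in the abstract can be sketched in code. The snippet below is a minimal illustration of the loss interfaces only, not the paper's implementation: the paper uses a learned reblurring module that amplifies only the unremoved residual blur and leaves clean images nearly unchanged, whereas the `reblur` stand-in here is a plain box blur; all function names are hypothetical.

```python
import numpy as np

def reblur(img, k=3):
    """Hypothetical stand-in for the paper's learned reblurring module.
    Here it is a simple k-by-k box blur; the real module is trained to
    amplify residual motion blur while keeping clean images unchanged."""
    pad = np.pad(img, ((k // 2, k // 2), (k // 2, k // 2)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def supervised_reblurring_loss(deblurred, sharp):
    # Training-time loss: compare the two images after both pass
    # through the reblurring module.
    return np.abs(reblur(deblurred) - reblur(sharp)).mean()

def self_reblurring_loss(deblurred):
    # Inference-time loss: a truly clean image should be (nearly) a
    # fixed point of the module, so reblurring it changes little.
    return np.abs(reblur(deblurred) - deblurred).mean()
```

With the paper's learned module, the self-reblurring loss would be near zero for a truly clean image, since such an image contains no residual blur to amplify; the box-blur stand-in only exhibits this fixed-point property on locally constant regions.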
Related papers
- Blur2Blur: Blur Conversion for Unsupervised Image Deblurring on Unknown Domains [19.573629029170128]
This paper presents an innovative framework designed to train an image deblurring algorithm tailored to a specific camera device.
It works by transforming a blurry input image, which is challenging to deblur, into another blurry image that is more amenable to deblurring.
arXiv Detail & Related papers (2024-03-24T15:58:48Z)
- ID-Blau: Image Deblurring by Implicit Diffusion-based reBLurring AUgmentation [45.582704677784825]
Implicit Diffusion-based reBLurring AUgmentation (ID-Blau) is proposed to generate diverse blurred images by simulating motion trajectories in a continuous space.
By sampling diverse blur conditions, ID-Blau can generate various blurred images unseen in the training set.
Results demonstrate that ID-Blau can produce realistic blurred images for training and thus significantly improve performance for state-of-the-art deblurring models.
arXiv Detail & Related papers (2023-12-18T07:47:43Z)
- Reference-based Motion Blur Removal: Learning to Utilize Sharpness in the Reference Image [29.52731707976695]
A typical setting is deblurring an image using a nearby sharp image in a video sequence.
This paper proposes a better method to use the information present in a reference image.
Our method can be integrated into pre-existing networks designed for single image deblurring.
arXiv Detail & Related papers (2023-07-06T09:24:55Z)
- Adaptive Window Pruning for Efficient Local Motion Deblurring [81.35217764881048]
Local motion blur commonly occurs in real-world photography due to the mixing between moving objects and stationary backgrounds during exposure.
Existing image deblurring methods predominantly focus on global deblurring.
This paper aims to adaptively and efficiently restore high-resolution locally blurred images.
arXiv Detail & Related papers (2023-06-25T15:24:00Z)
- Invertible Network for Unpaired Low-light Image Enhancement [78.33382003460903]
We propose to leverage an invertible network to enhance low-light images in the forward process and degrade normal-light images in the inverse process, using unpaired learning.
In addition to the adversarial loss, we design various loss functions to ensure the stability of training and preserve more image details.
We present a progressive self-guided enhancement process for low-light images and achieve favorable performance against state-of-the-art methods.
arXiv Detail & Related papers (2021-12-24T17:00:54Z)
- Contrastive Feature Loss for Image Prediction [55.373404869092866]
Training supervised image synthesis models requires a critic to compare two images: the ground truth and the result.
We introduce an information theory based approach to measuring similarity between two images.
We show that our formulation boosts the perceptual realism of output images when used as a drop-in replacement for the L1 loss.
arXiv Detail & Related papers (2021-11-12T20:39:52Z)
- Deblurring by Realistic Blurring [110.54173799114785]
We propose a new method which combines two GAN models, i.e., a learning-to-Blur GAN (BGAN) and a learning-to-DeBlur GAN (DBGAN).
The first model, BGAN, learns how to blur sharp images with unpaired sharp and blurry image sets, and then guides the second model, DBGAN, to learn how to correctly deblur such images.
As an additional contribution, this paper also introduces a Real-World Blurred Image (RWBI) dataset including diverse blurry images.
arXiv Detail & Related papers (2020-04-04T05:25:15Z)
- Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.