Learning Single Image Defocus Deblurring with Misaligned Training Pairs
- URL: http://arxiv.org/abs/2211.14502v2
- Date: Tue, 29 Nov 2022 09:19:19 GMT
- Title: Learning Single Image Defocus Deblurring with Misaligned Training Pairs
- Authors: Yu Li, Dongwei Ren, Xinya Shu, Wangmeng Zuo
- Abstract summary: We propose a joint deblurring and reblurring learning framework for single image defocus deblurring.
Our framework can be applied to boost defocus deblurring networks in terms of both quantitative metrics and visual quality.
- Score: 80.13320797431487
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By adopting a pixel-wise loss, existing methods for defocus deblurring
heavily rely on well-aligned training image pairs. Although training pairs of
ground-truth and blurry images are carefully collected, e.g., in the DPDD dataset,
misalignment between training pairs is inevitable, so existing methods may
suffer from deformation artifacts. In this paper, we propose a joint
deblurring and reblurring learning (JDRL) framework for single image defocus
deblurring with misaligned training pairs. Generally, JDRL consists of a
deblurring module and a spatially variant reblurring module, by which the
deblurred result can be adaptively supervised by the ground-truth image to recover
sharp textures while maintaining spatial consistency with the blurry image.
First, in the deblurring module, a bi-directional optical flow-based
deformation is introduced to tolerate spatial misalignment between the deblurred
and ground-truth images. Second, in the reblurring module, the deblurred result is
reblurred to be spatially aligned with the blurry image by predicting a set of
isotropic blur kernels and weighting maps. Moreover, we establish a new single
image defocus deblurring (SDD) dataset, further validating our JDRL and
benefiting future research. Our JDRL can be applied to boost defocus deblurring
networks in terms of both quantitative metrics and visual quality on the DPDD,
RealDOF and our SDD datasets.
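The reblurring step described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy version, not the authors' implementation: it convolves the deblurred image with a small bank of isotropic Gaussian kernels and blends the results with per-pixel weighting maps (the maps a network would predict), producing a spatially variant reblur. The sigma values, kernel size, and function names are all illustrative choices.

```python
import numpy as np

def gaussian_kernel(sigma, size=9):
    """Isotropic 2D Gaussian kernel, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def conv2d(img, kernel):
    """Naive 'same' 2D convolution with zero padding (single-channel H x W)."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def reblur(deblurred, weights, sigmas=(0.5, 1.0, 2.0)):
    """Blend K isotropically blurred copies using per-pixel weighting maps.

    deblurred: (H, W) image; weights: (K, H, W), normalized so the K maps
    sum to 1 at every pixel. Each pixel thus receives its own effective
    blur, even though every individual kernel is isotropic.
    """
    assert len(sigmas) == weights.shape[0]
    blurred = np.stack([conv2d(deblurred, gaussian_kernel(s)) for s in sigmas])
    return np.sum(weights * blurred, axis=0)
```

In a JDRL-style setup, the weighting maps would be predicted by the reblurring network, and `reblur(deblurred, weights)` would be supervised against the original blurry input to enforce spatial consistency without requiring aligned ground truth.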
Related papers
- Reblurring-Guided Single Image Defocus Deblurring: A Learning Framework with Misaligned Training Pairs [65.25002116216771]
We introduce a reblurring-guided learning framework for single image defocus deblurring.
Our reblurring module ensures spatial consistency between the deblurred image, the reblurred image and the input blurry image.
We have collected a new dataset specifically for single image defocus deblurring with typical misalignments.
arXiv Detail & Related papers (2024-09-26T12:37:50Z)
- Generating Aligned Pseudo-Supervision from Non-Aligned Data for Image Restoration in Under-Display Camera [84.41316720913785]
We revisit the classic stereo setup for training data collection -- capturing two images of the same scene with one UDC and one standard camera.
The key idea is to "copy" details from a high-quality reference image and "paste" them on the UDC image.
A novel Transformer-based framework generates well-aligned yet high-quality target data for the corresponding UDC input.
arXiv Detail & Related papers (2023-04-12T17:56:42Z)
- Boosting Few-shot Fine-grained Recognition with Background Suppression and Foreground Alignment [53.401889855278704]
Few-shot fine-grained recognition (FS-FGR) aims to recognize novel fine-grained categories with the help of limited available samples.
We propose a two-stage background suppression and foreground alignment framework, which is composed of a background activation suppression (BAS) module, a foreground object alignment (FOA) module, and a local to local (L2L) similarity metric.
Experiments conducted on multiple popular fine-grained benchmarks demonstrate that our method outperforms the existing state-of-the-art by a large margin.
arXiv Detail & Related papers (2022-10-04T07:54:40Z)
- SVBR-NET: A Non-Blind Spatially Varying Defocus Blur Removal Network [2.4975981795360847]
We propose a non-blind approach for image deblurring that can deal with spatially-varying kernels.
We introduce two encoder-decoder sub-networks that are fed with the blurry image and the estimated blur map.
The network is trained with synthetically generated blur kernels that are augmented to emulate blur maps produced by existing blur estimation methods.
arXiv Detail & Related papers (2022-06-26T17:21:12Z)
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- A Differentiable Two-stage Alignment Scheme for Burst Image Reconstruction with Large Shift [13.454711511086261]
Joint denoising and demosaicking (JDD) for burst images, namely JDD-B, has attracted much attention.
One key challenge of JDD-B lies in the robust alignment of image frames.
We propose a differentiable two-stage alignment scheme sequentially in patch and pixel level for effective JDD-B.
arXiv Detail & Related papers (2022-03-17T12:55:45Z)
- Light Field Reconstruction via Deep Adaptive Fusion of Hybrid Lenses [67.01164492518481]
This paper explores the problem of reconstructing high-resolution light field (LF) images from hybrid lenses.
We propose a novel end-to-end learning-based approach, which can comprehensively utilize the specific characteristics of the input.
Our framework could potentially decrease the cost of high-resolution LF data acquisition and benefit LF data storage and transmission.
arXiv Detail & Related papers (2021-02-14T06:44:47Z)
- Dual Pixel Exploration: Simultaneous Depth Estimation and Image Restoration [77.1056200937214]
We study the formation of the dual-pixel (DP) pair, which links the blur and the depth information.
We propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image.
arXiv Detail & Related papers (2020-12-01T06:53:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.