Learning Dual-Pixel Alignment for Defocus Deblurring
- URL: http://arxiv.org/abs/2204.12105v1
- Date: Tue, 26 Apr 2022 07:02:58 GMT
- Title: Learning Dual-Pixel Alignment for Defocus Deblurring
- Authors: Yu Li, Yaling Yi, Dongwei Ren, Qince Li, Wangmeng Zuo
- Abstract summary: We propose a Dual-Pixel Alignment Network (DPANet) for defocus deblurring.
It is notably superior to state-of-the-art deblurring methods in reducing defocus blur while recovering visually plausible sharp structures and textures.
- Score: 73.80328094662976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is a challenging task to recover an all-in-focus image from a single
defocus-blurred image in real-world applications. On many modern cameras, dual-pixel
(DP) sensors create two image views, based on which stereo information can be
exploited to benefit defocus deblurring. Despite existing DP defocus deblurring
methods achieving impressive results, they directly take a naive concatenation of
DP views as input, neglecting the disparity between the left and right views
in regions outside the camera's depth of field (DoF). In this work, we propose a
Dual-Pixel Alignment Network (DPANet) for defocus deblurring. Generally, DPANet
is an encoder-decoder with skip-connections, where two branches with shared
parameters in the encoder are employed to extract and align deep features from
left and right views, and one decoder is adopted to fuse aligned features for
predicting the all-in-focus image. Because the DP views suffer from different
blur amounts, aligning the left and right views is non-trivial. To this end, we
propose a novel encoder alignment module (EAM) and a decoder alignment module
(DAM). In particular, a correlation layer is suggested in EAM to measure the
disparity between DP views, whose deep features can then be accordingly aligned
using deformable convolutions. The DAM further enhances the alignment between
skip-connected features from the encoder and deep features in the decoder. By
introducing several EAMs and DAMs, the DP views in DPANet can be well aligned for
better prediction of the latent all-in-focus image. Experimental results on real-world
datasets show that our DPANet is notably superior to state-of-the-art
deblurring methods in reducing defocus blur while recovering visually plausible
sharp structures and textures.
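The EAM described above first measures the disparity between the DP views with a correlation layer, and only then aligns their deep features with deformable convolutions. As an illustration only (not the authors' implementation), the correlation step can be sketched as a horizontal cost volume in NumPy: each channel-wise dot product between the left feature map and a shifted right feature map scores one candidate disparity, and the best-scoring shift indicates how far the views are misaligned.

```python
import numpy as np

def correlation_layer(feat_l, feat_r, max_disp=4):
    """Correlate left/right dual-pixel feature maps over horizontal shifts.

    feat_l, feat_r: (C, H, W) feature maps from the two DP views.
    Returns a (2*max_disp+1, H, W) cost volume; the argmax over the
    first axis identifies the best-matching per-pixel disparity.
    """
    C, H, W = feat_l.shape
    cost = np.zeros((2 * max_disp + 1, H, W), dtype=feat_l.dtype)
    for i, d in enumerate(range(-max_disp, max_disp + 1)):
        # Shift the right view horizontally by d pixels (wraps at borders).
        shifted = np.roll(feat_r, d, axis=2)
        # Channel-normalized dot product at every spatial location.
        cost[i] = (feat_l * shifted).sum(axis=0) / C
    return cost

# Usage: a synthetic right view shifted by 2 pixels should score highest
# at disparity d = 2.
rng = np.random.default_rng(0)
feat_l = rng.standard_normal((8, 16, 16))
feat_r = np.roll(feat_l, -2, axis=2)
cost = correlation_layer(feat_l, feat_r, max_disp=4)
disparity = cost.mean(axis=(1, 2)).argmax() - 4  # recovers 2
```

In DPANet the resulting disparity cues parameterize the offsets of deformable convolutions, so alignment is learned end to end rather than computed by an explicit argmax as in this sketch.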
Related papers
- Passive Snapshot Coded Aperture Dual-Pixel RGB-D Imaging [25.851398356458425]
Single-shot 3D sensing is useful in many application areas such as microscopy, medical imaging, surgical navigation, and autonomous driving.
We propose CADS (Coded Aperture Dual-Pixel Sensing), in which we use a coded aperture in the imaging lens along with a DP sensor.
Our resulting CADS imaging system demonstrates improvement of >1.5dB PSNR in all-in-focus (AIF) estimates and 5-6% in depth estimation quality over naive DP sensing.
arXiv Detail & Related papers (2024-02-28T06:45:47Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes a single dual-pixel image as input and simultaneously estimates its defocus map and removes the blur.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning [48.063176079878055]
We propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework.
Our experiments show this multi-task strategy achieves +1dB PSNR improvement over state-of-the-art defocus deblurring methods.
These high-quality DP views can be used for other DP-based applications, such as reflection removal.
arXiv Detail & Related papers (2021-08-11T14:45:15Z)
- EPMF: Efficient Perception-aware Multi-sensor Fusion for 3D Semantic Segmentation [62.210091681352914]
We study multi-sensor fusion for 3D semantic segmentation in applications such as autonomous driving and robotics.
In this work, we investigate a collaborative fusion scheme called perception-aware multi-sensor fusion (PMF)
We propose a two-stream network to extract features from the two modalities separately. The extracted features are fused by effective residual-based fusion modules.
arXiv Detail & Related papers (2021-06-21T10:47:26Z)
- Dual Pixel Exploration: Simultaneous Depth Estimation and Image Restoration [77.1056200937214]
We study the formation of the DP pair which links the blur and the depth information.
We propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image.
arXiv Detail & Related papers (2020-12-01T06:53:57Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences.