Adjust Your Focus: Defocus Deblurring From Dual-Pixel Images Using Explicit Multi-Scale Cross-Correlation
- URL: http://arxiv.org/abs/2502.11002v1
- Date: Sun, 16 Feb 2025 05:55:57 GMT
- Title: Adjust Your Focus: Defocus Deblurring From Dual-Pixel Images Using Explicit Multi-Scale Cross-Correlation
- Authors: Kunal Swami
- Abstract summary: Defocus blur is a common problem in photography.
Recent work exploited dual-pixel (DP) image information to solve the problem.
We propose an explicit cross-correlation between the two DP views to guide the network toward appropriate deblurring.
- Score: 1.661922907889139
- Abstract: Defocus blur is a common problem in photography. It arises when an image is captured with a wide aperture, resulting in a shallow depth of field. Sometimes it is desired, e.g., in a portrait effect. Otherwise, it is a problem both from an aesthetic point of view and for downstream computer vision tasks, such as segmentation and depth estimation. Deblurring an out-of-focus image to obtain an all-in-focus image is a highly challenging and often ill-posed problem. A recent work exploited dual-pixel (DP) image information, widely available in consumer DSLRs and high-end smartphones, to solve the problem of defocus deblurring. DP sensors produce two sub-aperture views containing defocus disparity cues. A given pixel's disparity is directly proportional to its distance from the focal plane. However, existing methods adopt a naïve approach of channel-wise concatenation of the two DP views without explicitly utilizing the disparity cues within the network. In this work, we propose to perform an explicit cross-correlation between the two DP views to guide the network toward appropriate deblurring in different image regions. We adopt multi-scale cross-correlation to handle blur and disparities at different scales. Quantitative and qualitative evaluation of our multi-scale cross-correlation network (MCCNet) reveals that it achieves better defocus deblurring than existing state-of-the-art methods despite having lower computational complexity.
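To make the central idea concrete, here is a minimal PyTorch sketch of an explicit cross-correlation between the two DP sub-aperture views, computed at multiple scales and stacked into guidance features. The disparity range, the scale set, and all function names are illustrative assumptions, not the authors' MCCNet implementation.

```python
# Hedged sketch of explicit multi-scale cross-correlation between the two
# dual-pixel (DP) views. max_disp and the scale set are assumptions.
import torch
import torch.nn.functional as F

def cross_correlation(left, right, max_disp=4):
    """Correlate left-view features with horizontally shifted right-view
    features; returns a (B, 2*max_disp+1, H, W) cost volume."""
    costs = []
    for d in range(-max_disp, max_disp + 1):
        shifted = torch.roll(right, shifts=d, dims=3)  # DP disparity is horizontal
        costs.append((left * shifted).mean(dim=1, keepdim=True))
    return torch.cat(costs, dim=1)

def multi_scale_cross_correlation(left, right, scales=(1, 2, 4), max_disp=4):
    """Build cost volumes at several scales so that both small and large
    defocus disparities are captured, then upsample to full resolution."""
    volumes = []
    for s in scales:
        l = F.avg_pool2d(left, s) if s > 1 else left
        r = F.avg_pool2d(right, s) if s > 1 else right
        vol = cross_correlation(l, r, max_disp)
        if s > 1:
            vol = F.interpolate(vol, size=left.shape[-2:], mode="bilinear",
                                align_corners=False)
        volumes.append(vol)
    return torch.cat(volumes, dim=1)  # concatenated guidance features
```

Correlating one view against horizontal shifts of the other reflects the abstract's observation that DP disparity grows with distance from the focal plane; in a full network these correlation features would steer how strongly each region is deblurred.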
Related papers
- Learning Dual-Pixel Alignment for Defocus Deblurring [73.80328094662976]
We propose a Dual-Pixel Alignment Network (DPANet) for defocus deblurring.
It is notably superior to state-of-the-art deblurring methods in reducing defocus blur while recovering visually plausible sharp structures and textures.
arXiv Detail & Related papers (2022-04-26T07:02:58Z)
- Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strengths and overcomes the shortcomings of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes a single dual-pixel image as input and simultaneously estimates the image's defocus map and an all-in-focus image.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- Improving Single-Image Defocus Deblurring: How Dual-Pixel Images Help Through Multi-Task Learning [48.063176079878055]
We propose a single-image deblurring network that incorporates the two sub-aperture views into a multi-task framework.
Our experiments show this multi-task strategy achieves a +1 dB PSNR improvement over state-of-the-art defocus deblurring methods.
These high-quality DP views can be used for other DP-based applications, such as reflection removal.
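A minimal sketch of how such a multi-task setup could be wired, assuming a shared encoder with a deblurring head and two auxiliary heads that synthesize the DP sub-aperture views; layer sizes and names are illustrative, not the paper's architecture.

```python
# Hypothetical multi-task sketch: one shared encoder, separate heads for
# the deblurred image and the two synthesized DP views.
import torch
import torch.nn as nn

class MultiTaskDeblur(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.deblur_head = nn.Conv2d(feat, 3, 3, padding=1)  # main task
        self.left_head = nn.Conv2d(feat, 3, 3, padding=1)    # auxiliary DP view
        self.right_head = nn.Conv2d(feat, 3, 3, padding=1)   # auxiliary DP view

    def forward(self, x):
        f = self.encoder(x)
        return self.deblur_head(f), self.left_head(f), self.right_head(f)

def multi_task_loss(pred, sharp_gt, left_gt, right_gt, aux_w=0.5):
    """DP-view synthesis acts as an auxiliary task that injects disparity
    cues into the shared features; aux_w is an assumed weighting."""
    sharp, left, right = pred
    l1 = nn.functional.l1_loss
    return l1(sharp, sharp_gt) + aux_w * (l1(left, left_gt) + l1(right, right_gt))
```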
arXiv Detail & Related papers (2021-08-11T14:45:15Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
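A hedged sketch of this pipeline, with a hypothetical patch classifier and iterative box filtering standing in for the paper's iterative weighted guided filter.

```python
# Sketch: classify patches into 20 blurriness levels (as in the summary),
# assemble a coarse defocus map, then refine it iteratively. The classifier,
# patch size, and box-filter refinement are stand-in assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_LEVELS = 20

patch_classifier = nn.Sequential(  # hypothetical patch-level CNN
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, NUM_LEVELS),
)

def defocus_map(image, patch=32, iters=3):
    """image: (1, 3, H, W) with H and W divisible by `patch`."""
    _, _, h, w = image.shape
    # Split into non-overlapping patches and classify each one.
    cols = F.unfold(image, patch, stride=patch)               # (1, 3*p*p, N)
    patches = cols.transpose(1, 2).reshape(-1, 3, patch, patch)
    levels = patch_classifier(patches).argmax(dim=1).float()  # (N,)
    # Coarse per-patch map, upsampled to pixel resolution.
    coarse = levels.reshape(1, 1, h // patch, w // patch) / (NUM_LEVELS - 1)
    dmap = F.interpolate(coarse, size=(h, w), mode="bilinear",
                         align_corners=False)
    # Iterative smoothing as a placeholder for the weighted guided filter.
    for _ in range(iters):
        dmap = F.avg_pool2d(dmap, 5, stride=1, padding=2)
    return dmap  # per-pixel degree of blurriness in [0, 1]
```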
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- BaMBNet: A Blur-aware Multi-branch Network for Defocus Deblurring [74.34263243089688]
Convolutional neural networks (CNNs) have been introduced to the defocus deblurring problem and have achieved significant progress.
This study designs a novel blur-aware multi-branch network (BaMBNet) in which regions with different blur amounts are treated differently.
Both quantitative and qualitative experiments demonstrate that our BaMBNet outperforms the state-of-the-art methods.
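An illustrative sketch of the blur-aware multi-branch idea, assuming per-pixel soft routing across branches with growing dilation rates; BaMBNet's actual branch design is not reproduced here.

```python
# Stand-in for the blur-aware multi-branch idea: a 1x1 mask head predicts
# per-pixel branch weights, and branch outputs are blended accordingly.
import torch
import torch.nn as nn

class BlurAwareBranches(nn.Module):
    def __init__(self, feat=32):
        super().__init__()
        # Each branch uses a different dilation, i.e., a different
        # receptive field for a different blur magnitude (assumption).
        self.branches = nn.ModuleList(
            nn.Conv2d(feat, feat, 3, padding=d, dilation=d) for d in (1, 2, 4)
        )
        self.mask = nn.Conv2d(feat, len(self.branches), 1)  # routing weights

    def forward(self, x):
        weights = torch.softmax(self.mask(x), dim=1)              # (B, K, H, W)
        outs = torch.stack([b(x) for b in self.branches], dim=1)  # (B, K, C, H, W)
        return (weights.unsqueeze(2) * outs).sum(dim=1)           # blur-aware blend
```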
arXiv Detail & Related papers (2021-05-31T07:55:30Z)
- Geometric Scene Refocusing [9.198471344145092]
We study the fine characteristics of images with a shallow depth-of-field in the context of focal stacks.
We identify in-focus pixels, dual-focus pixels, pixels that exhibit bokeh and spatially-varying blur kernels between focal slices.
We present a comprehensive algorithm for post-capture refocusing in a geometrically correct manner.
arXiv Detail & Related papers (2020-12-20T06:33:55Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time.
In detail, we learn defocus blur from the ground truth and from depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
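A minimal sketch of a distillation-style objective consistent with this description, assuming an auxiliary depth head on the detector and a frozen teacher depth network; names and the loss weighting are assumptions.

```python
# Hedged sketch: the detector trains on ground-truth blur masks while an
# auxiliary head regresses depth produced by a frozen depth network.
import torch
import torch.nn as nn

def depth_distillation_loss(blur_pred, blur_gt, depth_pred, depth_teacher, w=0.3):
    """blur_pred/blur_gt: (B, 1, H, W) blur probability maps in [0, 1];
    depth_teacher comes from a frozen, pre-trained depth estimator."""
    bce = nn.functional.binary_cross_entropy(blur_pred, blur_gt)
    distill = nn.functional.l1_loss(depth_pred, depth_teacher.detach())
    return bce + w * distill  # depth acts as distilled auxiliary supervision
```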
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)