Defocus Blur Detection via Depth Distillation
- URL: http://arxiv.org/abs/2007.08113v1
- Date: Thu, 16 Jul 2020 04:58:09 GMT
- Title: Defocus Blur Detection via Depth Distillation
- Authors: Xiaodong Cun and Chi-Man Pun
- Abstract summary: We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
- Score: 64.78779830554731
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defocus Blur Detection (DBD) aims to separate in-focus and out-of-focus
regions of a single image pixel-wise. This task has received much
attention since bokeh effects are widely used in digital cameras and smartphone
photography. However, identifying obscure homogeneous regions and borderline
transitions in partially defocused images is still challenging. To solve these
problems, we introduce depth information into DBD for the first time. When the
camera parameters are fixed, we argue that the accuracy of DBD is highly
related to scene depth. Hence, we consider the depth information as the
approximate soft label of DBD and propose a joint learning framework inspired
by knowledge distillation. In detail, we learn the defocus blur from ground
truth and the depth distilled from a well-trained depth estimation network at
the same time. Thus, the sharp region will provide a strong prior for depth
estimation while the blur detection also gains benefits from the distilled
depth. Besides, we propose a novel decoder for the fully convolutional
network (FCN) as our network structure. At each level of the decoder, we design
a Selective Reception Field Block (SRFB) for merging multi-scale features
efficiently and reuse the side outputs in a Supervision-guided Attention
Block (SAB). Unlike previous methods, the proposed decoder builds receptive
field pyramids and emphasizes salient regions simply and efficiently.
Experiments show that our approach outperforms 11 other state-of-the-art
methods on two popular datasets. Our method also runs at over 30 fps on a
single GPU, which is 2x faster than previous works. The code is available at:
https://github.com/vinthony/depth-distillation
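The joint learning described in the abstract (supervised blur labels plus depth distilled from a teacher network used as a soft label) can be sketched as a two-term objective. The loss choices below (binary cross-entropy for the blur map, L1 for the distillation term) and the weight `lam` are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def joint_distillation_loss(blur_pred, blur_gt, depth_pred, depth_teacher, lam=0.5):
    """Hypothetical sketch of the joint objective: a supervised blur-map
    loss plus a distillation term against a teacher depth network's output."""
    eps = 1e-7
    blur_pred = np.clip(blur_pred, eps, 1 - eps)
    # Supervised term: per-pixel binary cross-entropy on the blur map.
    l_blur = -np.mean(blur_gt * np.log(blur_pred)
                      + (1 - blur_gt) * np.log(1 - blur_pred))
    # Distillation term: L1 against the teacher's depth, used as a soft label.
    l_depth = np.mean(np.abs(depth_pred - depth_teacher))
    return l_blur + lam * l_depth
```

During training, both heads share the encoder, so gradients from the distilled-depth term shape the same features the blur head uses.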
Related papers
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Depth and DOF Cues Make A Better Defocus Blur Detector [27.33757097343283]
Defocus blur detection (DBD) separates in-focus and out-of-focus regions in an image.
Previous approaches often mistook homogeneous in-focus areas for defocus blur regions.
We propose an approach called D-DFFNet, which incorporates depth and DOF cues in an implicit manner.
arXiv Detail & Related papers (2023-06-20T07:03:37Z)
- Depth Estimation and Image Restoration by Deep Learning from Defocused Images [2.6599014990168834]
The Two-headed Depth Estimation and Deblurring Network (2HDED:NET) extends a conventional Depth from Defocus (DFD) network with a deblurring branch that shares the same encoder as the depth branch.
The proposed method has been successfully tested on two benchmarks, one for indoor and the other for outdoor scenes: NYU-v2 and Make3D.
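The shared-encoder, two-head layout summarized above can be illustrated with a toy numpy sketch. The layer sizes and random weights are placeholders standing in for trained layers, not the real 2HDED:NET architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy weights standing in for trained layers (hypothetical sizes).
W_enc = rng.standard_normal((16, 8)) * 0.1     # shared encoder
W_depth = rng.standard_normal((8, 1)) * 0.1    # depth head
W_deblur = rng.standard_normal((8, 16)) * 0.1  # deblurring head

def two_headed(x):
    """Shared encoder feeding two task heads, in the 2HDED:NET spirit."""
    z = relu(x @ W_enc)       # encoding computed once, reused by both heads
    depth = z @ W_depth       # head 1: scalar depth per sample
    sharp = x + z @ W_deblur  # head 2: residual "deblurred" signal
    return depth, sharp
```

The design point is that one forward pass through the encoder serves both tasks, so the depth and deblurring objectives regularize each other.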
arXiv Detail & Related papers (2023-02-21T15:28:42Z)
- Learning Depth from Focus in the Wild [16.27391171541217]
We present a convolutional neural network-based method for depth estimation from single focal stacks.
Our method allows depth maps to be inferred in an end-to-end manner even with image alignment.
For the generalization of the proposed network, we develop a simulator to realistically reproduce the features of commercial cameras.
arXiv Detail & Related papers (2022-07-20T05:23:29Z)
- Learning Dual-Pixel Alignment for Defocus Deblurring [73.80328094662976]
We propose a Dual-Pixel Alignment Network (DPANet) for defocus deblurring.
It is notably superior to state-of-the-art deblurring methods in reducing defocus blur while recovering visually plausible sharp structures and textures.
arXiv Detail & Related papers (2022-04-26T07:02:58Z)
- Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision [10.547816678110417]
The proposed method can be trained either supervised, with ground-truth depth, or unsupervised, with all-in-focus (AiF) images as supervisory signals.
We show in various experiments that our method outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-08-24T17:09:13Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
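A simplified version of this pipeline might look like the sketch below, where a Laplacian-variance sharpness measure stands in for that paper's trained 20-level CNN classifier and the iterative guided-filter refinement is omitted; patch size, level count, and the sharpness proxy are all illustrative assumptions:

```python
import numpy as np

def patch_blur_levels(gray, patch=8, levels=20):
    """Assign each image patch a discrete blurriness level.
    A Laplacian-variance measure stands in for a trained classifier."""
    h, w = gray.shape
    h, w = h - h % patch, w - w % patch
    # 5-point Laplacian as a cheap sharpness proxy (wrap-around borders).
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    # Per-patch variance of the Laplacian response.
    blocks = lap[:h, :w].reshape(h // patch, patch, w // patch, patch)
    var = blocks.var(axis=(1, 3))
    # Higher variance -> sharper; quantise into levels 0..levels-1
    # (0 = most blurred patch, levels-1 = sharpest patch).
    norm = (var - var.min()) / (var.max() - var.min() + 1e-12)
    return (norm * (levels - 1)).round().astype(int)
```

Upsampling the per-patch levels back to pixel resolution (and refining them, e.g. with a guided filter) would yield a dense defocus map in the spirit of the paper.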
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- BaMBNet: A Blur-aware Multi-branch Network for Defocus Deblurring [74.34263243089688]
Convolutional neural networks (CNNs) have been introduced to the defocus deblurring problem and have achieved significant progress.
This study designs a novel blur-aware multi-branch network (BaMBNet) in which regions with different blur amounts are treated differently.
Both quantitative and qualitative experiments demonstrate that our BaMBNet outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2021-05-31T07:55:30Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth-from-focus cues instead of different views.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.