Depth and DOF Cues Make A Better Defocus Blur Detector
- URL: http://arxiv.org/abs/2306.11334v1
- Date: Tue, 20 Jun 2023 07:03:37 GMT
- Title: Depth and DOF Cues Make A Better Defocus Blur Detector
- Authors: Yuxin Jin, Ming Qian, Jincheng Xiong, Nan Xue, Gui-Song Xia
- Abstract summary: Defocus blur detection (DBD) separates in-focus and out-of-focus regions in an image.
Previous approaches often mistook homogeneous in-focus areas for defocus blur regions.
We propose an approach called D-DFFNet, which incorporates depth and DOF cues in an implicit manner.
- Score: 27.33757097343283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defocus blur detection (DBD) separates in-focus and out-of-focus regions in
an image. Previous approaches often mistook homogeneous in-focus areas for
defocus blur regions, likely because they did not consider the internal factors that
cause defocus blur. Inspired by the law of depth, depth of field (DOF), and
defocus, we propose an approach called D-DFFNet, which incorporates depth and
DOF cues in an implicit manner. This allows the model to understand the defocus
phenomenon in a more natural way. Our method proposes a depth feature
distillation strategy to obtain depth knowledge from a pre-trained monocular
depth estimation model and uses a DOF-edge loss to understand the relationship
between DOF and depth. Our approach outperforms state-of-the-art methods on
public benchmarks and a newly collected large benchmark dataset, EBD. Source
code and the EBD dataset are available at: https://github.com/yuxinjin-whu/D-DFFNet.
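As background (not stated in the abstract itself), the "law of depth, depth of field (DOF), and defocus" the authors invoke is usually formalized by the thin-lens circle-of-confusion model. A standard form of that relation, with $f$ the focal length, $A$ the aperture diameter, $S_f$ the in-focus distance, and $S$ the distance of a scene point, is:

```latex
% Diameter of the circle of confusion for a point at distance S
% when the lens is focused at distance S_f (thin-lens model):
c(S) = A \,\frac{|S - S_f|}{S}\,\frac{f}{S_f - f}
```

The DOF is then the range of distances $S$ for which $c(S)$ stays below the sensor's acceptable circle-of-confusion threshold; points outside that range appear defocus-blurred, which is the physical cue DBD methods try to exploit.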
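The abstract mentions a "depth feature distillation strategy" that transfers knowledge from a pre-trained monocular depth estimation model, but gives no formula. A minimal sketch of the generic feature-distillation objective (the exact loss and feature alignment used by D-DFFNet are not specified here, so this is only illustrative) could look like:

```python
import numpy as np

def feature_distillation_loss(student_feat, teacher_feat):
    """Mean-squared error between student and (frozen) teacher feature maps.

    NOTE: this is a generic distillation objective, not the paper's exact
    loss; D-DFFNet's alignment of student and teacher features is not
    detailed in the abstract.
    """
    assert student_feat.shape == teacher_feat.shape
    return float(np.mean((student_feat - teacher_feat) ** 2))
```

In practice the teacher (depth) network is frozen and this term is added to the detection loss, so the student's intermediate features are pulled toward depth-aware representations.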
Related papers
- Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z)
- Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159]
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and AIF image ground truth, and achieves superior predictions.
arXiv Detail & Related papers (2023-03-19T19:59:48Z)
- Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance [19.460887007137607]
We propose a learning-based depth from focus/defocus (DFF) which takes a focal stack as input for estimating scene depth.
We show that our method is robust against a synthetic-to-real domain gap, and exhibits state-of-the-art performance.
arXiv Detail & Related papers (2022-02-26T04:21:08Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Deep Depth from Focus with Differential Focus Volume [17.505649653615123]
We propose a convolutional neural network (CNN) to find the best-focused pixels in a focal stack and infer depth from the focus estimation.
The key innovation of the network is the novel deep differential focus volume (DFV).
arXiv Detail & Related papers (2021-12-03T04:49:51Z)
- Bridging Unsupervised and Supervised Depth from Focus via All-in-Focus Supervision [10.547816678110417]
The proposed method can be trained either in a supervised manner with ground-truth depth, or in an unsupervised manner with AiF images as supervisory signals.
We show in various experiments that our method outperforms the state-of-the-art methods both quantitatively and qualitatively.
arXiv Detail & Related papers (2021-08-24T17:09:13Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
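The patch-blurriness idea above can be illustrated with a classical stand-in (NOT the paper's CNN classifier): score each patch by the variance of its Laplacian response, since sharp patches carry more high-frequency content. The function name and parameters below are hypothetical, chosen only for this sketch:

```python
import numpy as np

def laplacian_variance_map(img, patch=8):
    """Per-patch sharpness proxy: variance of the 3x3 Laplacian response.

    A classical substitute for a learned blurriness classifier, used here
    only to illustrate scoring patches by blur; higher values mean sharper
    (more in-focus) patches.
    """
    # 3x3 Laplacian (up + down + left + right - 4*center) via slicing;
    # the one-pixel border is cropped instead of padded.
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    h, w = lap.shape
    h, w = h - h % patch, w - w % patch
    lap = lap[:h, :w]
    # variance within non-overlapping patch x patch blocks
    blocks = lap.reshape(h // patch, patch, w // patch, patch)
    return blocks.var(axis=(1, 3))

# toy check: a noisy (sharp-looking) left half vs. a constant (blur-like) right half
rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, :32] = rng.normal(size=(64, 32))  # high-frequency content on the left
sharpness = laplacian_variance_map(img)
```

A real pipeline would then refine such a raw per-patch map with an edge-aware filter (e.g. an iterative weighted guided filter, as the summary describes) to obtain a dense per-pixel defocus map.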
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Deep Multi-Scale Feature Learning for Defocus Blur Estimation [10.455763145066168]
This paper presents an edge-based defocus blur estimation method from a single defocused image.
We first distinguish edges that lie at depth discontinuities (called depth edges, for which the blur estimate is ambiguous) from edges that lie at approximately constant depth regions (called pattern edges, for which the blur estimate is well-defined).
We estimate the defocus blur amount at pattern edges only, and explore a scheme based on guided filters that prevents data propagation across the detected depth edges, yielding a dense blur map with well-defined object boundaries.
arXiv Detail & Related papers (2020-09-24T20:36:40Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth from focus cues instead of different views.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.