Defocus Blur Detection via Salient Region Detection Prior
- URL: http://arxiv.org/abs/2011.09677v1
- Date: Thu, 19 Nov 2020 05:56:11 GMT
- Title: Defocus Blur Detection via Salient Region Detection Prior
- Authors: Ming Qian and Min Xia and Chunyi Sun and Zhiwei Wang and Liguo Weng
- Abstract summary: Defocus blur detection aims to separate the out-of-focus and depth-of-field areas in photos.
We propose a novel network for defocus blur detection.
- Score: 11.5253648614748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Defocus blur commonly occurs in photos taken with a Digital Single
Lens Reflex (DSLR) camera, highlighting the salient region and giving aesthetic
pleasure. Defocus blur detection aims to separate the out-of-focus and
depth-of-field areas in photos, which is an important task in computer vision.
Current works on defocus blur detection mainly focus on network design, loss
function optimization, and multi-stream strategies; meanwhile, these works pay
little attention to the shortage of training data.
In this work, to address this data-shortage problem, we rethink the
relationship between two tasks: defocus blur detection and salient region
detection. In an image with a bokeh effect, the salient region and the
depth-of-field area overlap in most cases. So we first train our network on the
salient region detection task, then transfer the pre-trained model to the
defocus blur detection task. Besides, we propose a novel network for defocus
blur detection. Experiments show that our transfer strategy works well on many
current models, and demonstrate the superiority of our network.
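The transfer strategy rests on one observation: in bokeh photos, the salient region and the in-focus (depth-of-field) area largely coincide, so saliency labels are a useful proxy for defocus labels. A minimal sketch of that overlap check, using a hypothetical intersection-over-union measure on toy binary masks (not the paper's actual network or data):

```python
# Hypothetical sketch: quantify the overlap (IoU) between a salient-region
# mask and a depth-of-field mask -- the observation that motivates
# pretraining on salient region detection before defocus blur detection.

def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks given as lists of 0/1 rows."""
    inter = sum(a & b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    union = sum(a | b for row_a, row_b in zip(mask_a, mask_b)
                for a, b in zip(row_a, row_b))
    return inter / union if union else 0.0

# Toy 4x4 masks: the salient object and the in-focus region mostly coincide.
salient  = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]]
in_focus = [[0, 1, 1, 0], [0, 1, 1, 1], [0, 1, 1, 0], [0, 0, 0, 0]]
print(iou(salient, in_focus))  # -> 0.8571428571428571 (6/7): high overlap
```

When this overlap is high, a model pre-trained to predict `salient` already produces masks close to `in_focus`, so fine-tuning on the smaller defocus dataset starts from a strong initialization.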
Related papers
- Learnable Blur Kernel for Single-Image Defocus Deblurring in the Wild [9.246199263116067]
We propose a novel defocus deblurring method that uses the guidance of the defocus map to implement image deblurring.
The proposed method consists of a learnable blur kernel to estimate the defocus map and, for the first time, a single-image defocus deblurring generative adversarial network (DefocusGAN).
arXiv Detail & Related papers (2022-11-25T10:47:19Z) - Single-image Defocus Deblurring by Integration of Defocus Map Prediction Tracing the Inverse Problem Computation [25.438654895178686]
We propose a simple but effective network with spatial modulation based on the defocus map.
Experimental results show that our method can achieve better quantitative and qualitative evaluation performance than the existing state-of-the-art methods.
arXiv Detail & Related papers (2022-07-07T02:15:33Z) - Learning to Deblur using Light Field Generated and Real Defocus Images [4.926805108788465]
Defocus deblurring is a challenging task due to the spatially varying nature of defocus blur.
We propose a novel deep defocus deblurring network that leverages the strength and overcomes the shortcoming of light fields.
arXiv Detail & Related papers (2022-04-01T11:35:51Z) - Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes as input a single dual-pixel image, and simultaneously estimates the image's defocus map.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z) - Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
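As a rough illustration of patch-level blurriness scoring, the sketch below buckets patches into discrete blur levels using a Laplacian-variance sharpness proxy. This is an assumption-laden stand-in, not the paper's trained CNN classifier: the 20-level quantization mirrors the description above, while the proxy measure and `max_var` threshold are hypothetical.

```python
# Hedged sketch (not the paper's deep model): score patch sharpness with a
# 4-neighbour Laplacian-variance proxy, then bucket it into one of 20
# discrete blur levels (0 = sharpest, 19 = blurriest).

def laplacian_variance(patch):
    """Variance of a 4-neighbour Laplacian over an HxW grayscale patch."""
    h, w = len(patch), len(patch[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (patch[y - 1][x] + patch[y + 1][x] + patch[y][x - 1]
                   + patch[y][x + 1] - 4 * patch[y][x])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def blur_level(patch, levels=20, max_var=1000.0):
    """Map sharpness to an integer level in [0, levels-1]; 0 = sharpest."""
    v = min(laplacian_variance(patch), max_var)
    return int((1 - v / max_var) * (levels - 1))

sharp = [[0, 255] * 4, [255, 0] * 4] * 4  # checkerboard: very high variance
flat = [[128] * 8 for _ in range(8)]      # uniform patch: zero variance
print(blur_level(sharp), blur_level(flat))  # -> 0 19
```

Sliding such a scorer over the image yields a coarse per-patch blur map, which could then be smoothed toward pixel resolution, loosely analogous to the iterative weighted guided filtering step described above.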
arXiv Detail & Related papers (2021-07-30T06:18:16Z) - Onfocus Detection: Identifying Individual-Camera Eye Contact from Unconstrained Images [81.64699115587167]
Onfocus detection aims at identifying whether the focus of the individual captured by a camera is on the camera or not.
We build a large-scale onfocus detection dataset, named OnFocus Detection In the Wild (OFDIW).
We propose a novel end-to-end deep model, i.e., the eye-context interaction inferring network (ECIIN) for onfocus detection.
arXiv Detail & Related papers (2021-03-29T03:29:09Z) - Rethinking of the Image Salient Object Detection: Object-level Semantic Saliency Re-ranking First, Pixel-wise Saliency Refinement Latter [62.26677215668959]
We propose a lightweight, weakly supervised deep network to coarsely locate semantically salient regions.
We then fuse multiple off-the-shelf deep models on these semantically salient regions as the pixel-wise saliency refinement.
Our method is simple yet effective, and is the first attempt to treat salient object detection mainly as an object-level semantic re-ranking problem.
arXiv Detail & Related papers (2020-08-10T07:12:43Z) - Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z) - Rapid Whole Slide Imaging via Learning-based Two-shot Virtual Autofocusing [57.90239401665367]
Whole slide imaging (WSI) is an emerging technology for digital pathology.
We propose the concept of virtual autofocusing, which does not rely on mechanical adjustment to conduct refocusing.
arXiv Detail & Related papers (2020-03-14T13:40:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.