Single image deep defocus estimation and its applications
- URL: http://arxiv.org/abs/2107.14443v1
- Date: Fri, 30 Jul 2021 06:18:16 GMT
- Title: Single image deep defocus estimation and its applications
- Authors: Fernando J. Galetto and Guang Deng
- Abstract summary: We train a deep neural network to classify image patches into one of 20 levels of blurriness.
The trained model is used to determine the patch blurriness, which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
- Score: 82.93345261434943
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Depth information is useful in many image processing applications.
However, since taking a picture is a process of projecting a 3D scene onto a
2D imaging sensor, the depth information is embedded in the image, and
extracting it is a challenging task. A guiding principle is that the level of
blurriness due to defocus is related to the distance between the object and
the focal plane. Based on this principle and the widely used assumption that
Gaussian blur is a good model for defocus blur, we formulate the problem of
estimating the spatially varying defocus blurriness as a Gaussian blur
classification problem. We solve the problem by training a deep neural network
to classify image patches into one of 20 levels of blurriness. We have created
a dataset of more than 500,000 image patches of size 32x32, which are used to
train and test several well-known network models. We find that MobileNetV2 is
suitable for this application due to its low memory requirement and high
accuracy. The trained model is used to determine the patch blurriness, which
is then refined by applying an iterative weighted guided filter. The result is
a defocus map that carries the information of the degree of blurriness for
each pixel. We compare the proposed method with state-of-the-art techniques
and demonstrate its successful applications in adaptive image enhancement,
defocus magnification, and multi-focus image fusion.
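For context on the guiding principle above: under the standard thin-lens model (a textbook relation stated here for background, not taken from this paper), the diameter c of the defocus blur circle for an object at distance s, given an in-focus distance s_f, focal length f, and aperture diameter A, is

    c = A f |s - s_f| / ( s (s_f - f) )

so blurriness grows as the object moves away from the focal plane, which is what allows a per-pixel blur estimate to act as a proxy for relative depth.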
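The pipeline the abstract describes (patch-wise 20-level Gaussian blur classification followed by guided-filter refinement) can be sketched as follows. This is a minimal illustration, not the authors' released code: the sigma grid, the non-overlapping patch stride, the filter radius, and the use of torchvision's stock MobileNetV2 and OpenCV's plain guided filter (from opencv-contrib) in place of the paper's iterative weighted guided filter are all assumptions.

```python
# Minimal sketch of the defocus-map pipeline described in the abstract.
# Assumptions (not from the paper): the sigma values for the 20 blur
# levels, the patch stride, and refinement by repeated plain guided
# filtering rather than the paper's weighted variant.
import cv2                      # needs opencv-contrib-python for ximgproc
import numpy as np
import torch
import torchvision

NUM_LEVELS = 20                 # blur classes (per the abstract)
PATCH = 32                      # patch size (per the abstract)
SIGMAS = np.linspace(0.0, 5.0, NUM_LEVELS)   # assumed sigma grid

def blur_patch(sharp: np.ndarray, level: int) -> np.ndarray:
    """Synthesize a training patch at a given blur level (0 = sharp)."""
    if SIGMAS[level] == 0:
        return sharp
    return cv2.GaussianBlur(sharp, (0, 0), SIGMAS[level])

# 20-way classifier; MobileNetV2 is the architecture the paper selects.
model = torchvision.models.mobilenet_v2(num_classes=NUM_LEVELS)

def estimate_defocus_map(gray: np.ndarray) -> np.ndarray:
    """Classify each 32x32 patch and spread its blur level over its pixels."""
    model.eval()
    h, w = gray.shape
    raw = np.zeros((h, w), dtype=np.float32)
    with torch.no_grad():
        for y in range(0, h - PATCH + 1, PATCH):
            for x in range(0, w - PATCH + 1, PATCH):
                p = gray[y:y + PATCH, x:x + PATCH].astype(np.float32) / 255.0
                t = torch.from_numpy(p)[None, None].repeat(1, 3, 1, 1)
                level = int(model(t).argmax(dim=1))
                raw[y:y + PATCH, x:x + PATCH] = SIGMAS[level]
    return raw

def refine(raw: np.ndarray, gray: np.ndarray, iters: int = 4) -> np.ndarray:
    """Stand-in for the paper's iterative weighted guided filter:
    repeated plain guided filtering with the image itself as guide."""
    guide = gray.astype(np.float32) / 255.0
    out = raw
    for _ in range(iters):
        out = cv2.ximgproc.guidedFilter(guide, out, 8, 1e-3)
    return out
```

In a real implementation the classifier would first be trained on patches synthesized with blur_patch, and the patch grid would typically be overlapped (stride smaller than 32) for a smoother raw map; both details are elided here, as is the weighting scheme of the paper's guided filter.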
Related papers
- Depth Estimation Based on 3D Gaussian Splatting Siamese Defocus [14.354405484663285]
We propose a self-supervised framework based on 3D Gaussian splatting and Siamese networks for depth estimation in 3D geometry.
The proposed framework has been validated on both artificially synthesized and real blurred datasets.
arXiv Detail & Related papers (2024-09-18T21:36:37Z) - Towards Real-World Focus Stacking with Deep Learning [97.34754533628322]
We introduce a new dataset consisting of 94 high-resolution bursts of raw images with focus bracketing.
This dataset is used to train the first deep learning algorithm for focus stacking capable of handling bursts of sufficient length for real-world applications.
arXiv Detail & Related papers (2023-11-29T17:49:33Z) - Learning Depth from Focus in the Wild [16.27391171541217]
We present a convolutional neural network-based method for depth estimation from single focal stacks.
Our method allows depth maps to be inferred in an end-to-end manner even with image alignment.
For the generalization of the proposed network, we develop a simulator to realistically reproduce the features of commercial cameras.
arXiv Detail & Related papers (2022-07-20T05:23:29Z) - Precise Point Spread Function Estimation [6.076995573805468]
We develop a precise mathematical model of the camera's point spread function to describe the defocus process.
Our experiments on standard planes and actual objects show that the proposed algorithm can accurately describe the defocus process.
arXiv Detail & Related papers (2022-03-06T12:43:27Z) - VPFNet: Improving 3D Object Detection with Virtual Point based LiDAR and Stereo Data Fusion [62.24001258298076]
VPFNet is a new architecture that cleverly aligns and aggregates the point cloud and image data at the 'virtual' points.
Our VPFNet achieves 83.21% moderate 3D AP and 91.86% moderate BEV AP on the KITTI test set, ranking 1st since May 21st, 2021.
arXiv Detail & Related papers (2021-11-29T08:51:20Z) - Facial Depth and Normal Estimation using Single Dual-Pixel Camera [81.02680586859105]
We introduce a DP-oriented Depth/Normal network that reconstructs the 3D facial geometry.
The accompanying dataset contains the corresponding ground-truth 3D models, including depth maps and surface normals in metric scale.
The network achieves state-of-the-art performance over recent DP-based depth/normal estimation methods.
arXiv Detail & Related papers (2021-11-25T05:59:27Z) - Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z) - Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into defocus blur detection (DBD) for the first time.
Specifically, we learn defocus blur from the ground truth and from depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z) - Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.