Camera-Independent Single Image Depth Estimation from Defocus Blur
- URL: http://arxiv.org/abs/2311.13045v1
- Date: Tue, 21 Nov 2023 23:14:42 GMT
- Title: Camera-Independent Single Image Depth Estimation from Defocus Blur
- Authors: Lahiru Wijayasingha, Homa Alemzadeh, John A. Stankovic
- Abstract summary: We show how several camera-related parameters affect defocus blur using optical physics equations.
We create a synthetic dataset that can be used to test the camera-independent performance of depth-from-defocus-blur models.
- Score: 6.516967182213821
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Monocular depth estimation is an important step in many downstream machine vision tasks. We address the problem of estimating monocular depth from defocus blur, which can yield more accurate results than semantics-based depth estimation methods. Existing monocular depth-from-defocus techniques are sensitive to the particular camera with which the images are taken. Using optical physics equations, we show how several camera-related parameters affect the defocus blur and make it camera-dependent. We propose a simple correction procedure that alleviates this problem without requiring any retraining of the original model. We also created a synthetic dataset that can be used to test the camera-independent performance of depth-from-defocus-blur models. We evaluate our model on both synthetic and real datasets (DDFF12 and NYU Depth V2) captured with different cameras and show that our methods are significantly more robust to changes of camera. Code:
https://github.com/sleekEagle/defocus_camind.git
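To make the camera dependence concrete: in the standard thin-lens model, the diameter of the defocus blur circle (circle of confusion) depends on the object depth together with the camera's focal length, f-number and focus distance. The Python sketch below illustrates only this textbook optics relationship under assumed example parameter values; it is not the paper's correction procedure, and the function name and example cameras are hypothetical.

```python
# Minimal illustration (not the paper's method) of the thin-lens
# circle-of-confusion model that links defocus blur to camera parameters.
# All distances are in metres; the example cameras below are hypothetical.

def coc_diameter(depth, focus_dist, focal_length, f_number):
    """Blur-circle (circle of confusion) diameter on the sensor, in metres.

    Standard thin-lens relation: c = (f^2 / N) * |d - s| / (d * (s - f)),
    where d is object depth, s the in-focus distance, f the focal length
    and N the f-number.
    """
    aperture = focal_length / f_number  # aperture diameter A = f / N
    return (aperture * focal_length * abs(depth - focus_dist)
            / (depth * (focus_dist - focal_length)))

if __name__ == "__main__":
    d = 2.0  # same scene depth seen by two different (hypothetical) cameras
    c_dslr = coc_diameter(d, focus_dist=1.5, focal_length=0.050, f_number=1.8)
    c_phone = coc_diameter(d, focus_dist=1.5, focal_length=0.026, f_number=2.4)
    # The blur differs by several times for the same depth, which is why a
    # depth-from-defocus model trained on one camera does not transfer
    # directly to another without a correction for these parameters.
    print(f"50 mm f/1.8 camera : blur diameter {c_dslr * 1e6:6.1f} um")
    print(f"26 mm f/2.4 camera : blur diameter {c_phone * 1e6:6.1f} um")
```

Inverting this relation to recover depth from an observed blur requires the same camera parameters, which is the dependence the paper's correction procedure is designed to compensate for.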
Related papers
- Blur aware metric depth estimation with multi-focus plenoptic cameras [8.508198765617196]
We present a new metric depth estimation algorithm using only raw images from a multi-focus plenoptic camera.
The proposed approach is especially suited for the multi-focus configuration where several micro-lenses with different focal lengths are used.
arXiv Detail & Related papers (2023-08-08T13:38:50Z)
- FS-Depth: Focal-and-Scale Depth Estimation from a Single Image in Unseen Indoor Scene [57.26600120397529]
It has long been an ill-posed problem to predict absolute depth maps from single images in real (unseen) indoor scenes.
We develop a focal-and-scale depth estimation model that learns absolute depth maps from single images in unseen indoor scenes.
arXiv Detail & Related papers (2023-07-27T04:49:36Z)
- Monocular 3D Object Detection with Depth from Motion [74.29588921594853]
We take advantage of camera ego-motion for accurate object depth estimation and detection.
Our framework, named Depth from Motion (DfM), then uses the established geometry to lift 2D image features to the 3D space and detects 3D objects thereon.
Our framework outperforms state-of-the-art methods by a large margin on the KITTI benchmark.
arXiv Detail & Related papers (2022-07-26T15:48:46Z)
- Learning Depth from Focus in the Wild [16.27391171541217]
We present a convolutional neural network-based depth estimation from single focal stacks.
Our method allows depth maps to be inferred in an end-to-end manner even with image alignment.
For the generalization of the proposed network, we develop a simulator to realistically reproduce the features of commercial cameras.
arXiv Detail & Related papers (2022-07-20T05:23:29Z)
- Deep Depth from Focal Stack with Defocus Model for Camera-Setting Invariance [19.460887007137607]
We propose a learning-based depth from focus/defocus (DFF) method which takes a focal stack as input for estimating scene depth.
We show that our method is robust against a synthetic-to-real domain gap, and exhibits state-of-the-art performance.
arXiv Detail & Related papers (2022-02-26T04:21:08Z)
- Defocus Map Estimation and Deblurring from a Single Dual-Pixel Image [54.10957300181677]
We present a method that takes as input a single dual-pixel image and simultaneously estimates the image's defocus map and removes the defocus blur.
Our approach improves upon prior works for both defocus map estimation and blur removal, despite being entirely unsupervised.
arXiv Detail & Related papers (2021-10-12T00:09:07Z)
- Single image deep defocus estimation and its applications [82.93345261434943]
We train a deep neural network to classify image patches into one of the 20 levels of blurriness.
The trained model is used to determine the patch blurriness which is then refined by applying an iterative weighted guided filter.
The result is a defocus map that carries the information of the degree of blurriness for each pixel.
arXiv Detail & Related papers (2021-07-30T06:18:16Z)
- Defocus Blur Detection via Depth Distillation [64.78779830554731]
We introduce depth information into DBD for the first time.
In detail, we learn the defocus blur from ground truth and the depth distilled from a well-trained depth estimation network.
Our approach outperforms 11 other state-of-the-art methods on two popular datasets.
arXiv Detail & Related papers (2020-07-16T04:58:09Z)
- Rendering Natural Camera Bokeh Effect with Deep Learning [95.86933125733673]
Bokeh is an important artistic effect used to highlight the main object of interest in a photo.
Mobile cameras are unable to produce shallow depth-of-field photos because of the very small aperture diameter of their optics.
We propose to learn a realistic shallow focus technique directly from the photos produced by DSLR cameras.
arXiv Detail & Related papers (2020-06-10T07:28:06Z)
- Defocus Deblurring Using Dual-Pixel Data [41.201653787083735]
Defocus blur arises in images that are captured with a shallow depth of field due to the use of a wide aperture.
We propose an effective defocus deblurring method that exploits data available on dual-pixel (DP) sensors found on most modern cameras.
arXiv Detail & Related papers (2020-05-01T10:38:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all information) and is not responsible for any consequences.