Single Image Depth Estimation: An Overview
- URL: http://arxiv.org/abs/2104.06456v1
- Date: Tue, 13 Apr 2021 18:58:37 GMT
- Title: Single Image Depth Estimation: An Overview
- Authors: Alican Mertan, Damien Jade Duff and Gozde Unal
- Abstract summary: We focus on the single image depth estimation problem.
Due to its properties, the single image depth estimation problem is best tackled with machine learning methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We review solutions to the problem of depth estimation, arguably the most
important subtask in scene understanding. We focus on the single image depth
estimation problem. Due to its properties, the single image depth estimation
problem is currently best tackled with machine learning methods, most
successfully with convolutional neural networks. We provide an overview of the
field by examining key works. We examine non-deep learning approaches that
mostly predate deep learning and utilize hand-crafted features and assumptions,
and more recent works that mostly use deep learning techniques. The single
image depth estimation problem is tackled first in a supervised fashion with
absolute or relative depth information acquired from human or sensor-labeled
data, or in an unsupervised way using unlabelled stereo images or video
datasets. We also study multitask approaches that combine the depth estimation
problem with related tasks such as semantic segmentation and surface normal
estimation. Finally, we discuss investigations into the mechanisms, principles,
and failure cases of contemporary solutions.
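The supervised training the abstract describes typically minimizes a regression loss between predicted and ground-truth depth. A widely used example is the scale-invariant log loss of Eigen et al. (2014); the sketch below is a minimal NumPy illustration, not code from the survey, and `pred`/`gt` are hypothetical depth maps:

```python
import numpy as np

def scale_invariant_loss(pred, gt, lam=0.5):
    """Scale-invariant log loss (Eigen et al., 2014) for supervised depth.

    pred, gt: arrays of positive predicted / ground-truth depths.
    lam: weight of the scale term (0 gives a plain log-space MSE,
    1 makes the loss invariant to a global scaling of pred).
    """
    d = np.log(pred) - np.log(gt)  # per-pixel log-depth error
    n = d.size
    return (d ** 2).sum() / n - lam * d.sum() ** 2 / n ** 2
```

With `lam=1`, multiplying every prediction by a constant leaves the loss unchanged, which matches the abstract's distinction between absolute and relative depth supervision.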
Related papers
- Learning Occlusion-Aware Coarse-to-Fine Depth Map for Self-supervised
Monocular Depth Estimation [11.929584800629673]
We propose a novel network to learn an Occlusion-aware Coarse-to-Fine Depth map for self-supervised monocular depth estimation.
The proposed OCFD-Net not only employs a discrete depth constraint to learn a coarse-level depth map, but also employs a continuous depth constraint to learn a scene depth residual.
arXiv Detail & Related papers (2022-03-21T12:43:42Z)
- Deep Image Deblurring: A Survey [165.32391279761006]
Deblurring is a classic problem in low-level computer vision, which aims to recover a sharp image from a blurred input image.
Recent advances in deep learning have led to significant progress in solving this problem.
arXiv Detail & Related papers (2022-01-26T01:31:30Z)
- Depth Refinement for Improved Stereo Reconstruction [13.941756438712382]
Current techniques for depth estimation from stereoscopic images still suffer from a built-in drawback.
A simple analysis reveals that the depth error grows quadratically with the object's distance.
We propose a simple but effective method that uses a refinement network for depth estimation.
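The quadratic error claim above follows directly from the stereo triangulation relation z = f·B/d: differentiating with respect to the disparity d gives |dz| = z²/(f·B)·|dd|. A small illustrative sketch (the focal length and baseline values are hypothetical, roughly KITTI-like, and not taken from the paper):

```python
def stereo_depth(disparity, f=700.0, baseline=0.54):
    """Depth from the stereo relation z = f * B / d
    (f in pixels, B in metres; values are illustrative only)."""
    return f * baseline / disparity

def depth_error(z, disp_err=1.0, f=700.0, baseline=0.54):
    """First-order depth error for a disparity error of disp_err pixels:
    |dz| = z**2 / (f * B) * |dd|, i.e. quadratic in the distance z."""
    return z ** 2 / (f * baseline) * disp_err
```

Doubling the object's distance quadruples the depth error for the same one-pixel disparity error, which is the built-in drawback the refinement network targets.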
arXiv Detail & Related papers (2021-12-15T12:21:08Z)
- Probabilistic and Geometric Depth: Detecting Objects in Perspective [78.00922683083776]
3D object detection is an important capability needed in various practical applications such as driver assistance systems.
Monocular 3D detection, as an economical solution compared to conventional settings relying on binocular vision or LiDAR, has drawn increasing attention recently but still yields unsatisfactory results.
This paper first presents a systematic study of this problem and observes that the current monocular 3D detection problem can be reduced to an instance depth estimation problem.
arXiv Detail & Related papers (2021-07-29T16:30:33Z)
- Geometry Uncertainty Projection Network for Monocular 3D Object Detection [138.24798140338095]
We propose a Geometry Uncertainty Projection Network (GUP Net) to tackle the error amplification problem at both inference and training stages.
Specifically, a GUP module is proposed to obtain the geometry-guided uncertainty of the inferred depth.
At the training stage, we propose a Hierarchical Task Learning strategy to reduce the instability caused by error amplification.
arXiv Detail & Related papers (2021-07-29T06:59:07Z)
- Towards Better Generalization: Joint Depth-Pose Learning without PoseNet [36.414471128890284]
We tackle the essential problem of scale inconsistency for self-supervised joint depth-pose learning.
Most existing methods assume that a consistent scale of depth and pose can be learned across all input samples.
We propose a novel system that explicitly disentangles scale from the network estimation.
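One common way to remove the per-sample scale ambiguity this summary refers to is to normalize each predicted depth map by its mean before computing losses, so that only relative structure is compared across samples. A minimal sketch of that idea (not the paper's exact formulation):

```python
import numpy as np

def normalize_scale(depth, eps=1e-8):
    """Divide a predicted depth map by its mean so that predictions
    differing only by a global scale factor become comparable."""
    return depth / (depth.mean() + eps)
```

After normalization, two predictions that differ only by a global scale factor map to (nearly) the same output, which is one way to disentangle scale from the network's estimate.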
arXiv Detail & Related papers (2020-04-03T00:28:09Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
- Monocular Depth Estimation Based On Deep Learning: An Overview [16.2543991384566]
Inferring depth information from a single image (monocular depth estimation) is an ill-posed problem.
Deep learning has been widely studied recently and achieved promising performance in accuracy.
To improve the accuracy of depth estimation, various network architectures, loss functions, and training strategies have been proposed.
arXiv Detail & Related papers (2020-03-14T12:35:34Z)
- Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth-from-defocus cues instead of different views.
We present results that are on par with supervised methods on KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)
- Don't Forget The Past: Recurrent Depth Estimation from Monocular Video [92.84498980104424]
We put three different types of depth estimation into a common framework.
Our method produces a time series of depth maps.
It can be applied to monocular videos only or be combined with different types of sparse depth patterns.
arXiv Detail & Related papers (2020-01-08T16:50:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.