Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth
Estimation Using Displacement Fields
- URL: http://arxiv.org/abs/2002.12730v3
- Date: Sun, 10 May 2020 23:12:00 GMT
- Title: Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth
Estimation Using Displacement Fields
- Authors: Michael Ramamonjisoa, Yuming Du, Vincent Lepetit
- Abstract summary: Current methods for depth map prediction from monocular images tend to predict smooth, poorly localized contours.
We learn to predict, given a depth map predicted by some reconstruction method, a 2D displacement field able to re-sample pixels around the occlusion boundaries into sharper reconstructions.
Our method can be applied to the output of any depth estimation method, in an end-to-end trainable fashion.
- Score: 25.3479048674598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current methods for depth map prediction from monocular images tend to
predict smooth, poorly localized contours for the occlusion boundaries in the
input image. This is unfortunate as occlusion boundaries are important cues to
recognize objects, and as we show, may lead to a way to discover new objects
from scene reconstruction. To improve predicted depth maps, recent methods rely
on various forms of filtering or predict an additive residual depth map to
refine a first estimate. We instead learn to predict, given a depth map
predicted by some reconstruction method, a 2D displacement field able to
re-sample pixels around the occlusion boundaries into sharper reconstructions.
Our method can be applied to the output of any depth estimation method, in an
end-to-end trainable fashion. For evaluation, we manually annotated the
occlusion boundaries in all the images in the test split of the popular NYUv2-Depth
dataset. We show that our approach improves the localization of occlusion
boundaries for all state-of-the-art monocular depth estimation methods that we
could evaluate, without degrading the depth accuracy for the rest of the
images.
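The resampling at the core of the method is easy to sketch: each output pixel reads the initial depth at a location shifted by the predicted displacement, which lets depth values snap to the correct side of an occlusion boundary. Below is a minimal PyTorch sketch of that step, assuming pixel-unit displacements; the network that predicts the displacement field is omitted, and all names are illustrative rather than the authors' code.

```python
import torch
import torch.nn.functional as F

def resample_depth(depth, displacement):
    """Re-sample a depth map along a predicted 2D displacement field.

    depth:        (B, 1, H, W) initial depth prediction
    displacement: (B, 2, H, W) per-pixel (dx, dy) offsets, in pixels
    """
    B, _, H, W = depth.shape
    # Identity sampling grid in the normalized [-1, 1] coordinates
    # that grid_sample expects, ordered (x, y).
    ys, xs = torch.meshgrid(
        torch.linspace(-1.0, 1.0, H, device=depth.device),
        torch.linspace(-1.0, 1.0, W, device=depth.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(B, H, W, 2)
    # Convert pixel offsets to normalized offsets and displace the grid.
    dx = displacement[:, 0] * 2.0 / (W - 1)
    dy = displacement[:, 1] * 2.0 / (H - 1)
    grid = base + torch.stack((dx, dy), dim=-1)
    # Bilinear re-sampling pulls depth values across occlusion boundaries,
    # sharpening them while leaving smooth regions essentially unchanged.
    return F.grid_sample(depth, grid, mode="bilinear", align_corners=True)
```

Since bilinear grid sampling is differentiable, a displacement predictor built on this step can be trained end-to-end on top of any depth estimation network, which is what lets the method refine the output of arbitrary estimators.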
Related papers
- Temporal Lidar Depth Completion [0.08192907805418582]
We show how the state-of-the-art method PENet can be modified to benefit from recurrency.
Our algorithm achieves state-of-the-art results on the KITTI depth completion dataset.
arXiv Detail & Related papers (2024-06-17T08:25:31Z)
- AugUndo: Scaling Up Augmentations for Monocular Depth Completion and Estimation [51.143540967290114]
We propose a method that unlocks a wide range of previously-infeasible geometric augmentations for unsupervised depth computation and estimation.
This is achieved by reversing, or "undo"-ing, the geometric transformations applied to the coordinates of the output depth, warping the depth map back to the original reference frame.
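In sketch form, the undo step is a warp through the inverse of the augmentation transform, so losses can be computed in the original reference frame. A rough OpenCV sketch, assuming the augmentation is a 2x3 affine matrix (names are illustrative, not the paper's code):

```python
import cv2
import numpy as np

def undo_augmentation(depth_aug, affine_2x3, out_shape):
    """Warp depth predicted on an augmented image back to the original frame.

    depth_aug:  (H', W') float32 depth predicted on the augmented input
    depth_aug is assumed single-channel
    affine_2x3: the 2x3 affine matrix that was applied to the input image
    out_shape:  (H, W) of the original, un-augmented image
    """
    # Invert the forward augmentation and warp the depth map back, so that
    # unsupervised losses can be evaluated in the original coordinates.
    inv = cv2.invertAffineTransform(np.asarray(affine_2x3, dtype=np.float32))
    h, w = out_shape
    # Nearest-neighbor sampling avoids blending depths across object boundaries.
    return cv2.warpAffine(depth_aug, inv, (w, h), flags=cv2.INTER_NEAREST)
```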
arXiv Detail & Related papers (2023-10-15T05:15:45Z)
- Learning to Recover 3D Scene Shape from a Single Image [98.20106822614392]
We propose a two-stage framework that first predicts depth up to an unknown scale and shift from a single monocular image.
We then use 3D point cloud encoders to predict the missing depth shift and focal length that allow us to recover a realistic 3D scene shape.
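The recovery step amounts to resolving the affine ambiguity and back-projecting through a pinhole camera. A NumPy sketch with hypothetical parameter names (the point cloud encoders that predict the shift and focal length are omitted):

```python
import numpy as np

def unproject(depth, scale, shift, focal, cx, cy):
    """Lift affine-invariant depth to a 3D point cloud.

    The first stage predicts depth only up to `scale` and `shift`; the second
    stage estimates the shift and focal length from point clouds (absolute
    scale is not recoverable from one image, so the shape is correct up to
    a global scale).
    """
    z = scale * depth + shift                     # resolve the affine ambiguity
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - cx) * z / focal                      # pinhole back-projection
    y = (v - cy) * z / focal
    return np.stack((x, y, z), axis=-1)           # (H, W, 3) scene points
```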
arXiv Detail & Related papers (2020-12-17T02:35:13Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
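The constraint is compact: a network only needs to predict K coefficients, and the dense map is their weighted combination with the bases. A NumPy sketch, assuming for illustration that the bases come from PCA over training depth maps (the paper learns them; the names are not the authors' code):

```python
import numpy as np

def depth_from_bases(weights, bases):
    """Reconstruct a dense depth map from a low-dimensional code.

    weights: (K,) coefficients predicted for one image
    bases:   (K, H, W) full-resolution principal depth bases, e.g. the top-K
             PCA components of flattened training depth maps (illustrative)
    """
    return np.tensordot(weights, bases, axes=1)  # weighted sum -> (H, W)
```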
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
The confidence map of the pseudo ground truth depth map is estimated to mitigate the performance degradation caused by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
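A minimal form of this gating is a confidence-masked loss, sketched below in PyTorch; the fixed threshold stands in for the adaptive thresholding of the title, and the L1 penalty and all names are illustrative, not the paper's exact formulation:

```python
import torch

def confidence_masked_loss(pred, pseudo_gt, confidence, tau=0.9):
    """Supervise monocular depth with pseudo ground truth, gated by confidence.

    Pixels whose stereo-matching confidence falls below tau are dropped, so
    inaccurate pseudo depths do not degrade training.
    """
    mask = (confidence >= tau).float()
    loss = mask * (pred - pseudo_gt).abs()          # per-pixel L1, gated
    return loss.sum() / mask.sum().clamp(min=1.0)   # average over kept pixels
```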
arXiv Detail & Related papers (2020-09-27T13:26:16Z)
- Deep Multi-Scale Feature Learning for Defocus Blur Estimation [10.455763145066168]
This paper presents an edge-based defocus blur estimation method from a single defocused image.
We first distinguish edges that lie at depth discontinuities (called depth edges, for which the blur estimate is ambiguous) from edges that lie at approximately constant depth regions (called pattern edges, for which the blur estimate is well-defined).
We estimate the defocus blur amount at pattern edges only, and explore a scheme based on guided filters that prevents data propagation across the detected depth edges to obtain a dense blur map with well-defined object boundaries.
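One simplified way to realize such edge-aware propagation is to guided-filter the masked sparse estimates and renormalize, so values spread along the guide image but not across its strong (depth) edges. The sketch below assumes opencv-contrib (cv2.ximgproc) and is a stand-in for the paper's scheme, not its implementation:

```python
import cv2
import numpy as np

def densify_blur_map(sparse_blur, pattern_edge_mask, image, radius=8, eps=1e-3):
    """Propagate sparse blur estimates into a dense, edge-preserving map.

    sparse_blur:       (H, W) blur estimates, valid at pattern edges only
    pattern_edge_mask: (H, W) binary mask of pattern-edge pixels
    image:             guide image whose edges limit propagation
    """
    m = pattern_edge_mask.astype(np.float32)
    num = cv2.ximgproc.guidedFilter(image, sparse_blur.astype(np.float32) * m,
                                    radius, eps)
    den = cv2.ximgproc.guidedFilter(image, m, radius, eps)
    return num / np.maximum(den, 1e-6)   # normalized, image-guided diffusion
```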
arXiv Detail & Related papers (2020-09-24T20:36:40Z)
- View-consistent 4D Light Field Depth Estimation [37.04038603184669]
We propose a method to compute depth maps for every sub-aperture image in a light field in a view consistent way.
Our method precisely defines depth edges via EPIs, then diffuses these edges spatially within the central view.
arXiv Detail & Related papers (2020-09-09T01:47:34Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
- Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map by using the CNN structure itself as a prior.
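In sketch form, the extension is a per-image optimization: fit a randomly initialized CNN so its output matches the depth map where it is observed, and let the architecture regularize the rest. A hedged PyTorch sketch; `net` is any encoder-decoder, the view constraint and color guidance are omitted, and all names are illustrative:

```python
import torch

def dip_depth(net, noise, target_depth, valid_mask, steps=2000, lr=1e-3):
    """Restore a dense depth map with a deep image prior.

    net:          randomly initialized encoder-decoder CNN (the prior itself)
    noise:        fixed random input tensor, e.g. (1, C, H, W)
    target_depth: (1, 1, H, W) noisy, incomplete target depth
    valid_mask:   (1, 1, H, W), 1 where target depth is observed, else 0
    """
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    mask = valid_mask.bool()
    for _ in range(steps):
        opt.zero_grad()
        pred = net(noise)                                 # noise -> depth map
        loss = ((pred - target_depth)[mask] ** 2).mean()  # fit observed pixels only
        loss.backward()
        opt.step()
    return net(noise).detach()                            # restored dense depth
```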
arXiv Detail & Related papers (2020-01-21T21:56:01Z)