IronDepth: Iterative Refinement of Single-View Depth using Surface
Normal and its Uncertainty
- URL: http://arxiv.org/abs/2210.03676v1
- Date: Fri, 7 Oct 2022 16:34:20 GMT
- Title: IronDepth: Iterative Refinement of Single-View Depth using Surface
Normal and its Uncertainty
- Authors: Gwangbin Bae, Ignas Budvytis, Roberto Cipolla
- Abstract summary: We introduce a novel framework that uses surface normal and its uncertainty to recurrently refine the predicted depth-map.
The proposed method shows state-of-the-art performance on NYUv2 and iBims-1 - both in terms of depth and normal.
- Score: 24.4764181300196
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Single image surface normal estimation and depth estimation are closely
related problems as the former can be calculated from the latter. However, the
surface normals computed from the output of depth estimation methods are
significantly less accurate than the surface normals directly estimated by
networks. To reduce such discrepancy, we introduce a novel framework that uses
surface normal and its uncertainty to recurrently refine the predicted
depth-map. The depth of each pixel can be propagated to a query pixel, using
the predicted surface normal as guidance. We thus formulate depth refinement as
a classification of choosing the neighboring pixel to propagate from. Then, by
propagating to sub-pixel points, we upsample the refined, low-resolution
output. The proposed method shows state-of-the-art performance on NYUv2 and
iBims-1 - both in terms of depth and normal. Our refinement module can also be
attached to the existing depth estimation methods to improve their accuracy. We
also show that our framework, only trained for depth estimation, can also be
used for depth completion. The code is available at
https://github.com/baegwangbin/IronDepth.
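The propagation step described in the abstract can be made concrete with a little camera geometry: if a neighboring pixel j lies on the local plane defined by its predicted surface normal, intersecting the query pixel's back-projected ray with that plane yields a depth candidate. Below is a minimal sketch under a pinhole-camera assumption; the function name and setup are illustrative, not the authors' released code.

```python
import numpy as np

def propagate_depth(d_j, n_j, p_j, p_i, K_inv):
    """Depth candidate for query pixel i, propagated from neighbor j,
    assuming both lie on the plane through j's 3D point with normal n_j.

    d_j   : depth at neighbor pixel j (scalar)
    n_j   : unit surface normal at j, shape (3,)
    p_j   : homogeneous pixel coordinates of j, e.g. [u_j, v_j, 1]
    p_i   : homogeneous pixel coordinates of the query pixel i
    K_inv : inverse camera intrinsic matrix, shape (3, 3)
    """
    ray_j = K_inv @ p_j   # back-projected ray through j
    ray_i = K_inv @ p_i   # back-projected ray through i
    # Plane through j's 3D point: n_j . X = n_j . (d_j * ray_j).
    # Intersecting i's ray with that plane gives the propagated depth.
    return d_j * (n_j @ ray_j) / (n_j @ ray_i)
```

Refinement then reduces to a per-pixel classification over such candidates, picking which neighbor to propagate from; per the abstract, this is where the predicted normal and its uncertainty guide the choice.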
Related papers
- D2NT: A High-Performing Depth-to-Normal Translator [14.936434857460622]
This paper presents a superfast depth-to-normal translator (D2NT) that can directly translate depth images into surface normal maps without calculating 3D coordinates.
We then propose a discontinuity-aware gradient (DAG) filter and a surface normal refinement module that can easily be integrated into any depth-to-normal surface normal estimator (SNE).
Our proposed algorithm demonstrates the best accuracy among existing real-time SNEs and achieves the SoTA trade-off between efficiency and accuracy.
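For intuition, surface normals can be read off a depth map from its image-space gradients alone, without back-projecting a 3D point cloud. A minimal gradient-based sketch under a pinhole model follows; it omits the DAG filter and the refinement module and is not the released D2NT code.

```python
import numpy as np

def depth_to_normal(depth, fx, fy, cx, cy):
    """Translate a depth map into a surface normal map using only
    depth gradients, skipping explicit 3D coordinates.
    depth: (H, W) array; fx, fy, cx, cy: pinhole intrinsics."""
    H, W = depth.shape
    gv, gu = np.gradient(depth)  # d(depth)/d(row), d(depth)/d(col)
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Unnormalized normal from the tangent-vector cross product,
    # written directly in terms of the gradients. The overall sign
    # is a convention and may need flipping to face the camera.
    nx = -fx * gu
    ny = -fy * gv
    nz = gu * (u - cx) + gv * (v - cy) + depth
    n = np.stack([nx, ny, nz], axis=-1)
    return n / np.clip(np.linalg.norm(n, axis=-1, keepdims=True), 1e-8, None)
```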
arXiv Detail & Related papers (2023-04-24T12:08:03Z) - Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the top entries on the KITTI depth-prediction benchmark leaderboard.
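Continuous per-pixel depth modeling of this kind is typically trained with a negative log-likelihood. The sketch below shows the independent per-pixel Gaussian special case, not the paper's full multivariate formulation, which also models cross-pixel covariance.

```python
import torch

def gaussian_nll(mu, log_var, target):
    """Per-pixel Gaussian negative log-likelihood for depth.
    mu, log_var, target: (B, 1, H, W) tensors. Predicting the
    log-variance keeps the variance positive and the loss stable."""
    return 0.5 * (log_var + (target - mu) ** 2 / log_var.exp()).mean()
```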
arXiv Detail & Related papers (2023-03-31T16:01:03Z) - P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior [133.76192155312182]
We propose a method that learns to selectively leverage information from coplanar pixels to improve the predicted depth.
An extensive evaluation of our method shows that we set the new state of the art in supervised monocular depth estimation.
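The planarity prior can be unpacked as follows: if a query pixel shares a plane with a seed pixel, the plane's parameters determine the query's depth exactly. A hypothetical helper illustrating that relation (not the paper's network):

```python
import numpy as np

def plane_to_depth(n, c, u, v, fx, fy, cx, cy):
    """Depth at pixel (u, v) induced by the plane n . X + c = 0, where
    the 3D point is X = d * K^{-1} (u, v, 1) under a pinhole model."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    return -c / (n @ ray)
```

A seed pixel's predicted plane can thus supply depth for every pixel judged coplanar with it, which is the information the method learns to leverage selectively.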
arXiv Detail & Related papers (2022-04-05T10:03:52Z) - Depth Completion using Plane-Residual Representation [84.63079529738924]
We introduce a novel way of interpreting depth information with the closest depth plane label $p$ and a residual value $r$, which we call the Plane-Residual (PR) representation.
By interpreting depth information in the PR representation and using our corresponding depth completion network, we achieve improved depth completion performance with faster computation.
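A minimal sketch of encoding and decoding depth in a plane-plus-residual form, assuming uniformly spaced depth planes; the paper's actual plane placement and network are not reproduced here.

```python
import numpy as np

def to_plane_residual(depth, d_max, num_planes):
    """Encode depth as (plane label p, residual r) with num_planes
    uniformly spaced planes covering [0, d_max]."""
    spacing = d_max / num_planes
    p = np.clip((depth / spacing).astype(int), 0, num_planes - 1)
    r = depth - p * spacing  # residual within the plane's slab
    return p, r

def from_plane_residual(p, r, d_max, num_planes):
    """Decode (p, r) back to metric depth."""
    return p * (d_max / num_planes) + r
```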
arXiv Detail & Related papers (2021-04-15T10:17:53Z) - GeoNet++: Iterative Geometric Neural Network with Edge-Aware Refinement
for Joint Depth and Surface Normal Estimation [204.13451624763735]
We propose a geometric neural network with edge-aware refinement (GeoNet++) to jointly predict both depth and surface normal maps from a single image.
GeoNet++ effectively predicts depth and surface normals with strong 3D consistency and sharp boundaries.
In contrast to current metrics that focus on evaluating pixel-wise error/accuracy, the proposed 3D geometric metric (3DGM) measures whether the predicted depth can reconstruct high-quality 3D surface normals.
arXiv Detail & Related papers (2020-12-13T06:48:01Z) - Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
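To see why such a subspace constraint helps completion: once the bases are fixed, a dense map is determined by only a few weights, which can be fit to sparse measurements by least squares. A toy sketch with random placeholder bases (the paper learns them):

```python
import numpy as np

K_b, H, W = 16, 60, 80
bases = np.random.randn(K_b, H, W)      # placeholder; learned in the paper
mask = np.random.rand(H, W) < 0.05      # locations of sparse measurements
sparse = np.random.rand(H, W) * mask

# Fit the K_b basis weights to the sparse samples by least squares.
A = bases[:, mask].T                    # (num_sparse, K_b)
w, *_ = np.linalg.lstsq(A, sparse[mask], rcond=None)

# Dense completion: weighted sum of full-resolution depth bases.
dense = np.tensordot(w, bases, axes=1)  # (H, W)
```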
arXiv Detail & Related papers (2020-12-02T11:57:37Z) - Deep Depth Estimation from Visual-Inertial SLAM [11.814395824799988]
We study the case in which the sparse depth is computed from a visual-inertial simultaneous localization and mapping (VI-SLAM) system.
The resulting point cloud has low density, is noisy, and has a non-uniform spatial distribution.
We use the available gravity estimate from the VI-SLAM to warp the input image to the orientation prevailing in the training dataset.
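One way to realize such a warp is a pure-rotation homography H = K R K^{-1}, where R rotates the estimated gravity direction onto the canonical gravity of the training data. The sketch below assumes that formulation (axis-angle via OpenCV's Rodrigues) and is not the paper's code.

```python
import numpy as np
import cv2

def gravity_align_homography(K, g_est, g_canon=(0.0, 1.0, 0.0)):
    """Homography K @ R @ K^{-1} rotating the view so the estimated
    gravity g_est aligns with the canonical gravity g_canon."""
    a = np.asarray(g_est, float); a /= np.linalg.norm(a)
    b = np.asarray(g_canon, float); b /= np.linalg.norm(b)
    axis = np.cross(a, b)
    angle = np.arccos(np.clip(a @ b, -1.0, 1.0))
    rvec = axis / (np.linalg.norm(axis) + 1e-12) * angle
    R, _ = cv2.Rodrigues(rvec.reshape(3, 1))
    return K @ R @ np.linalg.inv(K)

# warped = cv2.warpPerspective(image, gravity_align_homography(K, g), (W, H))
```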
arXiv Detail & Related papers (2020-07-31T21:28:25Z) - Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z) - Predicting Sharp and Accurate Occlusion Boundaries in Monocular Depth
Estimation Using Displacement Fields [25.3479048674598]
Current methods for depth map prediction from monocular images tend to predict smooth, poorly localized contours.
We learn to predict, given a depth map predicted by some reconstruction method, a 2D displacement field able to re-sample pixels around the occlusion boundaries into sharper reconstructions.
Our method can be applied to the output of any depth estimation method, in an end-to-end trainable fashion.
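Conceptually, the resampling step offsets each output pixel by its predicted 2D displacement and reads the depth there, letting the network pull values across an occlusion boundary to sharpen it. A generic PyTorch sketch of that step (the displacement-prediction network itself is omitted):

```python
import torch
import torch.nn.functional as F

def resample_depth(depth, disp):
    """Re-sample depth (B, 1, H, W) with a displacement field
    disp (B, 2, H, W) given in pixels, via bilinear grid_sample."""
    B, _, H, W = depth.shape
    ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                            torch.arange(W, dtype=torch.float32),
                            indexing="ij")
    base = torch.stack([xs, ys]).unsqueeze(0).expand(B, -1, -1, -1)
    grid = base + disp                      # sample locations in pixels
    # Normalize to [-1, 1] as grid_sample expects (x first, then y).
    gx = 2.0 * grid[:, 0] / (W - 1) - 1.0
    gy = 2.0 * grid[:, 1] / (H - 1) - 1.0
    grid_n = torch.stack([gx, gy], dim=-1)  # (B, H, W, 2)
    return F.grid_sample(depth, grid_n, align_corners=True)
```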
arXiv Detail & Related papers (2020-02-28T14:15:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.