Robust Depth Completion with Uncertainty-Driven Loss Functions
- URL: http://arxiv.org/abs/2112.07895v1
- Date: Wed, 15 Dec 2021 05:22:34 GMT
- Title: Robust Depth Completion with Uncertainty-Driven Loss Functions
- Authors: Yufan Zhu, Weisheng Dong, Leida Li, Jinjian Wu, Xin Li and Guangming
Shi
- Abstract summary: We introduce uncertainty-driven loss functions to improve the robustness of depth completion and to handle its inherent uncertainty.
Our method has been tested on the KITTI Depth Completion Benchmark and achieves state-of-the-art robustness in terms of the MAE, IMAE, and IRMSE metrics.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recovering a dense depth image from sparse LiDAR scans is a challenging task.
Despite the popularity of color-guided methods for sparse-to-dense depth
completion, they treat all pixels equally during optimization, ignoring the
uneven distribution of valid measurements in the sparse depth map and the
accumulated outliers in the synthesized ground truth. In this work, we introduce
uncertainty-driven loss functions to improve the robustness of depth completion
and handle the uncertainty in depth completion. Specifically, we propose an
explicit uncertainty formulation for robust depth completion with Jeffrey's
prior. A parametric uncertainty-driven loss is introduced and translated into new
loss functions that are robust to noisy or missing data. Meanwhile, we propose
a multiscale joint prediction model that can simultaneously predict depth and
uncertainty maps. The estimated uncertainty map is also used to perform
adaptive prediction on the pixels with high uncertainty, leading to a residual
map for refining the completion results. Our method has been tested on the
KITTI Depth Completion Benchmark and achieves state-of-the-art robustness in
terms of the MAE, IMAE, and IRMSE metrics.
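As a rough illustration of the mechanism described above, the sketch below shows how a learned per-pixel uncertainty can reweight a depth loss. The paper's exact formulation is derived from Jeffrey's prior; this sketch instead uses the related and widely used Laplacian negative log-likelihood form, and all function and variable names are hypothetical:

```python
import math

def uncertainty_weighted_l1(pred_depth, gt_depth, log_sigma):
    """Per-pixel uncertainty-weighted L1 loss (Laplacian NLL form).

    loss_i = |gt_i - pred_i| * exp(-s_i) + s_i,  where s_i = log(sigma_i).

    Pixels the network flags as uncertain (large s_i) contribute less to
    the data term, while the +s_i penalty prevents the network from
    declaring every pixel uncertain. This trade-off is the general
    mechanism that uncertainty-driven completion losses build on.
    """
    assert len(pred_depth) == len(gt_depth) == len(log_sigma)
    total = 0.0
    for d_hat, d, s in zip(pred_depth, gt_depth, log_sigma):
        total += abs(d - d_hat) * math.exp(-s) + s
    return total / len(pred_depth)
```

With `log_sigma` fixed at zero this reduces to a plain mean L1 loss; raising `log_sigma` on a pixel with a large residual (for example, an outlier in the synthesized ground truth) lowers that pixel's contribution, which is how the network can learn to discount unreliable supervision.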
Related papers
- Uncertainty-guided Optimal Transport in Depth Supervised Sparse-View 3D Gaussian [49.21866794516328]
3D Gaussian splatting has demonstrated impressive performance in real-time novel view synthesis.
Previous approaches have incorporated depth supervision into the training of 3D Gaussians to mitigate overfitting.
We introduce a novel method to supervise the depth distribution of 3D Gaussians, utilizing depth priors with integrated uncertainty estimates.
arXiv Detail & Related papers (2024-05-30T03:18:30Z) - Unveiling the Depths: A Multi-Modal Fusion Framework for Challenging
Scenarios [103.72094710263656]
This paper presents a novel approach that identifies and integrates dominant cross-modality depth features with a learning-based framework.
We propose a novel confidence loss steering a confidence predictor network to yield a confidence map specifying latent potential depth areas.
With the resulting confidence map, we propose a multi-modal fusion network that fuses the final depth in an end-to-end manner.
arXiv Detail & Related papers (2024-02-19T04:39:16Z) - Single Image Depth Prediction Made Better: A Multivariate Gaussian Take [163.14849753700682]
We introduce an approach that performs continuous modeling of per-pixel depth.
Our method (named MG) ranks among the most accurate on the KITTI depth-prediction benchmark leaderboard.
arXiv Detail & Related papers (2023-03-31T16:01:03Z) - Gradient-based Uncertainty for Monocular Depth Estimation [5.7575052885308455]
In monocular depth estimation, disturbances in the image context, like moving objects or reflecting materials, can easily lead to erroneous predictions.
We propose a post hoc uncertainty estimation approach for an already trained and thus fixed depth estimation model.
Our approach achieves state-of-the-art uncertainty estimation results on the KITTI and NYU Depth V2 benchmarks without the need to retrain the neural network.
arXiv Detail & Related papers (2022-08-03T12:21:02Z) - Improved Point Transformation Methods For Self-Supervised Depth
Prediction [4.103701929881022]
Given stereo or egomotion image pairs, a popular and successful method for unsupervised learning of monocular depth estimation is to measure the quality of image reconstructions resulting from the learned depth predictions.
This paper introduces a z-buffering algorithm that correctly and efficiently handles points occluded after transformation to a novel viewpoint.
Because our algorithm is implemented with operators typical of machine learning libraries, it can be incorporated into any existing unsupervised depth learning framework with automatic support for differentiation.
arXiv Detail & Related papers (2021-02-18T03:42:40Z) - Deep Multi-view Depth Estimation with Predicted Uncertainty [11.012201499666503]
We employ a dense-optical-flow network to compute correspondences and then triangulate the point cloud to obtain an initial depth map.
To further increase the triangulation accuracy, we introduce a depth-refinement network (DRN) that optimizes the initial depth map based on the image's contextual cues.
arXiv Detail & Related papers (2020-11-19T00:22:09Z) - SAFENet: Self-Supervised Monocular Depth Estimation with Semantic-Aware
Feature Extraction [27.750031877854717]
We propose SAFENet that is designed to leverage semantic information to overcome the limitations of the photometric loss.
Our key idea is to exploit semantic-aware depth features that integrate the semantic and geometric knowledge.
Experiments on the KITTI dataset demonstrate that our method is competitive with, or even outperforms, state-of-the-art methods.
arXiv Detail & Related papers (2020-10-06T17:22:25Z) - Adaptive confidence thresholding for monocular depth estimation [83.06265443599521]
We propose a new approach to leverage pseudo ground truth depth maps of stereo images generated from self-supervised stereo matching methods.
The confidence map of the pseudo ground truth depth map is estimated to mitigate performance degeneration by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
arXiv Detail & Related papers (2020-09-27T13:26:16Z) - Uncertainty-Aware CNNs for Depth Completion: Uncertainty from Beginning
to End [18.49954482336334]
We focus on modeling the uncertainty of depth data in depth completion starting from the sparse noisy input all the way to the final prediction.
We propose a novel approach to identify disturbed measurements in the input by learning an input confidence estimator in a self-supervised manner based on normalized convolutional neural networks (NCNNs).
When we evaluate our approach on the KITTI dataset for depth completion, we outperform all the existing Bayesian Deep Learning approaches in terms of prediction accuracy, quality of the uncertainty measure, and the computational efficiency.
arXiv Detail & Related papers (2020-06-05T10:18:35Z) - Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.