Uncertainty Estimation for End-To-End Learned Dense Stereo Matching via
Probabilistic Deep Learning
- URL: http://arxiv.org/abs/2002.03663v1
- Date: Mon, 10 Feb 2020 11:27:52 GMT
- Title: Uncertainty Estimation for End-To-End Learned Dense Stereo Matching via
Probabilistic Deep Learning
- Authors: Max Mehltretter
- Abstract summary: A novel probabilistic neural network is presented for the task of joint depth and uncertainty estimation from epipolar rectified stereo image pairs.
The network learns a probability distribution from which parameters are sampled for every prediction.
The quality of the estimated depth and uncertainty information is assessed in an extensive evaluation on three different datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Motivated by the need to identify erroneous disparity assignments, various
approaches for uncertainty and confidence estimation of dense stereo matching
have been presented in recent years. As in many other fields, deep learning
based methods in particular have shown convincing results. However, most of
these methods only model the uncertainty contained in the data, while ignoring
the uncertainty of the employed dense stereo matching procedure. Modelling the
latter as well, however, is particularly beneficial if the domain of the
training data differs from that of the data to be processed. For this purpose,
the present work applies the idea of probabilistic deep learning to the task of
dense stereo matching for the first time. Based on the well-known and commonly
employed GC-Net architecture, a novel probabilistic neural network is presented
for the task of joint depth and uncertainty estimation from epipolar rectified
stereo image pairs. Instead of learning the network parameters directly, the
proposed probabilistic neural network learns a probability distribution from
which parameters are sampled for every prediction. The variations between
multiple such predictions on the same image pair allow the model uncertainty to
be approximated. The quality of the estimated depth and uncertainty information
is assessed in an extensive evaluation on three different datasets.
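The core idea of the abstract, learning a distribution over network parameters and approximating model uncertainty from the spread of multiple sampled predictions, can be illustrated with a minimal sketch. This is not the paper's GC-Net-based implementation; a toy variational layer with Gaussian weight posteriors (names such as `BayesianLinear` and `predict_with_uncertainty` are illustrative) stands in for the full stereo network.

```python
import numpy as np

rng = np.random.default_rng(0)

class BayesianLinear:
    """Toy variational layer: instead of point-estimate weights, it stores a
    Gaussian distribution (mean, log-std) and draws fresh weights per call."""
    def __init__(self, n_in, n_out):
        self.w_mu = rng.normal(0.0, 0.1, size=(n_in, n_out))
        self.w_log_sigma = np.full((n_in, n_out), -2.0)  # learned in practice

    def __call__(self, x):
        # Reparameterisation: w = mu + sigma * eps, with eps ~ N(0, I)
        eps = rng.standard_normal(self.w_mu.shape)
        w = self.w_mu + np.exp(self.w_log_sigma) * eps
        return x @ w

def predict_with_uncertainty(layer, x, n_samples=50):
    """Run several stochastic forward passes; the variation across them
    approximates the model (epistemic) uncertainty."""
    preds = np.stack([layer(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.var(axis=0)

layer = BayesianLinear(8, 1)
features = rng.standard_normal((4, 8))  # stand-in for per-pixel cost-volume features
depth, model_var = predict_with_uncertainty(layer, features)
print(depth.shape, model_var.shape)
```

Because each forward pass uses a different weight sample, the per-pixel variance grows where the learned parameter distribution is broad, which is the behaviour exploited for detecting out-of-domain inputs.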
Related papers
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Uncertainty Quantification in Deep Neural Networks through Statistical Inference on Latent Space [0.0]
We develop an algorithm that exploits the latent-space representation of data points fed into the network to assess the accuracy of their prediction.
We show on a synthetic dataset that commonly used methods are mostly overconfident.
In contrast, our method can detect such out-of-distribution data points as inaccurately predicted, thus aiding in the automatic detection of outliers.
arXiv Detail & Related papers (2023-05-18T09:52:06Z)
- How to Combine Variational Bayesian Networks in Federated Learning [0.0]
Federated learning enables multiple data centers to train a central model collaboratively without exposing any confidential data.
While deterministic models can achieve high prediction accuracy, their lack of calibration and inability to quantify uncertainty is problematic for safety-critical applications.
We study the effects of various aggregation schemes for variational Bayesian neural networks.
arXiv Detail & Related papers (2022-06-22T07:53:12Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble based methods and generative model based methods, and explain their pros and cons when used in fully-, semi- and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- PDC-Net+: Enhanced Probabilistic Dense Correspondence Network [161.76275845530964]
We present PDC-Net+, an Enhanced Probabilistic Dense Correspondence Network capable of estimating accurate dense correspondences.
We develop an architecture and an enhanced training strategy tailored for robust and generalizable uncertainty prediction.
Our approach obtains state-of-the-art results on multiple challenging geometric matching and optical flow datasets.
arXiv Detail & Related papers (2021-09-28T17:56:41Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Uncertainty-Aware Deep Classifiers using Generative Models [7.486679152591502]
Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions.
Some recent approaches quantify uncertainty directly by training the model to output high uncertainty for the data samples close to class boundaries or from the outside of the training distribution.
We develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision boundary and out-of-distribution regions.
arXiv Detail & Related papers (2020-06-07T15:38:35Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, matching the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.