Learning the Distribution of Errors in Stereo Matching for Joint
Disparity and Uncertainty Estimation
- URL: http://arxiv.org/abs/2304.00152v1
- Date: Fri, 31 Mar 2023 21:58:19 GMT
- Title: Learning the Distribution of Errors in Stereo Matching for Joint
Disparity and Uncertainty Estimation
- Authors: Liyan Chen, Weihan Wang, Philippos Mordohai
- Abstract summary: We present a new loss function for joint disparity and uncertainty estimation in deep stereo matching.
We experimentally assess the effectiveness of our approach and observe significant improvements in both disparity and uncertainty prediction on large datasets.
- Score: 8.057006406834466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new loss function for joint disparity and uncertainty estimation
in deep stereo matching. Our work is motivated by the need for precise
uncertainty estimates and the observation that multi-task learning often leads
to improved performance in all tasks. We show that this can be achieved by
requiring the distribution of uncertainty to match the distribution of
disparity errors via a KL divergence term in the network's loss function. A
differentiable soft-histogramming technique is used to approximate the
distributions so that they can be used in the loss. We experimentally assess
the effectiveness of our approach and observe significant improvements in both
disparity and uncertainty prediction on large datasets.
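The core idea stated in the abstract, matching the distribution of predicted uncertainties to the distribution of disparity errors via a KL term over differentiable soft histograms, can be sketched roughly as follows. This is an illustrative NumPy sketch, not the authors' implementation: the Gaussian kernel, bin range, bandwidth, and KL direction are all assumptions, and in practice the histograms would be computed in an autodiff framework (e.g. PyTorch) so the loss is differentiable with respect to the network outputs.

```python
import numpy as np

def soft_histogram(x, bins, sigma):
    """Differentiable soft histogram: each value contributes a
    Gaussian-weighted 'soft count' to every bin center."""
    diff = x[:, None] - bins[None, :]                 # (N, B)
    weights = np.exp(-0.5 * (diff / sigma) ** 2)      # (N, B)
    hist = weights.sum(axis=0)                        # (B,)
    return hist / max(hist.sum(), 1e-12)              # normalize to a distribution

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete distributions; eps avoids log(0)."""
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

# Toy example: absolute disparity errors vs. predicted uncertainties
# (hypothetical data; a stereo network would produce these per pixel).
rng = np.random.default_rng(0)
errors = np.abs(rng.normal(0.0, 0.2, 1000))   # stand-in for |d_pred - d_gt|
uncert = np.abs(rng.normal(0.0, 0.2, 1000))   # stand-in for predicted uncertainty
bins = np.linspace(0.0, 1.0, 32)              # assumed bin centers in [0, 1]

p = soft_histogram(errors, bins, sigma=0.05)  # target: error distribution
q = soft_histogram(uncert, bins, sigma=0.05)  # predicted uncertainty distribution
loss = kl_divergence(p, q)                    # added to the disparity loss
```

Because the soft counts are smooth in `x`, gradients flow from the KL term back through the histogram into the uncertainty head, which is what makes this usable as a training loss.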
Related papers
- Learning Latent Graph Structures and their Uncertainty [63.95971478893842]
Graph Neural Networks (GNNs) use relational information as an inductive bias to enhance the model's accuracy.
As task-relevant relations might be unknown, graph structure learning approaches have been proposed to learn them while solving the downstream prediction task.
arXiv Detail & Related papers (2024-05-30T10:49:22Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Towards In-distribution Compatibility in Out-of-distribution Detection [30.49191281345763]
We propose a new out-of-distribution detection method by adapting both the top-design of deep models and the loss function.
Our method not only achieves state-of-the-art out-of-distribution detection performance but also improves in-distribution accuracy.
arXiv Detail & Related papers (2022-08-29T09:06:15Z)
- Decomposing Representations for Deterministic Uncertainty Estimation [34.11413246048065]
We show that current feature density based uncertainty estimators cannot perform well consistently across different OoD detection settings.
We propose to decompose the learned representations and integrate the uncertainties estimated on them separately.
arXiv Detail & Related papers (2021-12-01T22:12:01Z)
- Predicting with Confidence on Unseen Distributions [90.68414180153897]
We connect domain adaptation and predictive uncertainty literature to predict model accuracy on challenging unseen distributions.
We find that the difference of confidences (DoC) of a classifier's predictions successfully estimates the classifier's performance change over a variety of shifts.
We specifically investigate the distinction between synthetic and natural distribution shifts and observe that, despite its simplicity, DoC consistently outperforms other quantifications of distributional difference.
arXiv Detail & Related papers (2021-07-07T15:50:18Z)
- Logit-based Uncertainty Measure in Classification [18.224344440110862]
We introduce a new, reliable, and agnostic uncertainty measure for classification tasks called logit uncertainty.
We show that this new uncertainty measure yields a superior performance compared to existing uncertainty measures on different tasks.
arXiv Detail & Related papers (2021-07-06T19:07:16Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Learning Calibrated Uncertainties for Domain Shift: A Distributionally Robust Learning Approach [150.8920602230832]
We propose a framework for learning calibrated uncertainties under domain shifts.
In particular, the density ratio estimation reflects the closeness of a target (test) sample to the source (training) distribution.
We show that our proposed method generates calibrated uncertainties that benefit downstream tasks.
arXiv Detail & Related papers (2020-10-08T02:10:54Z)
- Accurate and Robust Feature Importance Estimation under Distribution Shifts [49.58991359544005]
PRoFILE is a novel feature importance estimation method.
We show significant improvements over state-of-the-art approaches, both in terms of fidelity and robustness.
arXiv Detail & Related papers (2020-09-30T05:29:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.