Uncertainty Estimation for Multi-view Data: The Power of Seeing the
Whole Picture
- URL: http://arxiv.org/abs/2210.02676v1
- Date: Thu, 6 Oct 2022 04:47:51 GMT
- Title: Uncertainty Estimation for Multi-view Data: The Power of Seeing the
Whole Picture
- Authors: Myong Chol Jung, He Zhao, Joanna Dipnall, Belinda Gabbe, Lan Du
- Abstract summary: Uncertainty estimation is essential to make neural networks trustworthy in real-world applications.
We propose a new multi-view classification framework for better uncertainty estimation and out-of-domain sample detection.
- Score: 5.868139834982011
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Uncertainty estimation is essential to make neural networks trustworthy in
real-world applications. Extensive research efforts have been made to quantify
and reduce predictive uncertainty. However, most existing works are designed
for unimodal data, whereas multi-view uncertainty estimation has not been
sufficiently investigated. Therefore, we propose a new multi-view
classification framework for better uncertainty estimation and out-of-domain
sample detection, where we associate each view with an uncertainty-aware
classifier and combine the predictions of all the views in a principled way.
The experimental results with real-world datasets demonstrate that our proposed
approach is an accurate, reliable, and well-calibrated classifier, which
predominantly outperforms the multi-view baselines tested in terms of expected
calibration error, robustness to noise, and accuracy for the in-domain sample
classification and the out-of-domain sample detection tasks.
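For illustration only (this is not the authors' exact model): if each view's uncertainty-aware classifier outputs non-negative Dirichlet evidence over the K classes, one principled combination is to pool the evidence across views, from which both class probabilities and an overall uncertainty follow. All names and values below are hypothetical.

    import numpy as np

    def combine_views(evidence_per_view):
        # Pool non-negative evidence from V view-specific classifiers (hypothetical inputs).
        # alpha are Dirichlet parameters; their sum S is the total evidence strength.
        K = evidence_per_view[0].shape[0]
        alpha = 1.0 + np.sum(evidence_per_view, axis=0)
        S = alpha.sum()
        probs = alpha / S              # expected class probabilities
        uncertainty = K / S            # vacuity: close to 1 when all views give little evidence
        return probs, uncertainty

    # toy example: two views, three classes
    view1 = np.array([9.0, 0.5, 0.5])  # confident about class 0
    view2 = np.array([4.0, 1.0, 1.0])  # weak agreement
    print(combine_views([view1, view2]))
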
Related papers
- Uncertainty Quantification via Hölder Divergence for Multi-View Representation Learning [18.419742575630217]
This paper introduces a novel algorithm based on H"older Divergence (HD) to enhance the reliability of multi-view learning.
Through the Dempster-Shafer theory, integration of uncertainty from different modalities, thereby generating a comprehensive result.
Mathematically, HD proves to better measure the distance'' between real data distribution and predictive distribution of the model.
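For reference, the generic Hölder pseudo-divergence for conjugate exponents 1/alpha + 1/beta = 1 is the log-ratio guaranteed non-negative by Hölder's inequality. The sketch below computes this generic form for discrete distributions; it illustrates the quantity, not necessarily the exact variant used in the paper.

    import numpy as np

    def holder_pseudo_divergence(p, q, alpha=2.0):
        # D_alpha(p:q) = log( ||p||_alpha * ||q||_beta / <p, q> ), with 1/alpha + 1/beta = 1.
        # Holder's inequality gives <p, q> <= ||p||_alpha * ||q||_beta, so D >= 0,
        # with equality iff p**alpha is proportional to q**beta.
        beta = alpha / (alpha - 1.0)                  # conjugate exponent
        num = (np.sum(p ** alpha) ** (1.0 / alpha)) * (np.sum(q ** beta) ** (1.0 / beta))
        return float(np.log(num / np.sum(p * q)))

    p = np.array([0.7, 0.2, 0.1])
    q = np.array([0.5, 0.3, 0.2])
    print(holder_pseudo_divergence(p, q))             # > 0; equals 0 when p == q and alpha == 2
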
arXiv Detail & Related papers (2024-10-29T04:29:44Z) - Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, yet largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z) - One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z) - Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent
Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z) - Exploring and Exploiting Uncertainty for Incomplete Multi-View
Classification [47.82610025809371]
We propose an Uncertainty-induced Incomplete Multi-View Data Classification (UIMC) model to classify incomplete multi-view data.
Specifically, we model each missing view with a distribution conditioned on the available views, thereby introducing uncertainty.
Our method establishes state-of-the-art results in terms of both performance and trustworthiness.
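A minimal sketch of the general idea (not the UIMC model itself): impute the missing view by sampling from a distribution conditioned on the available view, classify each imputed sample, and read the spread of the predictions as the uncertainty introduced by the missingness. The imputer and classifier below are hypothetical linear stubs.

    import numpy as np

    rng = np.random.default_rng(0)
    W_impute = rng.normal(size=(4, 3))      # hypothetical: maps available view (4-d) to missing-view mean (3-d)
    W_clf = rng.normal(size=(3, 5))         # hypothetical: classifies the imputed view into 5 classes

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    def classify_with_missing_view(x_avail, n_samples=50, sigma=0.5):
        # Sample the missing view from N(mean(x_avail), sigma^2 I), classify each sample,
        # and average; the per-class std reflects the uncertainty from imputation.
        mean = x_avail @ W_impute
        preds = []
        for _ in range(n_samples):
            x_missing = mean + sigma * rng.normal(size=mean.shape)
            preds.append(softmax(x_missing @ W_clf))
        preds = np.stack(preds)
        return preds.mean(axis=0), preds.std(axis=0)

    probs, spread = classify_with_missing_view(rng.normal(size=4))
    print(probs, spread)
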
arXiv Detail & Related papers (2023-04-11T11:57:48Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
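For reference, the Nadaraya-Watson estimator of the conditional label distribution is a kernel-weighted class frequency over training points; the sketch below shows this generic estimator (in input space, with a Gaussian kernel), not the full NUQ procedure.

    import numpy as np

    def nw_class_probs(x, X_train, y_train, n_classes, bandwidth=1.0):
        # Nadaraya-Watson estimate of p(y = c | x): kernel-weighted frequency of each class.
        sq_dists = np.sum((X_train - x) ** 2, axis=1)
        weights = np.exp(-0.5 * sq_dists / bandwidth ** 2)      # Gaussian kernel
        probs = np.array([weights[y_train == c].sum() for c in range(n_classes)])
        total = weights.sum()                                   # low total mass signals an unfamiliar input
        return probs / total if total > 0 else np.full(n_classes, 1.0 / n_classes)

    # toy example: two well-separated classes in 2-D
    X = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [2.1, 1.9]])
    y = np.array([0, 0, 1, 1])
    print(nw_class_probs(np.array([0.05, 0.05]), X, y, n_classes=2))   # close to [1, 0]
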
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Predictive Inference with Weak Supervision [3.1925030748447747]
We bridge the gap between partial supervision and validation by developing a conformal prediction framework.
We introduce a new notion of coverage and predictive validity, then develop several application scenarios.
We corroborate the hypothesis that the new coverage definition allows for tighter and more informative (but valid) confidence sets.
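For background, standard split conformal prediction (the fully supervised setting that the paper generalizes) builds prediction sets from a quantile of calibration nonconformity scores; the sketch below is that standard construction, not the weak-supervision extension, and the toy "model" probabilities are random placeholders.

    import numpy as np

    def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
        # Split conformal prediction with score s(x, y) = 1 - p_model(y | x).
        # Each test point receives the set of labels whose score falls below the
        # finite-sample-adjusted (1 - alpha) quantile of calibration scores.
        n = len(cal_labels)
        scores = 1.0 - cal_probs[np.arange(n), cal_labels]
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        qhat = np.quantile(scores, level)
        return [np.where(1.0 - p <= qhat)[0] for p in test_probs]

    # toy example with random probabilities over 3 classes
    rng = np.random.default_rng(0)
    cal_probs = rng.dirichlet(np.ones(3), size=100)
    cal_labels = rng.integers(0, 3, size=100)
    test_probs = rng.dirichlet(np.ones(3), size=5)
    print(conformal_sets(cal_probs, cal_labels, test_probs))
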
arXiv Detail & Related papers (2022-01-20T17:26:52Z) - Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
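As an illustration of evidence-level integration, the reduced Dempster-Shafer combination rule from subjective logic merges two per-view "opinions" (class belief masses plus an uncertainty mass); the sketch below follows that generic rule and is only an approximation of the paper's full procedure.

    import numpy as np

    def ds_combine(b1, u1, b2, u2):
        # Combine two opinions (belief masses b over K classes, uncertainty mass u,
        # with sum(b) + u = 1) using the reduced Dempster-Shafer rule.
        conflict = np.sum(np.outer(b1, b2)) - np.sum(b1 * b2)   # mass assigned to conflicting classes
        scale = 1.0 / (1.0 - conflict)
        b = scale * (b1 * b2 + b1 * u2 + b2 * u1)
        u = scale * (u1 * u2)
        return b, u

    # view 1 is confident about class 0, view 2 is mostly uncertain (illustrative values)
    b1, u1 = np.array([0.7, 0.1, 0.1]), 0.1
    b2, u2 = np.array([0.2, 0.1, 0.1]), 0.6
    b, u = ds_combine(b1, u1, b2, u2)
    print(b, u, b.sum() + u)   # combined beliefs, reduced uncertainty, sums to 1
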
arXiv Detail & Related papers (2021-02-03T13:30:26Z) - Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
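To make the criterion concrete: if a regressor predicts a mean and a covariance for each target, uncertainty realism can be checked by testing whether the squared Mahalanobis distances of the errors follow a chi-squared distribution with d degrees of freedom. The sketch below uses a Kolmogorov-Smirnov test for this; it illustrates the general idea, not the paper's exact test.

    import numpy as np
    from scipy import stats

    def uncertainty_realism_test(errors, covariances):
        # errors: (N, d) prediction errors; covariances: (N, d, d) predicted covariances.
        # If the predicted covariances are realistic, the squared Mahalanobis distances
        # should follow a chi-squared distribution with d degrees of freedom.
        d = errors.shape[1]
        m2 = np.array([e @ np.linalg.solve(S, e) for e, S in zip(errors, covariances)])
        return stats.kstest(m2, "chi2", args=(d,))   # small p-value => unrealistic uncertainties

    # toy example: well-calibrated Gaussian errors with unit covariance
    rng = np.random.default_rng(0)
    errors = rng.normal(size=(500, 2))
    covs = np.repeat(np.eye(2)[None], 500, axis=0)
    print(uncertainty_realism_test(errors, covs))    # large p-value expected here
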
arXiv Detail & Related papers (2021-01-08T11:56:12Z)