ALUM: Adversarial Data Uncertainty Modeling from Latent Model
Uncertainty Compensation
- URL: http://arxiv.org/abs/2303.16866v1
- Date: Wed, 29 Mar 2023 17:24:12 GMT
- Title: ALUM: Adversarial Data Uncertainty Modeling from Latent Model
Uncertainty Compensation
- Authors: Wei Wei, Jiahuan Zhou, Hongze Li, Ying Wu
- Abstract summary: We propose a novel method called ALUM to handle the model uncertainty and data uncertainty in a unified scheme.
Our proposed ALUM is model-agnostic and can be easily implemented in any existing deep model with little extra overhead.
- Score: 25.67258563807856
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: It is critical that models pay attention not only to accuracy but also to
the certainty of their predictions. Uncertain predictions of deep models caused by
noisy data raise significant concerns in trustworthy AI areas. To explore and
handle uncertainty due to intrinsic data noise, we propose a novel method
called ALUM to simultaneously handle the model uncertainty and data uncertainty
in a unified scheme. Rather than solely modeling data uncertainty in the
ultimate layer of a deep model based on randomly selected training data, we
propose to explore mined adversarial triplets to facilitate data uncertainty
modeling and non-parametric uncertainty estimations to compensate for the
insufficiently trained latent model layers. Thus, the critical data uncertainty
and model uncertainty caused by noisy data can be readily quantified for
improving model robustness. Our proposed ALUM is model-agnostic and can be
easily implemented in any existing deep model with little extra computation
overhead. Extensive experiments on various noisy learning tasks validate the
superior robustness and generalization ability of our method. The code is
released at https://github.com/wwzjer/ALUM.
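To make the abstract's two ingredients concrete, here is a minimal, illustrative sketch (not the authors' released implementation, which lives at the repository above): a Gaussian head models per-sample data uncertainty via a heteroscedastic negative log-likelihood, combined with a margin triplet loss over triplets that are assumed to be pre-mined. The layer sizes, loss form, and margin are assumptions of this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UncertaintyHead(nn.Module):
    """Predicts a mean embedding and a per-dimension log-variance,
    so each sample is modeled as a Gaussian rather than a point."""
    def __init__(self, feat_dim=128, emb_dim=64):
        super().__init__()
        self.mu = nn.Linear(feat_dim, emb_dim)
        self.log_var = nn.Linear(feat_dim, emb_dim)

    def forward(self, feats):
        return self.mu(feats), self.log_var(feats)

def heteroscedastic_nll(mu, log_var, target):
    """Gaussian NLL: high predicted variance down-weights noisy samples."""
    return (0.5 * (log_var + (target - mu) ** 2 / log_var.exp())).mean()

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Margin triplet loss on (assumed pre-mined) hard triplets."""
    d_ap = F.pairwise_distance(anchor, positive)
    d_an = F.pairwise_distance(anchor, negative)
    return F.relu(d_ap - d_an + margin).mean()

# Toy usage with random features standing in for a backbone's output.
head = UncertaintyHead()
feats = torch.randn(30, 128)
mu, log_var = head(feats)
target = torch.randn(30, 64)                      # placeholder target
loss = heteroscedastic_nll(mu, log_var, target) \
     + triplet_loss(mu[:10], mu[10:20], mu[20:30])
loss.backward()
```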
Related papers
- Uncertainty Quantification of Surrogate Models using Conformal Prediction [7.445864392018774]
We formalise a conformal prediction framework that produces statistically valid prediction intervals in a model-agnostic manner at near-zero computational cost.
The paper provides statistically valid error bars for deterministic models, as well as guarantees for the error bars of probabilistic models.
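The summary leaves the mechanics implicit; a standard split conformal recipe, which fits the model-agnostic, near-zero-cost description, is sketched below (the paper's exact framework may differ, and all names here are illustrative):

```python
import numpy as np

def split_conformal_interval(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Wrap any fitted point predictor with (1 - alpha) coverage intervals.
    `predict` is any callable mapping inputs to point predictions."""
    residuals = np.abs(y_cal - predict(X_cal))           # nonconformity scores
    n = len(residuals)
    # Finite-sample-corrected quantile level gives the coverage guarantee.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(residuals, q_level)
    preds = predict(X_test)
    return preds - q, preds + q                          # lower, upper bounds

# Toy usage: a fixed linear "surrogate model" on synthetic data.
rng = np.random.default_rng(0)
X_cal, X_test = rng.normal(size=500), rng.normal(size=5)
y_cal = 2 * X_cal + rng.normal(scale=0.3, size=500)
lo, hi = split_conformal_interval(lambda x: 2 * x, X_cal, y_cal, X_test)
```

The corrected quantile is what yields the distribution-free coverage guarantee, regardless of how the underlying model was trained.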
arXiv Detail & Related papers (2024-08-19T10:46:19Z)
- Ensemble models outperform single model uncertainties and predictions for operator-learning of hypersonic flows [43.148818844265236]
Training scientific machine learning (SciML) models on limited high-fidelity data offers one approach to rapidly predict behaviors for situations that have not been seen before.
High-fidelity data is itself too limited in quantity to validate all outputs of the SciML model across the unexplored input space.
We extend a DeepONet using three different uncertainty mechanisms: mean-variance estimation, evidential uncertainty, and ensembling.
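A minimal sketch of how ensembling composes with mean-variance estimation (the paper instruments a DeepONet; plain MLPs stand in here, and all sizes are illustrative): each member predicts a mean and a variance, and the total predictive variance splits into the average aleatoric variance plus the variance of the member means.

```python
import torch
import torch.nn as nn

class MeanVarianceNet(nn.Module):
    """One ensemble member: predicts a mean and a log-variance per input."""
    def __init__(self, in_dim=1, hidden=32):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.Tanh(),
                                  nn.Linear(hidden, 2))

    def forward(self, x):
        mu, log_var = self.body(x).chunk(2, dim=-1)
        return mu, log_var

# Independently initialized members; each would be trained on the same data.
ensemble = [MeanVarianceNet() for _ in range(5)]
x = torch.randn(10, 1)
with torch.no_grad():
    outs = [m(x) for m in ensemble]
mus = torch.stack([mu for mu, _ in outs])         # (members, batch, 1)
alea = torch.stack([lv.exp() for _, lv in outs])  # per-member data noise
mean = mus.mean(0)           # ensemble prediction
aleatoric = alea.mean(0)     # average predicted data (aleatoric) uncertainty
epistemic = mus.var(0)       # member disagreement = model (epistemic) uncertainty
total_variance = aleatoric + epistemic
```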
arXiv Detail & Related papers (2023-10-31T18:07:29Z)
- Measuring and Modeling Uncertainty Degree for Monocular Depth Estimation [50.920911532133154]
The intrinsic ill-posedness and ordinal-sensitive nature of monocular depth estimation (MDE) models pose major challenges to the estimation of uncertainty degree.
We propose to model the uncertainty of MDE models from the perspective of the inherent probability distributions.
By simply introducing additional training regularization terms, our model, with a surprisingly simple formulation and without requiring extra modules or multiple inferences, can provide uncertainty estimates with state-of-the-art reliability.
arXiv Detail & Related papers (2023-07-19T12:11:15Z)
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
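One plausible reading of that recipe, hedged as an assumption since the paper's exact loss may differ: take the pre-trained model's per-sample loss as a difficulty score and scale a confidence-penalty (entropy) term by it, so the downstream model stays appropriately uncertain on hard samples.

```python
import torch
import torch.nn.functional as F

def difficulty_aware_loss(student_logits, teacher_logits, targets, lam=0.1):
    """Cross-entropy plus a per-sample entropy regularizer whose strength
    grows with sample difficulty, here estimated (an assumption of this
    sketch) as the pre-trained teacher's own cross-entropy on the sample."""
    ce = F.cross_entropy(student_logits, targets, reduction="none")
    with torch.no_grad():
        difficulty = F.cross_entropy(teacher_logits, targets, reduction="none")
        difficulty = difficulty / (difficulty.mean() + 1e-8)   # normalize
    probs = F.softmax(student_logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1)
    # Encourage higher predictive entropy (less confidence) on hard samples.
    return (ce - lam * difficulty * entropy).mean()

# Toy usage with random logits standing in for the two models.
s = torch.randn(8, 10, requires_grad=True)
t = torch.randn(8, 10)
y = torch.randint(0, 10, (8,))
difficulty_aware_loss(s, t, y).backward()
```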
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Reliable Multimodal Trajectory Prediction via Error Aligned Uncertainty Optimization [11.456242421204298]
In a well-calibrated model, uncertainty estimates should perfectly correlate with model error.
We propose a novel error aligned uncertainty optimization method and introduce a trainable loss function to guide the models to yield good quality uncertainty estimates aligning with the model error.
We demonstrate that our method improves average displacement error by 1.69% and 4.69%, and the uncertainty correlation with model error by 17.22% and 19.13% as quantified by Pearson correlation coefficient on two state-of-the-art baselines.
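The trainable loss itself is not spelled out in the summary; a generic alignment penalty of the kind described, pushing the predicted uncertainty toward the realized error so the two stay correlated, might look like this sketch (names and the exact penalty form are assumptions):

```python
import torch

def error_aligned_loss(pred, log_sigma, target, lam=1.0):
    """Regression loss plus a term pushing the predicted uncertainty sigma
    toward the realized absolute error, so the two remain correlated.
    (A generic alignment penalty; the paper's exact loss may differ.)"""
    error = (pred - target).abs()
    sigma = log_sigma.exp()
    align = (sigma - error.detach()) ** 2    # align sigma with observed error
    return (error ** 2).mean() + lam * align.mean()

# Toy usage: a trajectory head would output `pred` and `log_sigma` per waypoint.
pred = torch.randn(16, 2, requires_grad=True)       # e.g. (x, y) positions
log_sigma = torch.zeros(16, 2, requires_grad=True)
target = torch.randn(16, 2)
error_aligned_loss(pred, log_sigma, target).backward()
```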
arXiv Detail & Related papers (2022-12-09T12:33:26Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
The proposed UAL approach aims to provide reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
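A minimal sketch of estimating both quantities in one model: a variance head supplies the data uncertainty, while Monte Carlo dropout disagreement stands in for the model uncertainty (a standard proxy; the paper's actual mechanism may differ, and all sizes are illustrative):

```python
import torch
import torch.nn as nn

class UncertNet(nn.Module):
    """Head predicting an embedding and its data-uncertainty variance;
    dropout is kept active at test time for MC-dropout model uncertainty."""
    def __init__(self, in_dim=64, emb_dim=32):
        super().__init__()
        self.drop = nn.Dropout(p=0.2)
        self.mu = nn.Linear(in_dim, emb_dim)
        self.log_var = nn.Linear(in_dim, emb_dim)

    def forward(self, x):
        h = self.drop(x)
        return self.mu(h), self.log_var(h)

net = UncertNet().train()           # keep dropout stochastic (MC dropout)
x = torch.randn(4, 64)              # placeholder backbone features
with torch.no_grad():
    runs = [net(x) for _ in range(20)]
mus = torch.stack([mu for mu, _ in runs])
data_unc = torch.stack([lv.exp() for _, lv in runs]).mean(0)   # aleatoric
model_unc = mus.var(0)                                         # epistemic
```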
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Data Uncertainty without Prediction Models [0.8223798883838329]
We propose an uncertainty estimation method named Distance-weighted Class Impurity that works without explicit use of prediction models.
We verified that Distance-weighted Class Impurity works effectively regardless of the choice of prediction model.
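One plausible formulation, hedged since the paper's exact weighting may differ: weight the k nearest labeled neighbors' class votes by inverse distance and report the impurity of the resulting class mix, with no prediction model involved.

```python
import numpy as np

def distance_weighted_class_impurity(X_train, y_train, x, k=10, eps=1e-8):
    """Model-free data uncertainty: inverse-distance-weighted class mix
    among the k nearest labeled neighbors (one plausible formulation;
    the paper's exact weighting may differ)."""
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)                  # closer neighbors count more
    classes = np.unique(y_train)
    mass = np.array([w[y_train[idx] == c].sum() for c in classes])
    p = mass / mass.sum()
    return 1.0 - p.max()                      # 0 = pure neighborhood

# Toy usage: points near the class boundary get higher impurity.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
print(distance_weighted_class_impurity(X, y, np.zeros(2)))
```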
arXiv Detail & Related papers (2022-04-25T13:26:06Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid updating scheme, and match the accuracy of softmax models.
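The core idea, sketched with simplifications (the full method also uses a gradient penalty and exponential-moving-average centroid updates, omitted here): map inputs to features, keep one centroid per class, and read confidence off an RBF kernel to the nearest centroid in a single forward pass; inputs far from every centroid are rejected as out-of-distribution.

```python
import torch
import torch.nn as nn

class DUQLikeHead(nn.Module):
    """Deterministic feature extractor with per-class centroids; an RBF
    kernel to each centroid gives a confidence score in one forward pass."""
    def __init__(self, in_dim=32, feat_dim=16, n_classes=5, sigma=0.5):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU(),
                                 nn.Linear(feat_dim, feat_dim))
        self.centroids = nn.Parameter(torch.randn(n_classes, feat_dim))
        self.sigma = sigma

    def forward(self, x):
        f = self.net(x)                                    # (B, feat_dim)
        d2 = ((f[:, None, :] - self.centroids) ** 2).sum(-1)
        return torch.exp(-d2 / (2 * self.sigma ** 2))      # kernel per class

# Toy usage: classify by nearest centroid, reject low-confidence inputs.
model = DUQLikeHead()
scores = model(torch.randn(8, 32))
confidence, pred = scores.max(dim=1)
is_ood = confidence < 0.5         # far from every centroid => flag as OOD
```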
arXiv Detail & Related papers (2020-03-04T12:27:36Z)