Maximum Likelihood Uncertainty Estimation: Robustness to Outliers
- URL: http://arxiv.org/abs/2202.03870v1
- Date: Thu, 3 Feb 2022 10:41:34 GMT
- Title: Maximum Likelihood Uncertainty Estimation: Robustness to Outliers
- Authors: Deebul S. Nair, Nico Hochgeschwender, Miguel A. Olivares-Mendez
- Abstract summary: Outliers or noisy labels in training data result in degraded performance as well as incorrect estimation of uncertainty.
We propose the use of a heavy-tailed distribution to improve the robustness to outliers.
- Score: 3.673994921516517
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: We benchmark the robustness of maximum likelihood based uncertainty
estimation methods to outliers in training data for regression tasks. Outliers
or noisy labels in training data result in degraded performance as well as
incorrect estimation of uncertainty. We propose the use of a heavy-tailed
distribution (Laplace distribution) to improve the robustness to outliers. This
property is evaluated using standard regression benchmarks and on a
high-dimensional regression task of monocular depth estimation, both containing
outliers. In particular, heavy-tailed distribution based maximum likelihood
provides better uncertainty estimates, better separation in uncertainty for
out-of-distribution data, as well as better detection of adversarial attacks in
the presence of outliers.
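For concreteness, here is a minimal sketch of the proposed loss, assuming a PyTorch-style network with two output heads (one for the location, one for the log-scale). The names `LaplaceRegressionHead`, `laplace_nll`, and `gaussian_nll` are illustrative, not the authors' code:

```python
import math
import torch
import torch.nn as nn

class LaplaceRegressionHead(nn.Module):
    """Illustrative two-headed regressor: predicts a Laplace location mu
    and a log-scale log(b) per input. The architecture is an assumption,
    not the paper's exact implementation."""
    def __init__(self, in_features: int):
        super().__init__()
        self.mu = nn.Linear(in_features, 1)      # location = point prediction
        self.log_b = nn.Linear(in_features, 1)   # log-scale, so b = exp(.) > 0

    def forward(self, x):
        return self.mu(x), self.log_b(x)

def laplace_nll(mu, log_b, y):
    """Laplace negative log-likelihood, averaged over the batch:
    NLL = log(2b) + |y - mu| / b."""
    return (log_b + math.log(2.0) + (y - mu).abs() / log_b.exp()).mean()

def gaussian_nll(mu, log_var, y):
    """Gaussian NLL (up to an additive constant), for comparison:
    NLL = 0.5 * (log sigma^2 + (y - mu)^2 / sigma^2)."""
    return 0.5 * (log_var + (y - mu).pow(2) / log_var.exp()).mean()
```

With the scale held fixed, the Laplace NLL reduces to a mean-absolute-error objective: the residual enters linearly rather than quadratically, which is what blunts the influence of outliers relative to the Gaussian likelihood, while the predicted scale b serves as the aleatoric uncertainty estimate.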
Related papers
- Heavy-tailed Contamination is Easier than Adversarial Contamination [8.607294463464523]
A body of work in the statistics and computer science communities dating back to Huber (1964) has led to statistically and computationally efficient outlier-robust estimators.
Two particular outlier models have received significant attention: the adversarial and heavy-tailed models.
arXiv Detail & Related papers (2024-11-22T19:00:33Z)
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning (a sketch of the standard split-conformal baseline appears after this list).
arXiv Detail & Related papers (2024-10-13T15:37:11Z)
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression: when the output of the regressor exceeds a specified discrepancy (or error), its prediction is deemed unreliable.
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
- How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [46.05502630457458]
We propose a benchmark of 8 image-based regression datasets with different types of challenging distribution shifts.
We find that while methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets.
arXiv Detail & Related papers (2023-02-07T18:54:39Z)
- Robust Variable Selection and Estimation Via Adaptive Elastic Net S-Estimators for Linear Regression [0.0]
We propose a new robust regularized estimator for simultaneous variable selection and coefficient estimation.
The adaptive PENSE estimator possesses the oracle property without prior knowledge of the scale of the residuals.
Numerical studies on simulated and real data sets highlight superior finite-sample performance in a vast range of settings.
arXiv Detail & Related papers (2021-07-07T16:04:08Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors [9.273998041238224]
We show that training variance networks with negative log likelihood (NLL) can lead to high entropy predictive distributions.
We propose to use the energy score, a non-local proper scoring rule, and find that when used for training it leads to better calibrated and lower entropy predictive distributions (a Monte Carlo sketch of the energy score appears after this list).
arXiv Detail & Related papers (2021-01-13T12:53:54Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for the estimation of a signal and of its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- $\gamma$-ABC: Outlier-Robust Approximate Bayesian Computation Based on a Robust Divergence Estimator [95.71091446753414]
We propose to use a nearest-neighbor-based $\gamma$-divergence estimator as a data discrepancy measure.
Our method achieves significantly higher robustness than existing discrepancy measures.
arXiv Detail & Related papers (2020-06-13T06:09:27Z)
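For context on the conformal-prediction entry above, here is a minimal sketch of standard split conformal prediction for regression. This is the textbook baseline that RPS hardens against poisoning, not the RPS method itself; the function name and NumPy usage are illustrative:

```python
import numpy as np

def split_conformal_interval(cal_pred, cal_y, test_pred, alpha=0.1):
    """Standard split conformal prediction for regression.

    cal_pred, cal_y: model predictions and labels on a held-out
    calibration set; test_pred: predictions on new inputs.
    Returns intervals with marginal coverage >= 1 - alpha, assuming
    calibration and test points are exchangeable.
    """
    scores = np.abs(cal_y - cal_pred)   # nonconformity scores
    n = len(scores)
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    return test_pred - q, test_pred + q
```

Coverage holds marginally at level 1 - alpha only under exchangeability; poisoning breaks exactly that assumption, which is the failure mode the RPS paper addresses.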
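And for the energy-score entry: the energy score of a predictive distribution F at an observation y is ES(F, y) = E|X - y| - 0.5 E|X - X'| with X, X' drawn i.i.d. from F. A minimal Monte Carlo estimator is sketched below (illustrative only; the paper applies the idea to regression in object detectors):

```python
import torch

def sample_energy_score(samples, y):
    """Monte Carlo estimate of the 1-D energy score, averaged over a batch.

    samples: (m, batch) draws from the predictive distribution (m >= 2)
    y:       (batch,)   observed targets
    ES(F, y) = E|X - y| - 0.5 * E|X - X'|. It is a proper scoring rule,
    so it can be minimized directly as a training loss.
    """
    m = samples.shape[0]
    term1 = (samples - y.unsqueeze(0)).abs().mean(dim=0)
    # Pairwise |X_i - X_j|; the diagonal is zero, so dividing the total
    # by m * (m - 1) gives the unbiased mean over i != j pairs.
    pairwise = (samples.unsqueeze(0) - samples.unsqueeze(1)).abs()
    term2 = pairwise.sum(dim=(0, 1)) / (m * (m - 1))
    return (term1 - 0.5 * term2).mean()
```

Because the energy score only needs samples from the predictive distribution, it also applies to distributions without a tractable density, which NLL-based training requires.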
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.