Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression
- URL: http://arxiv.org/abs/2103.13629v1
- Date: Thu, 25 Mar 2021 06:56:09 GMT
- Title: Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression
- Authors: Wanhua Li, Xiaoke Huang, Jiwen Lu, Jianjiang Feng, Jie Zhou
- Abstract summary: Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
- Score: 91.3373131262391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty is the only certainty there is. Modeling data uncertainty is
essential for regression, especially in unconstrained settings. Traditionally,
the direct regression formulation is considered and the uncertainty is modeled by
modifying the output space to a certain family of probabilistic distributions. On
the other hand, classification-based regression and ranking-based solutions are
more popular in practice, while direct regression methods suffer from limited
performance. How to model uncertainty within present-day regression technologies
remains an open issue. In this paper, we propose to learn probabilistic ordinal
embeddings, which represent each data point as a multivariate Gaussian
distribution rather than a deterministic point in the latent space. An ordinal
distribution constraint is proposed to exploit the ordinal nature of regression.
Our probabilistic ordinal embeddings can be integrated into popular regression
approaches and equip them with the ability to estimate uncertainty. Experimental
results show that our approach achieves competitive performance. Code is
available at https://github.com/Li-Wanhua/POEs.
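As a rough illustration of the central idea, the following is a minimal PyTorch-style sketch of a Gaussian embedding head: each input is mapped to a mean and a log-variance instead of a single point, and stochastic embeddings are drawn with the reparameterization trick. This is an assumption-laden sketch, not the official POEs implementation (see the repository linked above); the feature and embedding dimensions and the crude spread-based uncertainty proxy are illustrative choices, and the paper's ordinal distribution constraint is omitted.

```python
# Minimal sketch (not the official POEs code): map each input to a Gaussian
# embedding N(mu, diag(sigma^2)) and draw samples via the reparameterization
# trick, so a downstream regression head can be trained on stochastic
# embeddings and report predictive uncertainty. The ordinal distribution
# constraint from the paper is intentionally left out here.
import torch
import torch.nn as nn

class GaussianEmbedding(nn.Module):
    def __init__(self, feat_dim: int, embed_dim: int):
        super().__init__()
        self.mu_head = nn.Linear(feat_dim, embed_dim)      # mean of the embedding
        self.logvar_head = nn.Linear(feat_dim, embed_dim)  # log-variance (uncertainty)

    def forward(self, features: torch.Tensor, n_samples: int = 1):
        mu = self.mu_head(features)
        logvar = self.logvar_head(features)
        std = torch.exp(0.5 * logvar)
        # Reparameterization: z = mu + std * eps, eps ~ N(0, I)
        eps = torch.randn(n_samples, *mu.shape, device=features.device)
        z = mu.unsqueeze(0) + std.unsqueeze(0) * eps
        return z, mu, logvar

# Toy usage with made-up backbone features (dimensions are assumptions).
feats = torch.randn(8, 128)                    # batch of 8 feature vectors
embed = GaussianEmbedding(feat_dim=128, embed_dim=64)
z, mu, logvar = embed(feats, n_samples=5)      # 5 stochastic embeddings per input
predictive_spread = z.std(dim=0).mean(dim=-1)  # crude per-input uncertainty proxy
print(z.shape, predictive_spread.shape)        # torch.Size([5, 8, 64]) torch.Size([8])
```

In the full method, such stochastic embeddings would feed a classification- or ranking-based regression head, and the spread of the learned distribution could serve as the uncertainty estimate.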
Related papers
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression, which arises when the output of the regressor exceeds a specified discrepancy (or error).
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
- Calibrating Neural Simulation-Based Inference with Differentiable Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error, we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines, allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z)
- Distribution-free risk assessment of regression-based machine learning algorithms [6.507711025292814]
We focus on regression algorithms and the risk-assessment task of computing the probability of the true label lying inside an interval defined around the model's prediction.
We solve the risk-assessment problem using the conformal prediction approach, which provides prediction intervals that are guaranteed to contain the true label with a given probability (a generic split-conformal sketch appears after this related-papers list).
arXiv Detail & Related papers (2023-10-05T13:57:24Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure by testing a hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [46.05502630457458]
We propose a benchmark of 8 image-based regression datasets with different types of challenging distribution shifts.
We find that while methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets.
arXiv Detail & Related papers (2023-02-07T18:54:39Z)
- Uncertainty Modeling for Out-of-Distribution Generalization [56.957731893992495]
We argue that the feature statistics can be properly manipulated to improve the generalization ability of deep learning models.
Common methods often consider the feature statistics as deterministic values measured from the learned features.
We improve the network generalization ability by modeling the uncertainty of domain shifts with synthesized feature statistics during training.
arXiv Detail & Related papers (2022-02-08T16:09:12Z)
- Uncertainty estimation under model misspecification in neural network regression [3.2622301272834524]
We study the effect of the model choice on uncertainty estimation.
We highlight that under model misspecification, aleatoric uncertainty is not properly captured.
arXiv Detail & Related papers (2021-11-23T10:18:41Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Logistic Regression Through the Veil of Imprecise Data [0.0]
Logistic regression is an important statistical tool for assessing the probability of an outcome based upon some predictive variables.
Standard methods can only deal with precisely known data; however, many datasets have uncertainties that traditional methods either reduce to a single point or disregard completely.
arXiv Detail & Related papers (2021-06-01T13:51:46Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic estimates.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
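The "Distribution-free risk assessment" entry above appeals to conformal prediction for distribution-free coverage guarantees. As referenced there, here is a minimal NumPy sketch of the generic split-conformal recipe for regression intervals; the synthetic data, the stand-in predictor, and the miscoverage level alpha=0.1 are illustrative assumptions, and this is the textbook construction rather than that paper's exact procedure.

```python
# Minimal split-conformal sketch (generic recipe, not the cited paper's method):
# calibrate an interval half-width q from held-out absolute residuals so that
# intervals y_hat +/- q cover the true label with probability >= 1 - alpha.
import numpy as np

def split_conformal_halfwidth(y_calib, y_calib_pred, alpha=0.1):
    """Half-width q: the ceil((n+1)(1-alpha))-th smallest calibration residual."""
    residuals = np.sort(np.abs(y_calib - y_calib_pred))
    n = len(residuals)
    k = int(np.ceil((n + 1) * (1 - alpha)))
    return residuals[min(k, n) - 1]

# Toy example with a synthetic 1-D regression problem and a stand-in "model".
rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, size=2000)
y = np.sin(x) + 0.3 * rng.standard_normal(2000)
predict = np.sin                               # pretend this is a fitted regressor

q = split_conformal_halfwidth(y[:1000], predict(x[:1000]), alpha=0.1)  # calibration split

# Intervals on the remaining points and their empirical coverage (~0.9 expected).
covered = np.abs(y[1000:] - predict(x[1000:])) <= q
print(f"half-width={q:.3f}, empirical coverage={covered.mean():.3f}")
```

With exchangeable calibration and test data, intervals of this form contain the true label with probability at least 1 - alpha, which is the kind of guarantee the cited risk-assessment work builds on.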
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.