Logistic Regression Through the Veil of Imprecise Data
- URL: http://arxiv.org/abs/2106.00492v1
- Date: Tue, 1 Jun 2021 13:51:46 GMT
- Title: Logistic Regression Through the Veil of Imprecise Data
- Authors: Nicholas Gray and Scott Ferson
- Abstract summary: Logistic regression is an important statistical tool for assessing the probability of an outcome based upon some predictive variables.
Standard methods can only deal with precisely known data; however, many datasets contain uncertainties that traditional methods either reduce to a single point or disregard entirely.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Logistic regression is an important statistical tool for assessing the
probability of an outcome based upon some predictive variables. Standard
methods can only deal with precisely known data; however, many datasets contain
uncertainties that traditional methods either reduce to a single point or
disregard entirely. In this paper we show that it is possible to include
these uncertainties by considering an imprecise logistic regression model using
the set of possible models that can be obtained from values within the
intervals. This has the advantage of clearly expressing the epistemic
uncertainty that traditional methods discard.
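As a rough illustration of the set-of-models idea, the sketch below fits one ordinary logistic regression per corner of an interval-valued dataset and reports the envelope of predicted probabilities at a query point. The toy data, the endpoint-only enumeration, and the scikit-learn estimator are illustrative assumptions, not the authors' implementation; corner enumeration is a heuristic and does not, in general, bound the full set of models.

```python
# Sketch: an interval-valued dataset induces a *set* of logistic regression
# models, one per choice of values inside the intervals. Here we fit a model
# at every corner (endpoint combination) of the dataset and report the
# envelope of predicted probabilities -- a heuristic, not an exact bound.
from itertools import product

import numpy as np
from sklearn.linear_model import LogisticRegression

# One interval-valued predictor: each row is [lower, upper].
X_intervals = np.array([[0.9, 1.1], [1.8, 2.4], [2.9, 3.3], [4.0, 4.6]])
y = np.array([0, 0, 1, 1])

models = []
for corners in product([0, 1], repeat=len(X_intervals)):
    X = np.array([X_intervals[i, c] for i, c in enumerate(corners)])
    models.append(LogisticRegression().fit(X.reshape(-1, 1), y))

# Envelope of P(y=1 | x) over the model set at a query point.
probs = [m.predict_proba([[2.5]])[0, 1] for m in models]
print(f"P(y=1 | x=2.5) lies in [{min(probs):.3f}, {max(probs):.3f}]")
```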
Related papers
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce the notion of unreliability in regression, arising when the output of the regressor exceeds a specified discrepancy (or error) threshold.
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
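As a toy illustration of abstaining when the local conditional variance is high, the sketch below uses a k-NN variance estimate and a fixed threshold; the paper instead tests a hypothesis on the variance and accounts for the variance predictor's own uncertainty, which this sketch does not.

```python
# Sketch of selective regression: abstain when the estimated conditional
# variance at a query point is too high. The k-NN estimate and fixed
# threshold are illustrative stand-ins, not the paper's test statistic.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(500, 1))
# Heteroscedastic noise: noisier on the right half of the domain.
y = np.sin(X[:, 0]) + rng.normal(0, 0.1 + 0.2 * (X[:, 0] > 5), size=500)

knn = NearestNeighbors(n_neighbors=25).fit(X)

def predict_or_abstain(x, var_threshold=0.02):
    _, idx = knn.kneighbors([[x]])
    local_y = y[idx[0]]
    if local_y.var() > var_threshold:   # high local noise: abstain
        return None
    return local_y.mean()               # k-NN regression estimate

for x in (2.0, 8.0):
    pred = predict_or_abstain(x)
    print(x, "abstain" if pred is None else f"predict {pred:.3f}")
```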
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
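A generic sketch of the inference-time sampling idea: perturb the input, re-run a trained model, and summarise the empirical output distribution non-parametrically. The Gaussian input noise and the toy model below are assumptions for illustration, not the paper's sampling scheme.

```python
# Generic inference-time sampling: draw perturbed inputs, run the (already
# trained) model on each, and summarise the outputs without assuming any
# parametric predictive distribution.
import numpy as np

def predictive_spread(model, x, n_samples=200, noise_scale=0.05, seed=0):
    rng = np.random.default_rng(seed)
    outputs = np.array([
        model(x + rng.normal(0, noise_scale, size=x.shape))
        for _ in range(n_samples)
    ])
    # Non-parametric summary: median and an 80% interval.
    return np.percentile(outputs, [10, 50, 90])

toy_model = lambda x: np.sin(x).sum()   # stand-in for a trained model
lo, med, hi = predictive_spread(toy_model, np.array([1.0, 2.0]))
print(f"median {med:.3f}, 80% interval [{lo:.3f}, {hi:.3f}]")
```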
- Variational Imbalanced Regression: Fair Uncertainty Quantification via Probabilistic Smoothing [13.339286071690394]
Existing regression models tend to fall short in both accuracy and uncertainty estimation when the label distribution is imbalanced.
We propose a probabilistic deep learning model, dubbed variational imbalanced regression (VIR).
VIR not only performs well in imbalanced regression but also naturally produces reasonable uncertainty estimates as a byproduct.
arXiv Detail & Related papers (2023-06-11T06:27:06Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss to quantify downstream uncertainty.
We show that the change in the evaluation due to this regularization consistently estimates the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
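For orientation, the sketch below shows the classical delta method with a finite-difference gradient of a downstream evaluation g; the paper's implicit variant avoids forming this explicit gradient and parameter covariance. The toy parameters, covariance matrix, and evaluation function are made up for illustration.

```python
# Classical delta method, for contrast with the paper's implicit variant:
# Var[g(theta_hat)] ~= grad(g)^T . Cov(theta_hat) . grad(g), with the
# gradient of g taken by central finite differences.
import numpy as np

def delta_method_var(g, theta_hat, cov_theta, eps=1e-6):
    grad = np.array([
        (g(theta_hat + eps * e) - g(theta_hat - eps * e)) / (2 * eps)
        for e in np.eye(len(theta_hat))
    ])
    return grad @ cov_theta @ grad

theta_hat = np.array([1.0, 2.0])        # fitted parameters (toy values)
cov_theta = np.array([[0.04, 0.01],     # their estimated covariance
                      [0.01, 0.09]])
g = lambda th: th[0] * np.exp(th[1])    # downstream evaluation functional
print(f"Var[g] ~= {delta_method_var(g, theta_hat, cov_theta):.4f}")
```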
- Prediction Intervals and Confidence Regions for Symbolic Regression Models based on Likelihood Profiles [0.0]
Quantification of uncertainty of regression models is important for the interpretation of models and for decision making.
The linear approximation and so-called likelihood profiles are well-known possibilities for the calculation of confidence and prediction intervals.
These simple and effective techniques have been completely ignored so far in the genetic programming literature.
arXiv Detail & Related papers (2022-09-14T07:07:55Z)
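The sketch below computes a textbook profile-likelihood confidence interval for one parameter of a toy nonlinear model; the model, grid, and Gaussian-error likelihood-ratio cutoff are standard illustrations and not tied to the paper's symbolic regression setting.

```python
# Textbook profile-likelihood CI for parameter b of y = a*exp(b*x) + noise:
# re-fit the nuisance parameter a for each fixed b, then keep every b whose
# likelihood-ratio statistic stays below the chi-square(1) cutoff.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(1)
x = np.linspace(0, 2, 40)
y = 1.5 * np.exp(0.8 * x) + rng.normal(0, 0.2, size=x.size)

def rss(a, b):
    return np.sum((y - a * np.exp(b * x)) ** 2)

def profile_rss(b):  # re-optimise a for each fixed b
    return minimize_scalar(lambda a: rss(a, b),
                           bounds=(0, 10), method="bounded").fun

bs = np.linspace(0.5, 1.1, 301)
prof = np.array([profile_rss(b) for b in bs])
# LR statistic with Gaussian errors (sigma profiled out): n*log(RSS/RSS_min).
stat = x.size * np.log(prof / prof.min())
inside = bs[stat <= chi2.ppf(0.95, df=1)]
print(f"b_hat ~= {bs[prof.argmin()]:.3f}, "
      f"95% profile CI ~ [{inside.min():.3f}, {inside.max():.3f}]")
```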
- Imputation-Free Learning from Incomplete Observations [73.15386629370111]
We introduce the importance-guided stochastic gradient descent (IGSGD) method to train models that perform inference directly from inputs containing missing values, without imputation.
We employ reinforcement learning (RL) to adjust the gradients used to train the models via back-propagation.
Our imputation-free predictions outperform the traditional two-step imputation-based predictions using state-of-the-art imputation methods.
arXiv Detail & Related papers (2021-07-05T12:44:39Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood based model-selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, a direct regression formulation is considered, and uncertainty is modeled by mapping the output space to a family of probability distributions.
How to model uncertainty with present-day regression techniques remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.