Prediction Errors for Penalized Regressions based on Generalized
Approximate Message Passing
- URL: http://arxiv.org/abs/2206.12832v2
- Date: Wed, 29 Jun 2022 17:34:30 GMT
- Title: Prediction Errors for Penalized Regressions based on Generalized
Approximate Message Passing
- Authors: Ayaka Sakata
- Abstract summary: We derive the forms of estimators for the prediction errors: $C_p$ criterion, information criteria, and leave-one-out cross validation (LOOCV) error.
In the framework of GAMP, we show that the information criteria can be expressed by using the variance of the estimates.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We discuss the prediction accuracy of assumed statistical models in terms of
prediction errors for the generalized linear model and penalized maximum
likelihood methods. We derive the forms of estimators for the prediction
errors: $C_p$ criterion, information criteria, and leave-one-out cross
validation (LOOCV) error, using the generalized approximate message passing
(GAMP) algorithm and replica method. These estimators coincide with each other
when the number of model parameters is sufficiently small; however, a
discrepancy arises between them, particularly in the overparametrized region where
the number of model parameters exceeds the data dimension. In this paper, we
review the prediction errors and corresponding estimators, and discuss their
differences. In the framework of GAMP, we show that the information criteria
can be expressed by using the variance of the estimates. Further, we
demonstrate how to approach LOOCV error from the information criteria by
utilizing the expression provided by GAMP.
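The $C_p$ criterion and the leave-one-out cross validation error discussed in the abstract can be illustrated on a simple penalized model. The sketch below uses ridge regression as a stand-in penalized estimator (not the paper's GAMP-based construction); the data, penalty strength, and variable names are illustrative assumptions. It computes a $C_p$-style estimate from the effective degrees of freedom and the exact LOOCV error via the standard leverage shortcut, which for linear smoothers requires no refitting.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, lam = 100, 20, 1.0                     # illustrative sizes and penalty
X = rng.normal(size=(n, p))
beta = rng.normal(size=p)
y = X @ beta + rng.normal(scale=0.5, size=n)

# Ridge fit: beta_hat = (X'X + lam*I)^{-1} X'y; H is the hat (smoother) matrix
H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
y_hat = H @ y
resid = y - y_hat
df = np.trace(H)                              # effective degrees of freedom
sigma2 = resid @ resid / (n - df)             # plug-in noise-variance estimate

# C_p-style estimator: training error plus a variance-based complexity penalty
cp = resid @ resid / n + 2 * sigma2 * df / n

# Exact LOOCV error via the leverage shortcut (valid for linear smoothers)
h = np.diag(H)
loocv = np.mean((resid / (1 - h)) ** 2)
```

When $p \ll n$ the two estimates are close, mirroring the abstract's observation that the estimators coincide for sufficiently few parameters; the leverage correction is what separates LOOCV from the penalized training error as $df$ grows.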
Related papers
- Precise Model Benchmarking with Only a Few Observations [6.092112060364272]
We propose an empirical Bayes (EB) estimator that balances direct and regression estimates for each subgroup separately.
EB consistently provides more precise estimates of LLM performance than the direct and regression approaches.
arXiv Detail & Related papers (2024-10-07T17:26:31Z)
- Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs).
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z)
- Leveraging Variational Autoencoders for Parameterized MMSE Estimation [10.141454378473972]
We propose a variational autoencoder-based framework for parameterizing a conditional linear minimum mean squared error estimator.
The derived estimator is shown to approximate the minimum mean squared error estimator by utilizing the variational autoencoder as a generative prior for the estimation problem.
We conduct a rigorous analysis by bounding the difference between the proposed and the minimum mean squared error estimator.
arXiv Detail & Related papers (2023-07-11T15:41:34Z)
- A Statistical Model for Predicting Generalization in Few-Shot Classification [6.158812834002346]
We introduce a Gaussian model of the feature distribution to predict the generalization error.
We show that our approach outperforms alternatives such as the leave-one-out cross-validation strategy.
arXiv Detail & Related papers (2022-12-13T10:21:15Z)
- Understanding Factual Errors in Summarization: Errors, Summarizers, Datasets, Error Detectors [105.12462629663757]
In this work, we aggregate factuality error annotations from nine existing datasets and stratify them according to the underlying summarization model.
We compare performance of state-of-the-art factuality metrics, including recent ChatGPT-based metrics, on this stratified benchmark and show that their performance varies significantly across different types of summarization models.
arXiv Detail & Related papers (2022-05-25T15:26:48Z)
- Benign-Overfitting in Conditional Average Treatment Effect Prediction with Linear Regression [14.493176427999028]
We study the benign overfitting theory in the prediction of the conditional average treatment effect (CATE) with linear regression models.
We show that the T-learner fails to achieve consistency except under random assignment, while the IPW-learner's risk converges to zero if the propensity score is known.
arXiv Detail & Related papers (2022-02-10T18:51:52Z)
- CovarianceNet: Conditional Generative Model for Correct Covariance Prediction in Human Motion Prediction [71.31516599226606]
We present a new method to correctly predict the uncertainty associated with the predicted distribution of future trajectories.
Our approach, CovarianceNet, is based on a Conditional Generative Model with Gaussian latent variables.
arXiv Detail & Related papers (2021-09-07T09:38:24Z)
- Minimum Excess Risk in Bayesian Learning [23.681494934015927]
We analyze the best achievable performance of Bayesian learning under generative models by defining and upper-bounding the minimum excess risk (MER).
The definition of MER provides a principled way to define different notions of uncertainties in Bayesian learning.
arXiv Detail & Related papers (2020-12-29T17:41:30Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Performance metrics for intervention-triggering prediction models do not reflect an expected reduction in outcomes from using the model [71.9860741092209]
Clinical researchers often select among and evaluate risk prediction models.
Standard metrics calculated from retrospective data are only related to model utility under certain assumptions.
When predictions are delivered repeatedly throughout time, the relationship between standard metrics and utility is further complicated.
arXiv Detail & Related papers (2020-06-02T16:26:49Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.