Robust leave-one-out cross-validation for high-dimensional Bayesian models
- URL: http://arxiv.org/abs/2209.09190v2
- Date: Wed, 27 Sep 2023 10:22:38 GMT
- Title: Robust leave-one-out cross-validation for high-dimensional Bayesian models
- Authors: Luca Silva and Giacomo Zanella
- Abstract summary: Leave-one-out cross-validation (LOO-CV) is a popular method for estimating out-of-sample predictive accuracy.
Here we propose and analyze a novel mixture estimator to compute LOO-CV criteria.
Our method retains the simplicity and computational convenience of classical approaches, while guaranteeing finite variance of the resulting estimators.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Leave-one-out cross-validation (LOO-CV) is a popular method for estimating
out-of-sample predictive accuracy. However, computing LOO-CV criteria can be
computationally expensive due to the need to fit the model multiple times. In
the Bayesian context, importance sampling provides a possible solution but
classical approaches can easily produce estimators whose asymptotic variance is
infinite, making them potentially unreliable. Here we propose and analyze a
novel mixture estimator to compute Bayesian LOO-CV criteria. Our method retains
the simplicity and computational convenience of classical approaches, while
guaranteeing finite asymptotic variance of the resulting estimators. Both
theoretical and numerical results are provided to illustrate the improved
robustness and efficiency. The computational benefits are particularly
significant in high-dimensional problems, allowing Bayesian LOO-CV to be
performed for a broader range of models and for datasets with highly influential
observations. The proposed methodology is easily implementable in standard
probabilistic programming software and has a computational cost roughly
equivalent to fitting the original model once.
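As a toy illustration of both the failure mode and the mixture idea (a sketch under simplifying assumptions, not the paper's algorithm: the conjugate Gaussian model, the equal-weight two-component mixture, and all constants below are illustrative), the following snippet compares the classical importance-sampling estimator of the leave-one-out predictive density, whose weights 1/p(y_i | theta) are unbounded, with a mixture-based estimator whose weights are bounded by 2:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy conjugate model: y_j ~ N(theta, sigma2), theta ~ N(0, tau2).
sigma2, tau2 = 1.0, 100.0
y = rng.normal(2.0, 1.0, size=50)
y[0] = 9.0                      # one highly influential observation
i = 0

def posterior(y_obs):
    """Closed-form N(mean, var) posterior for the conjugate model."""
    var = 1.0 / (len(y_obs) / sigma2 + 1.0 / tau2)
    return var * y_obs.sum() / sigma2, var

# Exact LOO predictive density p(y_i | y_{-i}) for reference.
m_loo, v_loo = posterior(np.delete(y, i))
exact = stats.norm.pdf(y[i], m_loo, np.sqrt(v_loo + sigma2))

# Classical IS estimator: sample the full posterior and weight by
# 1 / p(y_i | theta).  The weights are unbounded, and for influential
# y_i the estimator can have infinite asymptotic variance.
S = 100_000
m, v = posterior(y)
theta = rng.normal(m, np.sqrt(v), size=S)
classical = 1.0 / np.mean(1.0 / stats.norm.pdf(y[i], theta, np.sqrt(sigma2)))

# Mixture-style estimator: sample an equal mixture of the full and
# leave-one-out posteriors; the importance weights are bounded by 2.
th = np.concatenate([rng.normal(m, np.sqrt(v), S // 2),
                     rng.normal(m_loo, np.sqrt(v_loo), S // 2)])
q = 0.5 * (stats.norm.pdf(th, m, np.sqrt(v))
           + stats.norm.pdf(th, m_loo, np.sqrt(v_loo)))
w = stats.norm.pdf(th, m_loo, np.sqrt(v_loo)) / q          # bounded by 2
mixture = np.mean(w * stats.norm.pdf(y[i], th, np.sqrt(sigma2)))

print(f"exact={exact:.5f}  classical={classical:.5f}  mixture={mixture:.5f}")
```

In this toy both posteriors are available in closed form; the paper's point is that an analogous mixture can be targeted with a single run in standard probabilistic programming software, at roughly the cost of fitting the original model once.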
Related papers
- A sparse PAC-Bayesian approach for high-dimensional quantile prediction [0.0]
This paper presents a novel probabilistic machine learning approach for high-dimensional quantile prediction.
It uses a pseudo-Bayesian framework with a scaled Student-t prior and Langevin Monte Carlo for efficient computation.
Its effectiveness is validated through simulations and real-world data, where it performs competitively against established frequentist and Bayesian techniques.
arXiv Detail & Related papers (2024-09-03T08:01:01Z)
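A minimal sketch of the two ingredients named in the entry above: the pinball (quantile) loss defining a Gibbs pseudo-posterior, sampled with unadjusted Langevin Monte Carlo. The temperature lam, the Student-t scale s and degrees of freedom nu, and the step size eta are illustrative guesses, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse quantile regression: recover the conditional tau-quantile.
n, p, tau = 200, 50, 0.5
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[:3] = [2.0, -1.5, 1.0]
y = X @ beta_true + rng.standard_t(df=3, size=n)

lam, nu, s = 2.0, 3.0, 0.1   # temperature and scaled Student-t prior (guesses)

def grad_log_target(beta):
    """(Sub)gradient of the log pseudo-posterior:
    -lam * pinball loss plus a scaled Student-t log prior."""
    r = y - X @ beta
    g_loss = X.T @ (tau - (r < 0.0))                  # subgradient of -loss
    g_prior = -(nu + 1.0) * beta / (nu * s ** 2 + beta ** 2)
    return lam * g_loss + g_prior

# Unadjusted Langevin dynamics: gradient step plus injected Gaussian noise.
eta, beta = 1e-4, np.zeros(p)
samples = []
for t in range(20_000):
    beta = (beta + eta * grad_log_target(beta)
            + np.sqrt(2.0 * eta) * rng.normal(size=p))
    if t >= 10_000:
        samples.append(beta.copy())

print(np.mean(samples, axis=0)[:5])   # first coordinates near beta_true
```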
- Learning Unnormalized Statistical Models via Compositional Optimization [73.30514599338407]
Noise-contrastive estimation (NCE) formulates the objective as the logistic loss of discriminating real data from artificial noise.
In this paper, we study a direct approach for optimizing the negative log-likelihood of unnormalized models.
arXiv Detail & Related papers (2023-06-13T01:18:16Z)
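A compact sketch of the noise-contrastive objective described above: an unnormalized 1-D Gaussian model with a learned log-normalizer c is fit by logistic discrimination of data against Gaussian noise (the distributions, learning rate, and iteration count are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(2)
sig = lambda z: 1.0 / (1.0 + np.exp(-np.clip(z, -30.0, 30.0)))

# Data from N(1, 0.5^2); unnormalized model f(x) = -(x-mu)^2/(2 s^2) + c,
# where c plays the role of a learned log normalizing constant.
x_d = rng.normal(1.0, 0.5, size=5_000)
x_n = rng.normal(0.0, 2.0, size=5_000)          # noise distribution N(0, 4)
log_pn = lambda x: -0.5 * (x / 2.0) ** 2 - np.log(2.0 * np.sqrt(2.0 * np.pi))

mu, log_s, c = 0.0, 0.0, 0.0
lr = 0.05
for _ in range(2_000):
    s2 = np.exp(2.0 * log_s)
    G = lambda x: -(x - mu) ** 2 / (2.0 * s2) + c - log_pn(x)
    pd, pn = sig(G(x_d)), sig(G(x_n))

    def grads(x):                 # dG/d(mu, log_s, c) at the samples
        return np.stack([(x - mu) / s2, (x - mu) ** 2 / s2, np.ones_like(x)])

    # Ascent direction of the logistic (data-vs-noise) log-likelihood.
    g = (grads(x_d) * (1.0 - pd)).mean(1) - (grads(x_n) * pn).mean(1)
    mu, log_s, c = np.array([mu, log_s, c]) + lr * g

print(f"mu={mu:.3f}  sigma={np.exp(log_s):.3f}")   # should approach 1.0, 0.5
```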
- Scalable Estimation for Structured Additive Distributional Regression [0.0]
We propose a novel backfitting algorithm, based on the ideas of gradient descent, that can deal with virtually any amount of data on a conventional laptop.
Performance is evaluated using an extensive simulation study and an exceptionally challenging and unique example of lightning count prediction over Austria.
arXiv Detail & Related papers (2023-01-13T14:59:42Z)
- Reliability analysis of discrete-state performance functions via adaptive sequential sampling with detection of failure surfaces [0.0]
The paper presents a new efficient and robust method for rare event probability estimation.
The method can estimate the probabilities of multiple failure types and can accommodate such information to increase the accuracy of the estimated probabilities.
arXiv Detail & Related papers (2022-08-04T05:59:25Z)
- Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints, yielding a bias-constrained estimator (BCE).
A second motivation for BCE is in applications where multiple estimates of the same unknown are averaged for improved performance.
arXiv Detail & Related papers (2021-10-24T10:23:51Z)
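A toy rendition of the bias-constraint idea for a scalar location problem (the linear estimator g(x) = w*x, the penalty weight, and the training loop are illustrative stand-ins for the paper's deep networks): without the penalty, the minimum-MSE estimator shrinks toward the prior mean and is biased, while a large squared-bias penalty pushes it back toward the unbiased w = 1:

```python
import numpy as np

rng = np.random.default_rng(3)
tau2, sig2 = 1.0, 1.0            # prior and noise variances (toy choice)

def fit(lam, steps=3_000, lr=0.01, m=64, k=64):
    """Fit g(x) = w * x by SGD on MSE + lam * squared-bias penalty,
    with the per-theta bias estimated from k noise realizations."""
    w = 0.0
    for _ in range(steps):
        theta = rng.normal(0.0, np.sqrt(tau2), size=(m, 1))
        x = theta + rng.normal(0.0, np.sqrt(sig2), size=(m, k))
        mse_grad = 2.0 * np.mean((w * x - theta) * x)
        bias = np.mean(w * x - theta, axis=1, keepdims=True)
        bias_grad = 2.0 * np.mean(bias * x.mean(axis=1, keepdims=True))
        w -= lr * (mse_grad + lam * bias_grad)
    return w

print(fit(lam=0.0))    # ~ tau2 / (tau2 + sig2) = 0.5: biased MMSE shrinkage
print(fit(lam=50.0))   # ~ 1.0: approximately unbiased
```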
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flows are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
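The alternating scheme can be sketched as follows. This is schematic only: the variational family below is a diagonal-Gaussian stand-in for the normalizing flow, and the model, update schedule, and memory of true-model evaluations are illustrative assumptions, not NoFAS itself:

```python
import torch

torch.manual_seed(0)

# Black-box "expensive" likelihood we want to call as rarely as possible.
y = torch.tensor([1.8, 2.1, 2.3])
def expensive_log_lik(z):                 # z: (n, 1) -> (n,)
    return -0.5 * ((y - z) ** 2).sum(dim=1)

# Cheap neural surrogate for the expensive log-likelihood.
surrogate = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(), torch.nn.Linear(32, 1))

mu = torch.zeros(1, requires_grad=True)   # variational parameters
log_s = torch.zeros(1, requires_grad=True)
opt_q = torch.optim.Adam([mu, log_s], lr=0.02)
opt_s = torch.optim.Adam(surrogate.parameters(), lr=0.01)
bank_z, bank_f = [], []                   # memory of true model calls

for step in range(500):
    if step % 10 == 0:                    # (a) query model, refit surrogate
        z_new = (mu + log_s.exp() * torch.randn(4, 1)).detach()
        bank_z.append(z_new)
        bank_f.append(expensive_log_lik(z_new))
        Z, F = torch.cat(bank_z), torch.cat(bank_f)
        for _ in range(20):
            opt_s.zero_grad()
            ((surrogate(Z).squeeze(-1) - F) ** 2).mean().backward()
            opt_s.step()
    opt_q.zero_grad()                     # (b) variational update
    eps = torch.randn(64, 1)
    z = mu + log_s.exp() * eps            # reparameterization trick
    log_q = -0.5 * (eps ** 2).sum(1) - log_s
    log_p = surrogate(z).squeeze(-1) - 0.5 * (z ** 2).sum(1)   # lik + prior
    (log_q - log_p).mean().backward()     # negative surrogate ELBO
    opt_q.step()

print(mu.item(), log_s.exp().item())      # exact posterior: N(1.55, 0.5^2)
```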
- Evaluating State-of-the-Art Classification Models Against Bayes Optimality [106.50867011164584]
We show that we can compute the exact Bayes error of generative models learned using normalizing flows.
We use our approach to conduct a thorough investigation of state-of-the-art classification models.
arXiv Detail & Related papers (2021-06-07T06:21:20Z)
- Scalable Marginal Likelihood Estimation for Model Selection in Deep Learning [78.83598532168256]
Marginal-likelihood-based model selection is rarely used in deep learning due to estimation difficulties.
Our work shows that marginal likelihoods can improve generalization and be useful when validation data is unavailable.
arXiv Detail & Related papers (2021-04-11T09:50:24Z)
- Approximate Cross-validated Mean Estimates for Bayesian Hierarchical Regression Models [6.824747267214373]
We introduce a novel procedure for obtaining cross-validated predictive estimates for Bayesian hierarchical regression models.
We provide theoretical results and demonstrate its efficacy on publicly available data and in simulations.
arXiv Detail & Related papers (2020-11-29T00:00:20Z)
- Amortized Conditional Normalized Maximum Likelihood: Reliable Out of Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z)
- SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series.
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
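The randomized-truncation idea can be illustrated on a toy marginal likelihood that is known in closed form; the geometric truncation distribution below is a simple illustrative choice (the paper designs this distribution so that variance and expected compute remain under control):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Toy latent-variable model: y ~ N(theta, 1), theta ~ N(0, 1),
# so the marginal likelihood is p(y) = N(y; 0, 2) exactly.
y_obs = 1.3
exact = stats.norm.logpdf(y_obs, 0.0, np.sqrt(2.0))

def sumo(r=0.7):
    """Russian-roulette telescoping of the biased log(mean weight)
    estimators into a single-draw unbiased estimate of log p(y)."""
    K = rng.geometric(1.0 - r)                # P(K >= k) = r**(k-1)
    theta = rng.normal(0.0, 1.0, size=K + 1)  # shared proposal stream (prior)
    w = stats.norm.pdf(y_obs, theta, 1.0)     # importance weights
    iw = np.log(np.cumsum(w) / np.arange(1, K + 2))   # IWAE_1 .. IWAE_{K+1}
    return iw[0] + np.sum((iw[1:] - iw[:-1]) / r ** np.arange(K))

draws = [sumo() for _ in range(50_000)]
print(exact, np.mean(draws))                  # close on average (unbiased)
```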