Performance of Bayesian linear regression in a model with mismatch
- URL: http://arxiv.org/abs/2107.06936v1
- Date: Wed, 14 Jul 2021 18:50:13 GMT
- Title: Performance of Bayesian linear regression in a model with mismatch
- Authors: Jean Barbier, Wei-Kuo Chen, Dmitry Panchenko, and Manuel Sáenz
- Abstract summary: We analyze the performance of an estimator given by the mean of a log-concave Bayesian posterior distribution with Gaussian prior.
This inference model can be rephrased as a version of the Gardner model in spin glasses.
- Score: 8.60118148262922
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: For a model of high-dimensional linear regression with random design, we
analyze the performance of an estimator given by the mean of a log-concave
Bayesian posterior distribution with Gaussian prior. The model is mismatched in
the following sense: like the model assumed by the statistician, the
label-generating process is linear in the input data, but both the
ground-truth prior on the classifier and the Gaussian noise variance are
unknown to her. This
inference model can be rephrased as a version of the Gardner model in spin
glasses and, using the cavity method, we provide fixed-point equations for
various overlap order parameters, yielding in particular an expression for the
mean-square reconstruction error on the classifier (under an assumption of
uniqueness of solutions). As a direct corollary we obtain an expression for the
free energy. Similar models have already been studied by Shcherbina and Tirozzi
and by Talagrand, but our arguments are more straightforward and some
assumptions are relaxed. An interesting consequence of our analysis is that in
the random design setting of ridge regression, the performance of the posterior
mean is independent of the noise variance (or "temperature") assumed by the
statistician, and matches that of the usual (zero-temperature) ridge
estimator.
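This ridge corollary is easy to check numerically. Below is a minimal, hedged sketch (the dimensions, regularization strength lam, and Langevin step size are illustrative choices, not taken from the paper): it samples the posterior proportional to exp(-H(b)/T), with H(b) = ||y - Xb||^2/2 + lam*||b||^2/2, by unadjusted Langevin dynamics at several temperatures T, and compares the empirical posterior mean with the zero-temperature ridge estimator.

```python
# Hedged sketch (not from the paper): check that the Gaussian-posterior mean
# matches the zero-temperature ridge estimator at every assumed temperature T.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 100, 20, 1.0                      # illustrative sizes
X = rng.normal(size=(n, d))
y = X @ rng.normal(size=d) + rng.normal(size=n)

A, c = X.T @ X + lam * np.eye(d), X.T @ y
ridge = np.linalg.solve(A, c)                 # zero-temperature ridge estimator

def posterior_mean_langevin(T, eta=1e-4, steps=200_000, burn=50_000):
    """Empirical mean of exp(-H(b)/T), H(b) = ||y-Xb||^2/2 + lam*||b||^2/2,
    via unadjusted Langevin dynamics: b <- b - eta*grad(H/T) + sqrt(2*eta)*xi."""
    b, total = np.zeros(d), np.zeros(d)
    for t in range(steps):
        b = b - eta * (A @ b - c) / T + np.sqrt(2 * eta) * rng.normal(size=d)
        if t >= burn:
            total += b
    return total / (steps - burn)

for T in [0.1, 1.0, 10.0]:
    mu = posterior_mean_langevin(T)
    # small residual is Monte Carlo error; the mean itself does not depend on T
    print(f"T={T:5.1f}  max|posterior mean - ridge| = {np.abs(mu - ridge).max():.3f}")
```

For a Gaussian target the Langevin fixed point has exactly the ridge mean at any T; only the posterior covariance scales with T, which is the mechanism behind the temperature independence noted above.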
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Scaling and renormalization in high-dimensional regression [72.59731158970894]
This paper presents a succinct derivation of the training and generalization performance of a variety of high-dimensional ridge regression models.
We provide an introduction and review of recent results on these topics, aimed at readers with backgrounds in physics and deep learning.
arXiv Detail & Related papers (2024-05-01T15:59:00Z) - High-dimensional analysis of ridge regression for non-identically distributed data with a variance profile [0.0]
We study the predictive risk of the ridge estimator for linear regression with a variance profile.
For a certain class of variance profiles, our work highlights the emergence of the well-known double descent phenomenon.
We also investigate the similarities and differences that exist with the standard setting of independent and identically distributed data.
arXiv Detail & Related papers (2024-03-29T14:24:49Z) - A probabilistic, data-driven closure model for RANS simulations with aleatoric, model uncertainty [1.8416014644193066]
We propose a data-driven closure model for Reynolds-averaged Navier-Stokes (RANS) simulations that incorporates aleatoric, model uncertainty.
A fully Bayesian formulation is proposed, combined with a sparsity-inducing prior in order to identify regions in the problem domain where the parametric closure is insufficient.
arXiv Detail & Related papers (2023-07-05T16:53:31Z) - Engression: Extrapolation through the Lens of Distributional Regression [2.519266955671697]
We propose a neural network-based distributional regression methodology called 'engression'.
An engression model is generative in the sense that we can sample from the fitted conditional distribution and is also suitable for high-dimensional outcomes.
We show that engression can successfully perform extrapolation under assumptions such as monotonicity, whereas traditional regression approaches such as least-squares or quantile regression fall short under the same assumptions (a toy training sketch follows this list).
arXiv Detail & Related papers (2023-07-03T08:19:00Z) - High-dimensional analysis of double descent for linear regression with random projections [0.0]
We consider linear regression problems with a varying number of random projections, where we provably exhibit a double descent curve for a fixed prediction problem.
We first consider the ridge regression estimator and re-interpret earlier results using classical notions from non-parametric statistics.
We then compute equivalents of the generalization performance (in terms of bias and variance) of the minimum-norm least-squares fit with random projections, providing simple expressions for the double descent phenomenon (a minimal numerical demo follows this list).
arXiv Detail & Related papers (2023-03-02T15:58:09Z) - Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z) - Efficient Truncated Linear Regression with Unknown Noise Variance [26.870279729431328]
We provide the first computationally and statistically efficient estimators for truncated linear regression when the noise variance is unknown.
Our estimator is based on an efficient implementation of Projected Gradient Descent on the negative log-likelihood of the truncated sample.
arXiv Detail & Related papers (2022-08-25T12:17:37Z) - Uncertainty Inspired RGB-D Saliency Detection [70.50583438784571]
We propose the first framework to employ uncertainty for RGB-D saliency detection by learning from the data labeling process.
Inspired by the saliency data labeling process, we propose a generative architecture to achieve probabilistic RGB-D saliency detection.
Results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.
arXiv Detail & Related papers (2020-09-07T13:01:45Z) - Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence (a sketch of one classical kernel score estimator follows this list).
arXiv Detail & Related papers (2020-05-20T15:01:03Z) - SUMO: Unbiased Estimation of Log Marginal Probability for Latent Variable Models [80.22609163316459]
We introduce an unbiased estimator of the log marginal likelihood and its gradients for latent variable models based on randomized truncation of infinite series (the truncation trick is sketched after this list).
We show that models trained using our estimator give better test-set likelihoods than a standard importance-sampling based approach for the same average computational cost.
arXiv Detail & Related papers (2020-04-01T11:49:30Z)
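As referenced in the engression entry above, here is a hedged toy sketch of an engression-style generative regressor: a network g(x, eps) trained with a two-sample estimate of the energy score E|Y - g(X,eps)| - (1/2) E|g(X,eps) - g(X,eps')|. The data, architecture, and hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Hedged engression-style sketch: a generative regressor g(x, eps) trained
# with a two-sample estimate of the energy score. Everything here is an
# illustrative assumption (toy data, small MLP, hyperparameters).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy heteroscedastic data: y = x^3 + |x|-scaled noise
n = 2000
x = torch.rand(n, 1) * 4 - 2
y = x ** 3 + (0.5 + x.abs()) * torch.randn(n, 1)

net = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                    nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sample(xb):
    # One draw from the fitted conditional distribution: feed fresh noise
    eps = torch.randn(xb.shape[0], 1)
    return net(torch.cat([xb, eps], dim=1))

for step in range(3000):
    idx = torch.randint(0, n, (256,))
    xb, yb = x[idx], y[idx]
    g1, g2 = sample(xb), sample(xb)           # two independent noise draws
    fit = 0.5 * ((yb - g1).abs().mean() + (yb - g2).abs().mean())
    spread = (g1 - g2).abs().mean()           # keeps the model genuinely random
    loss = fit - 0.5 * spread                 # two-sample energy score
    opt.zero_grad(); loss.backward(); opt.step()

# Being generative, the model yields conditional samples, not just a point fit
xt = torch.linspace(-2, 2, 5).unsqueeze(1)
with torch.no_grad():
    draws = torch.stack([sample(xt) for _ in range(500)])
print(draws.mean(0).squeeze())                # approximate conditional means
```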
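The double-descent entry can likewise be illustrated with a short, hedged demo (sizes and the Gaussian projection scheme are illustrative choices): the minimum-norm least-squares fit on randomly projected features typically shows a test-error peak near the interpolation threshold p = n.

```python
# Hedged double-descent demo: min-norm least squares on random projections.
# Test error typically peaks near the interpolation threshold p = n.
import numpy as np

rng = np.random.default_rng(0)
d, n, n_test, sigma = 200, 50, 2000, 0.5
w_star = rng.normal(size=d) / np.sqrt(d)        # planted regression vector

X = rng.normal(size=(n, d))
y = X @ w_star + sigma * rng.normal(size=n)
X_test = rng.normal(size=(n_test, d))
y_test = X_test @ w_star                        # noiseless test labels

for p in [10, 25, 40, 50, 60, 100, 200]:        # number of random projections
    errs = []
    for _ in range(30):                         # average over projections
        S = rng.normal(size=(d, p)) / np.sqrt(d)
        beta = np.linalg.pinv(X @ S) @ y        # minimum-norm least squares
        errs.append(np.mean((X_test @ S @ beta - y_test) ** 2))
    print(f"p={p:4d}  test MSE={np.mean(errs):.3f}")
```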
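For the nonparametric score-estimator entry, the sketch below implements one classical member of the regularized-regression family that the paper unifies: a Stein-identity-based kernel estimator with an RBF kernel. The bandwidth h and ridge eta are illustrative, and this is not the paper's proposed iterative scheme.

```python
# Hedged sketch of a kernel score estimator built from Stein's identity:
# solve (K + eta*I) G = -B, where B stacks column sums of kernel gradients.
# Bandwidth h and ridge eta are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 1))                     # N(0,1); true score is -x

def kernel_score(x, h=1.0, eta=0.1):
    diff = x[:, None, :] - x[None, :, :]          # pairwise x_i - x_j, (n,n,d)
    K = np.exp(-(diff ** 2).sum(-1) / (2 * h))    # RBF kernel matrix
    # B[j,d] = sum_i d/dx_{i,d} k(x_i, x_j) = -(1/h) sum_i K_ij (x_i - x_j)_d
    B = -np.einsum("ij,ijd->jd", K, diff) / h
    return -np.linalg.solve(K + eta * np.eye(len(x)), B)

g = kernel_score(x)
print(np.corrcoef(g[:, 0], -x[:, 0])[0, 1])       # close to 1: recovers -x
```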
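Finally, the randomized-truncation ("Russian roulette") trick behind the SUMO entry is easy to demonstrate on a generic convergent series; the geometric series below is a stand-in for SUMO's log-likelihood correction terms, not the actual estimator.

```python
# Hedged sketch of randomized truncation: an unbiased estimate of an infinite
# sum S = sum_k a_k from a random, finite truncation. Here a_k = 2^{-k}, S = 1.
import numpy as np

rng = np.random.default_rng(0)
p = 0.3                                  # geometric stopping probability

def russian_roulette():
    K = rng.geometric(p)                 # random truncation level, K >= 1
    k = np.arange(1, K + 1)
    a = 0.5 ** k                         # series terms (stand-in for SUMO's)
    survival = (1 - p) ** (k - 1)        # P(K >= k) under Geometric(p)
    return np.sum(a / survival)          # reweighting restores unbiasedness

est = [russian_roulette() for _ in range(100_000)]
print(np.mean(est))                      # approaches the true sum S = 1
```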
This list is automatically generated from the titles and abstracts of the papers in this site.
This site makes no guarantee about the quality of the information presented and is not responsible for any consequences of its use.