Variational Inference in high-dimensional linear regression
- URL: http://arxiv.org/abs/2104.12232v1
- Date: Sun, 25 Apr 2021 19:09:38 GMT
- Title: Variational Inference in high-dimensional linear regression
- Authors: Sumit Mukherjee and Subhabrata Sen
- Abstract summary: We study high-dimensional Bayesian linear regression with product priors.
We provide sufficient conditions for the leading-order correctness of the naive mean-field approximation to the log-normalizing constant of the posterior distribution.
- Score: 5.065420156125522
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study high-dimensional Bayesian linear regression with product priors.
Using the nascent theory of non-linear large deviations (Chatterjee and
Dembo, 2016), we derive sufficient conditions for the leading-order correctness
of the naive mean-field approximation to the log-normalizing constant of the
posterior distribution. Subsequently, assuming a true linear model for the
observed data, we derive a limiting infinite dimensional variational formula
for the log normalizing constant of the posterior. Furthermore, we establish
that under an additional "separation" condition, the variational problem has a
unique optimizer, and this optimizer governs the probabilistic properties of
the posterior distribution. We provide intuitive sufficient conditions for the
validity of this "separation" condition. Finally, we illustrate our results on
concrete examples with specific design matrices.
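As a concrete illustration of the naive mean-field approximation discussed above, the following sketch runs coordinate-ascent variational inference (CAVI) for Bayesian linear regression with an independent Gaussian product prior. The dimensions, variances, and design below are illustrative assumptions for the demo, not taken from the paper; for a Gaussian posterior, the mean-field optimizer is known to match the exact posterior mean while underestimating the marginal variances.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: true linear model, i.i.d. N(0, tau2) product prior
n, p = 200, 20
sigma2, tau2 = 1.0, 1.0                       # noise and prior variances
X = rng.standard_normal((n, p)) / np.sqrt(n)
beta_true = rng.standard_normal(p)
y = X @ beta_true + np.sqrt(sigma2) * rng.standard_normal(n)

XtX, Xty = X.T @ X, X.T @ y

# Naive mean-field family: q(beta) = prod_i N(mu_i, s2_i).
# The coordinate variances are fixed by the diagonal of the posterior
# precision; CAVI then iterates closed-form updates on the means.
s2 = 1.0 / (np.diag(XtX) / sigma2 + 1.0 / tau2)
mu = np.zeros(p)
for _ in range(200):
    for i in range(p):
        # correlation of y with coordinate i, excluding its own contribution
        r = Xty[i] - XtX[i] @ mu + XtX[i, i] * mu[i]
        mu[i] = s2[i] * r / sigma2

# Exact Gaussian posterior, for comparison
Sigma = np.linalg.inv(XtX / sigma2 + np.eye(p) / tau2)
mu_exact = Sigma @ Xty / sigma2
print(float(np.max(np.abs(mu - mu_exact))))  # tiny: the mean is recovered
```

The inner loop is Gauss-Seidel on the posterior's normal equations, so on this well-conditioned design it converges quickly; the mean-field variances `s2`, however, stay below the exact marginal variances `diag(Sigma)`.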
Related papers
- Quasi-Bayes meets Vines [2.3124143670964448]
We propose a different way to extend Quasi-Bayesian prediction to high dimensions through the use of Sklar's theorem.
We show that our proposed Quasi-Bayesian Vine (QB-Vine) is a fully non-parametric density estimator with an analytical form.
arXiv Detail & Related papers (2024-06-18T16:31:02Z)
- Flat Minima in Linear Estimation and an Extended Gauss Markov Theorem [0.0]
We derive simple and explicit formulas for the optimal estimator in the cases of Nuclear and Spectral norms.
We analytically derive the generalization error in multiple random matrix ensembles, and compare with Ridge regression.
arXiv Detail & Related papers (2023-11-18T14:45:06Z)
- Bayesian Renormalization [68.8204255655161]
We present a fully information theoretic approach to renormalization inspired by Bayesian statistical inference.
The main insight of Bayesian Renormalization is that the Fisher metric defines a correlation length that plays the role of an emergent RG scale.
We provide insight into how the Bayesian Renormalization scheme relates to existing methods for data compression and data generation.
arXiv Detail & Related papers (2023-05-17T18:00:28Z)
- High-dimensional analysis of double descent for linear regression with random projections [0.0]
We consider linear regression problems with a varying number of random projections, where we provably exhibit a double descent curve for a fixed prediction problem.
We first consider the ridge regression estimator and re-interpret earlier results using classical notions from non-parametric statistics.
We then compute equivalents of the generalization performance (in terms of bias and variance) of the minimum norm least-squares fit with random projections, providing simple expressions for the double descent phenomenon.
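The double descent phenomenon in this setting can be reproduced with a minimal simulation (the dimensions, noise level, and Gaussian design below are illustrative assumptions, not the paper's exact setting): for a fixed prediction problem, as the random projection dimension `m` crosses the sample size `n`, the test error of the min-norm least-squares fit spikes and then descends again.

```python
import numpy as np

rng = np.random.default_rng(1)

# Fixed regression problem; only the projection dimension m varies
n, p = 100, 400
X = rng.standard_normal((n, p))
beta = rng.standard_normal(p) / np.sqrt(p)
y = X @ beta + 0.5 * rng.standard_normal(n)
X_test = rng.standard_normal((2000, p))
y_test = X_test @ beta

def risk(m, trials=10):
    """Average test error of min-norm least squares on m random projections."""
    errs = []
    for _ in range(trials):
        S = rng.standard_normal((p, m)) / np.sqrt(p)   # random projection
        w = np.linalg.pinv(X @ S) @ y                   # min-norm least squares
        errs.append(np.mean((X_test @ S @ w - y_test) ** 2))
    return float(np.mean(errs))

# m = n = 100 is the interpolation threshold, where the error peaks
errors = {m: risk(m) for m in (20, 100, 300)}
```

The spike at `m = n` comes from the near-singularity of `X @ S` at the interpolation threshold, which amplifies the label noise through the pseudoinverse.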
arXiv Detail & Related papers (2023-03-02T15:58:09Z)
- Amortized backward variational inference in nonlinear state-space models [0.0]
We consider the problem of state estimation in general state-space models using variational inference.
We establish for the first time that, under mixing assumptions, the variational approximation of expectations of additive state functionals induces an error which grows at most linearly in the number of observations.
arXiv Detail & Related papers (2022-06-01T08:35:54Z)
- Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z)
- On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z)
- Near-optimal inference in adaptive linear regression [60.08422051718195]
Even simple methods like least squares can exhibit non-normal behavior when data is collected in an adaptive manner.
We propose a family of online debiasing estimators to correct these distributional anomalies in least squares estimation.
We demonstrate the usefulness of our theory via applications to multi-armed bandits, autoregressive time series estimation, and active learning with exploration.
arXiv Detail & Related papers (2021-07-05T21:05:11Z)
- Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
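One such difference is easy to see in a small simulation (the dimensions, step size, and noise level below are illustrative assumptions): for linear regression, the running average of constant-stepsize SGD iterates concentrates around the ordinary least-squares solution, while the last iterate keeps fluctuating at a scale set by the step size.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical well-specified linear model
n, d = 500, 5
X = rng.standard_normal((n, d))
theta_star = rng.standard_normal(d)
y = X @ theta_star + 0.1 * rng.standard_normal(n)

eta = 0.01                                    # constant step size
theta = np.zeros(d)
avg = np.zeros(d)
for t in range(50000):
    i = rng.integers(n)                       # sample one data point
    grad = (X[i] @ theta - y[i]) * X[i]       # stochastic gradient of squared loss
    theta -= eta * grad
    avg += (theta - avg) / (t + 1)            # running average of the iterates

theta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
```

For quadratic losses the stationary distribution of constant-stepsize SGD is centered at the empirical minimizer, so iterate averaging drives `avg` close to `theta_ols` without any step-size decay.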
arXiv Detail & Related papers (2021-03-23T17:15:53Z)
- Understanding Implicit Regularization in Over-Parameterized Single Index Model [55.41685740015095]
We design regularization-free algorithms for the high-dimensional single index model.
We provide theoretical guarantees for the induced implicit regularization phenomenon.
arXiv Detail & Related papers (2020-07-16T13:27:47Z)
- Asymptotic Errors for Teacher-Student Convex Generalized Linear Models (or: How to Prove Kabashima's Replica Formula) [23.15629681360836]
We prove an analytical formula for the reconstruction performance of convex generalized linear models.
We show that an analytical continuation may be carried out to extend the result to convex (non-strongly) problems.
We illustrate our claim with numerical examples on mainstream learning methods.
arXiv Detail & Related papers (2020-06-11T16:26:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.