Bayesian Analysis for Over-parameterized Linear Model without Sparsity
- URL: http://arxiv.org/abs/2305.15754v2
- Date: Wed, 13 Mar 2024 12:29:18 GMT
- Title: Bayesian Analysis for Over-parameterized Linear Model without Sparsity
- Authors: Tomoya Wakayama, Masaaki Imaizumi
- Abstract summary: This study introduces a Bayesian approach that employs a prior distribution dependent on the eigenvectors of data covariance matrices without inducing parameter sparsity.
We also provide contraction rates of the derived posterior estimation and develop a truncated Gaussian approximation of the posterior distribution.
These findings suggest that Bayesian methods capable of handling data spectra and estimating non-sparse high-dimensional parameters are feasible.
- Score: 8.1585306387285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the field of high-dimensional Bayesian statistics, a plethora of
methodologies have been developed, including various prior distributions that
result in parameter sparsity. However, such priors exhibit limitations in
handling the spectral eigenvector structure of data, rendering estimations less
effective for analyzing the over-parameterized models (high-dimensional linear
models that do not assume sparsity) developed in recent years. This study
introduces a Bayesian approach that employs a prior distribution dependent on
the eigenvectors of data covariance matrices without inducing parameter
sparsity. We also provide contraction rates of the derived posterior estimation
and develop a truncated Gaussian approximation of the posterior distribution.
The former demonstrates the efficiency of posterior estimation, whereas the
latter facilitates the uncertainty quantification of parameters via a
Bernstein--von Mises-type approach. These findings suggest that Bayesian
methods capable of handling data spectra and estimating non-sparse
high-dimensional parameters are feasible.
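To make the construction concrete, here is a minimal numerical sketch of the general idea, not the authors' exact prior: the variance floor, the noise level, and all variable names are illustrative assumptions. It places a Gaussian prior whose covariance is diagonal in the eigenbasis of the sample covariance of the design, so prior mass follows the data spectrum rather than inducing sparsity, and then computes the resulting conjugate posterior.

```python
# A minimal sketch of the general idea, not the paper's exact construction:
# a Gaussian prior whose covariance is diagonal in the eigenbasis of the
# sample covariance of X, so prior mass follows the data spectrum rather
# than inducing sparsity. All scales below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 200                               # over-parameterized: d > n
X = rng.normal(size=(n, d))
beta_true = rng.normal(size=d) / np.sqrt(d)  # dense (non-sparse) parameter
sigma2 = 0.25                                # noise variance, assumed known here
y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)

# Eigen-decompose the sample covariance and align prior variances with it.
eigvals, V = np.linalg.eigh(X.T @ X / n)
tau2 = eigvals + 1e-3                        # small floor keeps the prior proper
prior_prec = V @ np.diag(1.0 / tau2) @ V.T   # precision of beta ~ N(0, V diag(tau2) V^T)

# Conjugate Gaussian posterior for the linear model y = X beta + noise.
post_prec = X.T @ X / sigma2 + prior_prec
post_cov = np.linalg.inv(post_prec)
post_mean = post_cov @ (X.T @ y) / sigma2

print("in-sample RMSE of the posterior mean:",
      np.linalg.norm(X @ post_mean - y) / np.sqrt(n))
```

Since everything in this toy example is Gaussian, the posterior is exact; the paper's contraction rates and truncated Gaussian approximation address the rigorous, more general version of this picture.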
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how the maps that implement this inference can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z)
- Multivariate root-n-consistent smoothing parameter free matching estimators and estimators of inverse density weighted expectations [51.000851088730684]
We develop novel modifications of nearest-neighbor and matching estimators that converge at the parametric $\sqrt{n}$-rate.
We stress that our estimators do not involve nonparametric function estimators and, in particular, do not rely on sample-size-dependent smoothing parameters.
arXiv Detail & Related papers (2024-07-11T13:28:34Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Bayesian Inference for Consistent Predictions in Overparameterized Nonlinear Regression [0.0]
This study explores the predictive properties of over-parameterized nonlinear regression within the Bayesian framework.
Posterior contraction is established for generalized linear and single-neuron models with Lipschitz continuous activation functions.
The proposed method was validated via numerical simulations and a real data application.
arXiv Detail & Related papers (2024-04-06T04:22:48Z)
- Spectral Estimators for Structured Generalized Linear Models via Approximate Message Passing [28.91482208876914]
We consider the problem of parameter estimation in a high-dimensional generalized linear model.
Despite their wide use, a rigorous performance characterization, as well as a principled way to preprocess the data, is available only for unstructured designs.
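For intuition, a minimal sketch of a classical spectral estimator follows; the noisy phase-retrieval link, the preprocessing T(y) = y - mean(y), and the i.i.d. Gaussian (unstructured) design are illustrative assumptions, not the structured setting the paper analyzes.

```python
# A minimal sketch of a classical spectral estimator for a GLM. The link
# function (noisy phase retrieval), the preprocessing T, and the Gaussian
# design are illustrative assumptions, not the paper's structured setting.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 100
X = rng.normal(size=(n, d))                  # i.i.d. Gaussian (unstructured) design
beta = rng.normal(size=d)
beta /= np.linalg.norm(beta)
y = (X @ beta) ** 2 + 0.1 * rng.normal(size=n)

T = y - np.mean(y)                           # a simple choice of preprocessing
D = (X * T[:, None]).T @ X / n               # D = (1/n) sum_i T(y_i) x_i x_i^T
_, U = np.linalg.eigh(D)
beta_hat = U[:, -1]                          # principal eigenvector as estimate

print("|cosine similarity| with the true parameter:", abs(beta_hat @ beta))
```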
arXiv Detail & Related papers (2023-08-28T11:49:23Z)
- Sparse Horseshoe Estimation via Expectation-Maximisation [2.1485350418225244]
We propose a novel expectation-maximisation (EM) procedure for computing the MAP estimates of the parameters.
A particular strength of our approach is that the M-step depends only on the form of the prior and is independent of the form of the likelihood.
In experiments on simulated and real data, our approach performs comparably to, or better than, state-of-the-art sparse estimation methods.
arXiv Detail & Related papers (2022-11-07T00:43:26Z)
- Sparse high-dimensional linear regression with a partitioned empirical Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Prior assumptions on the parameters are kept minimal through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z)
- A likelihood approach to nonparametric estimation of a singular distribution using deep generative models [4.329951775163721]
We investigate a likelihood approach to nonparametric estimation of a singular distribution using deep generative models.
We prove that a novel and effective solution exists by perturbing the data with instance noise.
We also characterize the class of distributions that can be efficiently estimated via deep generative models.
arXiv Detail & Related papers (2021-05-09T23:13:58Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be generated efficiently using the proposed layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Nonparametric Score Estimators [49.42469547970041]
Estimating the score from a set of samples generated by an unknown distribution is a fundamental task in inference and learning of probabilistic models.
We provide a unifying view of these estimators under the framework of regularized nonparametric regression.
We propose score estimators based on iterative regularization that enjoy computational benefits from curl-free kernels and fast convergence.
arXiv Detail & Related papers (2020-05-20T15:01:03Z)
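As one concrete instance of kernel-based score estimation in this family, the sketch below implements a Stein-type estimator with an RBF kernel; the fixed bandwidth and plain ridge regularization are illustrative assumptions standing in for the curl-free kernels and iterative regularization the paper proposes.

```python
# A minimal sketch of a Stein-type kernel score estimator with an RBF kernel
# and plain ridge regularization. Bandwidth h and regularizer eta are
# illustrative assumptions; the paper studies curl-free kernels and
# iterative regularization instead.
import numpy as np

def stein_score_estimator(X, h=1.0, eta=0.1):
    """Estimate g_i = grad log p(x_i) at the sample points X (shape n x d)."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]          # diff[i, j] = x_i - x_j
    K = np.exp(-np.sum(diff ** 2, axis=2) / (2 * h ** 2))
    # B[j, d] = sum_i d/dx_{i,d} k(x_i, x_j) for the RBF kernel.
    B = -np.einsum('ij,ijd->jd', K, diff) / h ** 2
    return -np.linalg.solve(K + eta * np.eye(n), B)

# Sanity check on a standard Gaussian, whose true score is -x.
rng = np.random.default_rng(2)
X = rng.normal(size=(500, 2))
G = stein_score_estimator(X)
print("mean abs error vs the true score:", np.abs(G + X).mean())
```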
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.