Variable fusion for Bayesian linear regression via spike-and-slab priors
- URL: http://arxiv.org/abs/2003.13299v3
- Date: Wed, 2 Dec 2020 09:06:39 GMT
- Title: Variable fusion for Bayesian linear regression via spike-and-slab priors
- Authors: Shengyi Wu, Kaito Shimamura, Kohei Yoshikawa, Kazuaki Murayama,
Shuichi Kawano
- Abstract summary: This paper presents a novel variable fusion method in terms of Bayesian linear regression models.
A spike-and-slab prior is tailored to perform variable fusion.
Simulation studies and a real data analysis show that our proposed method achieves better performance than previous methods.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In linear regression models, fusion of coefficients is used to identify
predictors having similar relationships with a response. This is called
variable fusion. This paper presents a novel variable fusion method in terms of
Bayesian linear regression models. We focus on hierarchical Bayesian models
based on a spike-and-slab prior approach. A spike-and-slab prior is tailored to
perform variable fusion. To obtain parameter estimates, we develop a Gibbs
sampler. Simulation studies and a real data analysis
show that our proposed method achieves better performance than previous
methods.
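The abstract describes a spike-and-slab prior tailored to variable fusion, with a Gibbs sampler for inference. As a minimal sketch, not the authors' exact model: one common way to encode fusion is to reparameterize into successive differences θ_1 = β_1, θ_j = β_j − β_{j−1}, and place a standard SSVS-style spike-and-slab prior on the differences, so that a "spike" on θ_j fuses adjacent coefficients. All hyperparameter values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data with a fusion structure: adjacent coefficients share values.
n, p = 100, 6
beta_true = np.array([2.0, 2.0, 2.0, 0.0, 0.0, -1.5])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(scale=0.5, size=n)

# Reparameterize: beta = L @ theta, with theta_1 = beta_1, theta_j = beta_j - beta_{j-1}.
L = np.tril(np.ones((p, p)))
Z = X @ L  # design matrix in theta-space

# Spike-and-slab (SSVS-style) hyperparameters -- illustrative values.
tau_spike, tau_slab = 0.01, 2.0   # prior sds of spike / slab components
w = 0.5                           # prior inclusion probability
a0, b0 = 2.0, 2.0                 # inverse-gamma prior on sigma^2

n_iter = 2000
theta = np.zeros(p)
gamma = np.ones(p, dtype=int)
sigma2 = 1.0
draws = np.zeros((n_iter, p))

for it in range(n_iter):
    # 1) theta | gamma, sigma2: conjugate Gaussian full conditional.
    tau = np.where(gamma == 1, tau_slab, tau_spike)
    prec = Z.T @ Z / sigma2 + np.diag(1.0 / tau**2)
    cov = np.linalg.inv(prec)
    cov = (cov + cov.T) / 2  # symmetrize for numerical safety
    mean = cov @ Z.T @ y / sigma2
    theta = rng.multivariate_normal(mean, cov)
    # 2) gamma_j | theta: Bernoulli with spike/slab density ratio.
    for j in range(p):
        log_slab = -0.5 * (theta[j] / tau_slab) ** 2 - np.log(tau_slab)
        log_spike = -0.5 * (theta[j] / tau_spike) ** 2 - np.log(tau_spike)
        num = w * np.exp(log_slab)
        gamma[j] = rng.random() < num / (num + (1 - w) * np.exp(log_spike))
    # 3) sigma2 | theta: inverse-gamma full conditional.
    resid = y - Z @ theta
    sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + resid @ resid / 2))
    draws[it] = L @ theta  # map back to beta-space

beta_hat = draws[n_iter // 2:].mean(axis=0)  # posterior mean, second half of chain
```

The posterior mean should recover the fused groups: the spike shrinks near-zero differences toward exactly zero, pulling adjacent coefficients together.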
Related papers
- von Mises Quasi-Processes for Bayesian Circular Regression [57.88921637944379]
We explore a family of expressive and interpretable distributions over circle-valued random functions.
The resulting probability model has connections with continuous spin models in statistical physics.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Markov Chain Monte Carlo sampling.
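The quasi-process construction above is beyond a short sketch, but the basic setting, regression with a circle-valued response, can be illustrated with a much simpler model: a von Mises likelihood with the classic Fisher-Lee arctangent link. This is an assumption-laden stand-in, not the paper's method.

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(2)

# Simulate angles whose mean direction depends on a covariate via the
# Fisher-Lee link: mu(x) = b0 + 2*arctan(b1 * x).
n = 200
x = rng.normal(size=n)
mu = 0.5 + 2.0 * np.arctan(1.5 * x)
theta = stats.vonmises.rvs(kappa=4.0, loc=mu, random_state=rng)

def negloglik(params):
    # Negative log-likelihood of the von Mises circular regression.
    b0, b1, log_kappa = params
    m = b0 + 2.0 * np.arctan(b1 * x)
    return -stats.vonmises.logpdf(theta, kappa=np.exp(log_kappa), loc=m).sum()

fit = optimize.minimize(negloglik, x0=[0.0, 1.0, 0.0])
b0_hat, b1_hat, log_kappa_hat = fit.x
```

With enough data and a moderate concentration parameter, the maximum-likelihood fit recovers the link coefficients; the paper's contribution is a far more flexible nonparametric prior over such circle-valued regression functions.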
arXiv Detail & Related papers (2024-06-19T01:57:21Z)
- Overparameterized Multiple Linear Regression as Hyper-Curve Fitting [0.0]
It is proven that a linear model will produce exact predictions even in the presence of nonlinear dependencies that violate the model assumptions.
The hyper-curve approach is especially suited for the regularization of problems with noise in predictor variables and can be used to remove noisy and "improper" predictors from the model.
arXiv Detail & Related papers (2024-04-11T15:43:11Z)
- Diffusion posterior sampling for simulation-based inference in tall data settings [53.17563688225137]
Simulation-based inference (SBI) is capable of approximating the posterior distribution that relates input parameters to a given observation.
In this work, we consider a tall data extension in which multiple observations are available to better infer the parameters of the model.
We compare our method to recently proposed competing approaches on various numerical experiments and demonstrate its superiority in terms of numerical stability and computational cost.
arXiv Detail & Related papers (2024-04-11T09:23:36Z)
- Fusion of Gaussian Processes Predictions with Monte Carlo Sampling [61.31380086717422]
In science and engineering, we often work with models designed for accurate prediction of variables of interest.
Recognizing that these models are approximations of reality, it becomes desirable to apply multiple models to the same data and integrate their outcomes.
arXiv Detail & Related papers (2024-03-03T04:21:21Z)
- Monte Carlo inference for semiparametric Bayesian regression [5.488491124945426]
This paper introduces a simple, general, and efficient strategy for joint posterior inference of an unknown transformation and all regression model parameters.
It delivers (1) joint posterior consistency under general conditions, including multiple model misspecifications, and (2) efficient Monte Carlo (not Markov chain Monte Carlo) inference for the transformation and all parameters for important special cases.
arXiv Detail & Related papers (2023-06-08T18:42:42Z) - Prior Density Learning in Variational Bayesian Phylogenetic Parameters
Inference [1.03590082373586]
We propose an approach to relax the rigidity of the prior densities by learning their parameters using a gradient-based method and a neural network-based parameterization.
Simulation results show that the approach is effective at estimating branch lengths and evolutionary model parameters.
arXiv Detail & Related papers (2023-02-06T01:29:15Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions are imposed on the parameters through plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - A flexible empirical Bayes approach to multiple linear regression and connections with penalized regression [8.663322701649454]
We introduce a new empirical Bayes approach for large-scale multiple linear regression.
Our approach combines two key ideas: the use of flexible "adaptive shrinkage" priors and variational approximations.
We show that the posterior mean from our method solves a penalized regression problem.
arXiv Detail & Related papers (2022-08-23T12:42:57Z) - Time varying regression with hidden linear dynamics [74.9914602730208]
We revisit a model for time-varying linear regression that assumes the unknown parameters evolve according to a linear dynamical system.
Counterintuitively, we show that when the underlying dynamics are stable the parameters of this model can be estimated from data by combining just two ordinary least squares estimates.
arXiv Detail & Related papers (2021-12-29T23:37:06Z) - Conjugate priors for count and rounded data regression [0.0]
We introduce conjugate priors that enable closed-form posterior inference.
Key posterior and predictive functionals are computable analytically or via direct Monte Carlo simulation.
These tools are broadly useful for linear regression, nonlinear models via basis expansions, and model and variable selection.
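The paper's conjugate priors target count and rounded data, but the core idea, closed-form posteriors that need only direct Monte Carlo rather than MCMC, is easiest to see in the textbook Normal-Inverse-Gamma conjugate model for Gaussian linear regression. The prior values below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated Gaussian linear regression data.
n, p = 50, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.3, size=n)

# Normal-Inverse-Gamma prior: beta | sigma2 ~ N(m0, sigma2*V0), sigma2 ~ IG(a0, b0).
m0 = np.zeros(p)
V0 = 10.0 * np.eye(p)
a0, b0 = 2.0, 1.0

# Closed-form conjugate posterior updates (no MCMC required).
V0_inv = np.linalg.inv(V0)
Vn = np.linalg.inv(V0_inv + X.T @ X)
mn = Vn @ (V0_inv @ m0 + X.T @ y)
an = a0 + n / 2
bn = b0 + 0.5 * (y @ y + m0 @ V0_inv @ m0 - mn @ np.linalg.inv(Vn) @ mn)

# Direct (independent) Monte Carlo draws from the exact joint posterior.
sigma2_draws = 1.0 / rng.gamma(an, 1.0 / bn, size=5000)
beta_draws = np.array([rng.multivariate_normal(mn, s2 * Vn)
                       for s2 in sigma2_draws])
```

Because the posterior is available exactly, every functional (means, intervals, predictive quantities) can be computed analytically or by averaging these independent draws, which is the property the paper extends to count and rounded-data likelihoods.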
arXiv Detail & Related papers (2021-10-23T23:26:01Z)
- A Hypergradient Approach to Robust Regression without Correspondence [85.49775273716503]
We consider a variant of the regression problem in which the correspondence between input and output data is not available.
Most existing methods are only applicable when the sample size is small.
We propose a new computational framework -- ROBOT -- for the shuffled regression problem.
arXiv Detail & Related papers (2020-11-30T21:47:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all information) and is not responsible for any consequences of its use.