A Two-step Metropolis Hastings Method for Bayesian Empirical Likelihood
Computation with Application to Bayesian Model Selection
- URL: http://arxiv.org/abs/2209.01269v1
- Date: Fri, 2 Sep 2022 20:40:21 GMT
- Title: A Two-step Metropolis Hastings Method for Bayesian Empirical Likelihood
Computation with Application to Bayesian Model Selection
- Authors: Sanjay Chaudhuri and Teng Yin
- Abstract summary: We propose a two-step Metropolis-Hastings algorithm to sample from Bayesian empirical likelihood (BayesEL) posteriors.
We also discuss Bayesian model selection using empirical likelihood and extend our two-step algorithm to a reversible jump Markov chain Monte Carlo procedure to sample from the resulting posterior.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent times, empirical likelihood has been widely applied under the
Bayesian framework. Markov chain Monte Carlo (MCMC) methods are frequently
employed to sample from the posterior distribution of the parameters of
interest. However, the complex and often non-convex nature of the likelihood
support creates serious obstacles to choosing an appropriate MCMC algorithm.
Such difficulties have limited the use of Bayesian empirical likelihood
(BayesEL) based methods in many applications. In this article, we propose a
two-step Metropolis-Hastings algorithm to sample from the BayesEL posteriors.
Our proposal is specified
hierarchically, where the estimating equations determining the empirical
likelihood are used to propose values of a set of parameters depending on the
proposed values of the remaining parameters. Furthermore, we discuss Bayesian
model selection using empirical likelihood and extend our two-step
Metropolis-Hastings algorithm to a reversible jump Markov chain Monte Carlo
procedure to
sample from the resulting posterior. Finally, several applications of our
proposed methods are presented.
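The hierarchical two-step proposal can be sketched in a toy setting. This is our own illustration, not the authors' algorithm: the target is a correlated bivariate normal standing in for a BayesEL posterior, and the conditional mean RHO * prop1 is a hypothetical stand-in for using the estimating equations to guide the second block of parameters.

```python
import numpy as np

# Toy sketch only (not the authors' implementation): a two-step
# Metropolis-Hastings sampler in which the parameter vector is split into
# (theta1, theta2) and theta2 is proposed conditionally on the proposed
# theta1, mimicking the hierarchical proposal described in the abstract.
rng = np.random.default_rng(0)
RHO = 0.8  # correlation of the toy bivariate-normal target

def log_target(theta):
    # Unnormalized log-density of the correlated bivariate normal.
    x, y = theta
    return -(x**2 - 2 * RHO * x * y + y**2) / (2 * (1 - RHO**2))

def two_step_mh(n_iter=20000, step1=1.0, step2=1.0):
    theta = np.zeros(2)
    samples = np.empty((n_iter, 2))
    for i in range(n_iter):
        # Step 1: symmetric random-walk proposal for the first block.
        prop1 = theta[0] + step1 * rng.normal()
        # Step 2: propose the second block conditionally on the first.
        mean_fwd = RHO * prop1
        prop2 = mean_fwd + step2 * rng.normal()
        prop = np.array([prop1, prop2])
        # The asymmetric second step needs a proposal-density correction
        # in the acceptance ratio to keep the chain reversible.
        mean_bwd = RHO * theta[0]
        log_q_fwd = -0.5 * ((prop2 - mean_fwd) / step2) ** 2
        log_q_bwd = -0.5 * ((theta[1] - mean_bwd) / step2) ** 2
        log_alpha = (log_target(prop) - log_target(theta)
                     + log_q_bwd - log_q_fwd)
        if np.log(rng.uniform()) < log_alpha:
            theta = prop
        samples[i] = theta
    return samples

samples = two_step_mh()
# Empirical correlation after burn-in; expected near the target value 0.8.
print(np.corrcoef(samples[5000:].T)[0, 1])
```

The proposal-density correction (log_q_bwd - log_q_fwd) is the essential ingredient: because the second block is proposed conditionally, the proposal is not symmetric, and the correction restores detailed balance.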
Related papers
- Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts [64.34482582690927]
We provide an efficient and principled method for sampling from a sequence of annealed, geometric-averaged, or product distributions derived from pretrained score-based models.
We propose Sequential Monte Carlo (SMC) resampling algorithms that leverage inference-time scaling to improve sampling quality.
arXiv Detail & Related papers (2025-03-04T17:46:51Z) - Optimality in importance sampling: a gentle survey [50.79602839359522]
The performance of Monte Carlo sampling methods relies on the crucial choice of a proposal density.
This work is an exhaustive review of the concept of optimality in importance sampling.
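The dependence on the proposal can be seen in a minimal self-normalized importance-sampling example. This is our own illustration, not taken from the survey: a standard-normal target, a N(0, scale^2) proposal, and the estimand E[X^2] = 1.

```python
import numpy as np

# Our own minimal illustration (not from the survey): self-normalized
# importance sampling for E[X^2] under a standard normal target, with a
# N(0, scale^2) proposal.
rng = np.random.default_rng(1)

def is_estimate(scale, n=200000):
    x = scale * rng.normal(size=n)
    # log weights: log p(x) - log q(x), with common constants dropped.
    log_w = -0.5 * x**2 + 0.5 * (x / scale) ** 2 + np.log(scale)
    w = np.exp(log_w - log_w.max())   # stabilize before exponentiating
    return np.sum(w * x**2) / np.sum(w)

# A wider-than-target proposal keeps the weights bounded; the estimate
# should land close to the true value E[X^2] = 1.
print(is_estimate(2.0))
```

With scale > 1 the weights are bounded and the estimator is well behaved; with scale < 1 the proposal is lighter-tailed than the target, the weights are unbounded, and the variance can blow up, which is exactly the kind of failure the optimality literature addresses.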
arXiv Detail & Related papers (2025-02-11T09:23:26Z) - Semiparametric Bayesian Difference-in-Differences [2.458652618559425]
We study semiparametric Bayesian inference for the average treatment effect on the treated (ATT) within the difference-in-differences (DiD) research design.
We propose two new Bayesian methods with frequentist validity.
arXiv Detail & Related papers (2024-12-05T20:41:36Z) - Bayesian Circular Regression with von Mises Quasi-Processes [57.88921637944379]
In this work we explore a family of expressive and interpretable distributions over circle-valued random functions.
For posterior inference, we introduce a new Stratonovich-like augmentation that lends itself to fast Gibbs sampling.
We present experiments applying this model to the prediction of wind directions and the percentage of the running gait cycle as a function of joint angles.
arXiv Detail & Related papers (2024-06-19T01:57:21Z) - Markov chain Monte Carlo without evaluating the target: an auxiliary variable approach [9.426953273977496]
In sampling tasks, it is common for target distributions to be known up to a normalizing constant.
In many situations, even evaluating the unnormalized distribution can be costly or infeasible.
We develop a novel framework that allows the use of auxiliary variables in both the proposal and the acceptance-rejection step.
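For contrast, the classical baseline can be sketched in a few lines. This is our own sketch of random-walk Metropolis, which needs only ratios of the unnormalized target; the paper's auxiliary-variable framework goes further and avoids even these unnormalized evaluations, and that algorithm is not reproduced here.

```python
import numpy as np

# Classical baseline sketch (ours): random-walk Metropolis uses only
# differences of the unnormalized log-target, so the normalizing constant
# is never computed.
rng = np.random.default_rng(2)

def log_unnorm(x):
    return -0.5 * x**2   # standard normal target, up to a constant

x, draws = 0.0, []
for _ in range(20000):
    prop = x + rng.normal()
    if np.log(rng.uniform()) < log_unnorm(prop) - log_unnorm(x):
        x = prop
    draws.append(x)
# Sample mean and variance should approximate the N(0, 1) target.
print(np.mean(draws), np.var(draws))
```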
arXiv Detail & Related papers (2024-06-07T20:06:23Z) - Fast post-process Bayesian inference with Variational Sparse Bayesian Quadrature [13.36200518068162]
We propose the framework of post-process Bayesian inference as a means to obtain a quick posterior approximation from existing target density evaluations.
Within this framework, we introduce Variational Sparse Bayesian Quadrature (VSBQ), a method for post-process approximate inference for models with black-box and potentially noisy likelihoods.
We validate our method on challenging synthetic scenarios and real-world applications from computational neuroscience.
arXiv Detail & Related papers (2023-03-09T13:58:35Z) - Sparse high-dimensional linear regression with a partitioned empirical
Bayes ECM algorithm [62.997667081978825]
We propose a computationally efficient and powerful Bayesian approach for sparse high-dimensional linear regression.
Minimal prior assumptions on the parameters are used through the use of plug-in empirical Bayes estimates.
The proposed approach is implemented in the R package probe.
arXiv Detail & Related papers (2022-09-16T19:15:50Z) - elhmc: An R Package for Hamiltonian Monte Carlo Sampling in Bayesian
Empirical Likelihood [0.0]
We describe a package for sampling from an empirical likelihood-based posterior using a Hamiltonian Monte Carlo method.
An MCMC sample is drawn from the BayesEL posterior of the parameters, with various details supplied by the user.
arXiv Detail & Related papers (2022-09-02T22:22:16Z) - Langevin Monte Carlo for Contextual Bandits [72.00524614312002]
Langevin Monte Carlo Thompson Sampling (LMC-TS) is proposed to directly sample from the posterior distribution in contextual bandits.
We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits.
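The update underlying Langevin Monte Carlo is compact enough to sketch. This is our own hedged illustration, not LMC-TS itself: an unadjusted Langevin chain targeting a standard normal, whereas LMC-TS applies this style of gradient-guided update to a contextual-bandit posterior.

```python
import numpy as np

# Hedged sketch (ours, not LMC-TS): the unadjusted Langevin update,
# x <- x + (step/2) * grad log p(x) + sqrt(step) * noise,
# targeting a standard normal.
rng = np.random.default_rng(3)
step = 0.1
x = 5.0                      # deliberately poor starting point
xs = []
for _ in range(20000):
    grad_log_p = -x          # gradient of the log N(0, 1) density
    x = x + 0.5 * step * grad_log_p + np.sqrt(step) * rng.normal()
    xs.append(x)
# After burn-in the draws approximate N(0, 1), up to discretization bias.
print(np.mean(xs[1000:]), np.var(xs[1000:]))
```

Without a Metropolis correction the chain has a small bias that shrinks with the step size; Thompson-sampling variants accept this bias in exchange for cheap, gradient-based posterior draws.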
arXiv Detail & Related papers (2022-06-22T17:58:23Z) - Diversified Sampling for Batched Bayesian Optimization with
Determinantal Point Processes [48.09817971375995]
We introduce DPP-Batch Bayesian Optimization (DPP-BBO), a universal framework for inducing batch diversity in sampling based BO.
We illustrate this framework by formulating DPP-Thompson Sampling (DPP-TS) as a variant of the popular Thompson Sampling (TS) algorithm and introducing a Markov Chain Monte Carlo procedure to sample.
arXiv Detail & Related papers (2021-10-22T08:51:28Z) - Bayesian decision-making under misspecified priors with applications to
meta-learning [64.38020203019013]
Thompson sampling and other sequential decision-making algorithms are popular approaches to tackle explore/exploit trade-offs in contextual bandits.
We show that performance degrades gracefully with misspecified priors.
arXiv Detail & Related papers (2021-07-03T23:17:26Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - Uncertainty-Aware Abstractive Summarization [3.1423034006764965]
We propose a novel approach to summarization based on Bayesian deep learning.
We show that our variational equivalents of BART and PEGASUS can outperform their deterministic counterparts on multiple benchmark datasets.
Having a reliable uncertainty measure, we can improve the experience of the end user by filtering generated summaries of high uncertainty.
arXiv Detail & Related papers (2021-05-21T06:36:40Z) - Approximate Bayesian inference from noisy likelihoods with Gaussian
process emulated MCMC [0.24275655667345403]
We model the log-likelihood function using a Gaussian process (GP)
The main methodological innovation is to apply this model to emulate the progression that an exact Metropolis-Hastings (MH) sampler would take.
The resulting approximate sampler is conceptually simple and sample-efficient.
arXiv Detail & Related papers (2021-04-08T17:38:02Z) - Pathwise Conditioning of Gaussian Processes [72.61885354624604]
Conventional approaches for simulating Gaussian process posteriors view samples as draws from marginal distributions of process values at finite sets of input locations.
This distribution-centric characterization leads to generative strategies that scale cubically in the size of the desired random vector.
We show how this pathwise interpretation of conditioning gives rise to a general family of approximations that lend themselves to efficiently sampling Gaussian process posteriors.
arXiv Detail & Related papers (2020-11-08T17:09:37Z) - Bayesian learning of orthogonal embeddings for multi-fidelity Gaussian
Processes [3.564709604457361]
"Projection" mapping consists of an orthonormal matrix that is considered a priori unknown and needs to be inferred jointly with the GP parameters.
We extend the proposed framework to multi-fidelity models using GPs including the scenarios of training multiple outputs together.
The benefits of our proposed framework, are illustrated on the computationally challenging three-dimensional aerodynamic optimization of a last-stage blade for an industrial gas turbine.
arXiv Detail & Related papers (2020-08-05T22:28:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.