A Parallel Evolutionary Multiple-Try Metropolis Markov Chain Monte Carlo
Algorithm for Sampling Spatial Partitions
- URL: http://arxiv.org/abs/2007.11461v1
- Date: Wed, 22 Jul 2020 14:28:44 GMT
- Title: A Parallel Evolutionary Multiple-Try Metropolis Markov Chain Monte Carlo
Algorithm for Sampling Spatial Partitions
- Authors: Wendy K. Tam Cho and Yan Y. Liu
- Abstract summary: We develop an Evolutionary Markov Chain Monte Carlo (EMCMC) algorithm for sampling spatial partitions that lie within a large and complex spatial state space.
Our algorithm combines the advantages of evolutionary algorithms (EAs) as optimization heuristics for state space traversal with the theoretical convergence properties of Markov Chain Monte Carlo algorithms for sampling from unknown distributions.
We further expand the reach of our EMCMC algorithm by harnessing the computational power afforded by massively parallel architecture.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an Evolutionary Markov Chain Monte Carlo (EMCMC) algorithm for
sampling spatial partitions that lie within a large and complex spatial state
space. Our algorithm combines the advantages of evolutionary algorithms (EAs)
as optimization heuristics for state space traversal and the theoretical
convergence properties of Markov Chain Monte Carlo algorithms for sampling from
unknown distributions. Local optimality information that is identified via a
directed search by our optimization heuristic is used to adaptively update a
Markov chain in a promising direction within the framework of a Multiple-Try
Metropolis Markov Chain model that incorporates a generalized
Metropolis-Hastings ratio. We further expand the reach of our EMCMC algorithm by
harnessing the computational power afforded by massively parallel architecture
through the integration of a parallel EA framework that guides Markov chains
running in parallel.
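To make the Multiple-Try Metropolis (MTM) building block concrete, here is a minimal sketch of one MTM step on a simple continuous target. The paper applies MTM to spatial partitions with EA-guided candidate generation and a generalized Metropolis-Hastings ratio; this sketch uses a plain symmetric Gaussian proposal (so the selection weights reduce to the unnormalized target density) and illustrates the mechanism only, not the authors' algorithm. The target and all parameter values are hypothetical.

```python
import numpy as np

def log_target(x):
    # Hypothetical unnormalized target: a 1-D mixture of two Gaussians.
    return np.logaddexp(-0.5 * (x - 2.0) ** 2, -0.5 * (x + 2.0) ** 2)

def mtm_step(x, k=5, step=1.0, rng=None):
    """One Multiple-Try Metropolis step with a symmetric Gaussian proposal."""
    rng = rng or np.random.default_rng()
    ys = x + step * rng.standard_normal(k)        # 1) draw k candidates around x
    wy = np.exp(log_target(ys))                   # weights w(y_j, x) = pi(y_j)
    y = rng.choice(ys, p=wy / wy.sum())           # 2) select one candidate by weight
    xs = np.append(y + step * rng.standard_normal(k - 1), x)  # 3) reference set around y
    wx = np.exp(log_target(xs))
    accept = min(1.0, wy.sum() / wx.sum())        # 4) acceptance ratio of summed weights
    return y if rng.random() < accept else x

# Usage: run a short chain from x = 0 and inspect the samples.
rng = np.random.default_rng(0)
x, chain = 0.0, []
for _ in range(5000):
    x = mtm_step(x, rng=rng)
    chain.append(x)
print(np.mean(chain), np.std(chain))
```

In the general MTM formulation the weights also involve the proposal densities, which is how non-symmetric, heuristic-driven candidate generation can be corrected for while preserving the target distribution.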
Related papers
- Ant Colony Sampling with GFlowNets for Combinatorial Optimization [68.84985459701007]
Generative Flow Ant Colony Sampler (GFACS) is a novel meta-heuristic method that hierarchically combines amortized inference and parallel search.
Our method first leverages Generative Flow Networks (GFlowNets) to amortize a multi-modal prior distribution over a solution space.
arXiv Detail & Related papers (2024-03-11T16:26:06Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
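For context, the splitting scheme that this distributed sampler builds on is the alternating direction method of multipliers (ADMM). The standard scaled-form ADMM updates for minimizing f(x) + g(z) subject to Ax + Bz = c are shown below as background only, not as the paper's sampling scheme:

\[
\begin{aligned}
x^{k+1} &= \arg\min_x \; f(x) + \tfrac{\rho}{2}\,\lVert Ax + Bz^k - c + u^k\rVert_2^2,\\
z^{k+1} &= \arg\min_z \; g(z) + \tfrac{\rho}{2}\,\lVert Ax^{k+1} + Bz - c + u^k\rVert_2^2,\\
u^{k+1} &= u^k + Ax^{k+1} + Bz^{k+1} - c.
\end{aligned}
\]

In a typical distributed setup, each worker owns one block of the objective and only consensus variables and scaled dual variables u are exchanged between workers.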
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Improving sample efficiency of high dimensional Bayesian optimization with MCMC [7.241485121318798]
We propose a new method based on Markov Chain Monte Carlo to efficiently sample from an approximated posterior.
We show experimentally that both the Metropolis-Hastings and the Langevin Dynamics version of our algorithm outperform state-of-the-art methods in high-dimensional sequential optimization and reinforcement learning benchmarks.
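As a reference point for the Langevin Dynamics version mentioned above, a minimal unadjusted Langevin (ULA) update for a differentiable log-density looks as follows. This is a generic sketch, not the paper's sampler, and `grad_log_p` is a placeholder for the score of the approximated posterior.

```python
import numpy as np

def ula_step(x, grad_log_p, eps, rng):
    # Unadjusted Langevin update: gradient drift plus Gaussian noise.
    # Adding a Metropolis-Hastings accept/reject correction to this
    # proposal gives MALA.
    return x + 0.5 * eps * grad_log_p(x) + np.sqrt(eps) * rng.standard_normal(x.shape)

# Usage on a standard Gaussian target, where grad log p(x) = -x.
rng = np.random.default_rng(0)
x = np.zeros(3)
for _ in range(1000):
    x = ula_step(x, lambda v: -v, eps=0.1, rng=rng)
print(x)
```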
arXiv Detail & Related papers (2024-01-05T05:56:42Z)
- Quick Adaptive Ternary Segmentation: An Efficient Decoding Procedure For Hidden Markov Models [70.26374282390401]
Decoding the original signal (i.e., hidden chain) from the noisy observations is one of the main goals in nearly all HMM-based data analyses.
We present Quick Adaptive Ternary Segmentation (QATS), a divide-and-conquer procedure that decodes the hidden sequence in polylogarithmic computational complexity.
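For contrast with the polylogarithmic claim above, the classical baseline is Viterbi decoding, which costs O(T K^2) for T observations and K hidden states. A minimal Viterbi decoder is sketched below; this is the standard algorithm, not QATS, and the transition matrix and per-step emission log-likelihoods are assumed to be given.

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit):
    """Classical Viterbi decoding of the most likely hidden sequence.

    log_init:  (K,)   log initial state probabilities
    log_trans: (K, K) log transition probabilities, row = from-state
    log_emit:  (T, K) log emission likelihood of each observation per state
    """
    T, K = log_emit.shape
    score = log_init + log_emit[0]            # best log-prob ending in each state
    back = np.zeros((T, K), dtype=int)        # backpointers
    for t in range(1, T):
        cand = score[:, None] + log_trans     # cand[i, j]: come from i, move to j
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(K)] + log_emit[t]
    # Trace back the best path.
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):
        path[t - 1] = back[t, path[t]]
    return path
```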
arXiv Detail & Related papers (2023-05-29T19:37:48Z)
- Importance is Important: Generalized Markov Chain Importance Sampling Methods [4.611170084430822]
We show that for any multiple-try Metropolis algorithm, one can always accept the proposal and evaluate the importance weight that is needed to correct for the bias without extra computational cost.
We propose an alternative MCMC sampler on discrete spaces that is also outside the Metropolis-Hastings framework.
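The bias correction described above is in the spirit of importance weighting: if every selected proposal is accepted, downstream estimates must reweight the resulting states. A generic self-normalized form (not the paper's specific weight construction) is

\[
\widehat{\mathbb{E}}_\pi[f] \;=\; \frac{\sum_{t=1}^{T} w_t\, f(x_t)}{\sum_{t=1}^{T} w_t},
\]

where the x_t are the always-accepted chain states and the w_t are importance weights accounting for the discrepancy between the sampling mechanism and the target \(\pi\).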
arXiv Detail & Related papers (2023-04-13T04:04:09Z)
- Variational Combinatorial Sequential Monte Carlo Methods for Bayesian Phylogenetic Inference [4.339931151475307]
We introduce Variational Combinatorial Sequential Monte Carlo (VCSMC), a powerful framework that establishes variational sequential search to learn distributions over intricate combinatorial structures.
We show that VCSMC and CSMC are efficient and explore higher probability spaces than existing methods on a range of tasks.
arXiv Detail & Related papers (2021-05-31T19:44:24Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
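The first sentence above refers to Hybrid (Hamiltonian) Monte Carlo on continuous spaces. A minimal HMC step with a leapfrog integrator is sketched below as the continuous baseline; this is a generic sketch, not the SurVAE-flow-augmented discrete sampler.

```python
import numpy as np

def leapfrog(x, p, grad_log_p, eps, n_steps):
    # Standard leapfrog integrator used inside Hamiltonian Monte Carlo.
    p = p + 0.5 * eps * grad_log_p(x)
    for _ in range(n_steps - 1):
        x = x + eps * p
        p = p + eps * grad_log_p(x)
    x = x + eps * p
    p = p + 0.5 * eps * grad_log_p(x)
    return x, p

def hmc_step(x, log_p, grad_log_p, eps=0.1, n_steps=20, rng=None):
    rng = rng or np.random.default_rng()
    p0 = rng.standard_normal(x.shape)
    x_new, p_new = leapfrog(x, p0, grad_log_p, eps, n_steps)
    # Metropolis correction on the joint (position, momentum) energy.
    log_accept = (log_p(x_new) - 0.5 * p_new @ p_new) - (log_p(x) - 0.5 * p0 @ p0)
    return x_new if np.log(rng.random()) < log_accept else x

# Usage on a standard Gaussian target: log p(x) = -0.5 * x @ x.
rng = np.random.default_rng(0)
x = np.zeros(2)
for _ in range(1000):
    x = hmc_step(x, lambda v: -0.5 * v @ v, lambda v: -v, rng=rng)
```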
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
- Accelerating MCMC algorithms through Bayesian Deep Networks [7.054093620465401]
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for their versatility in sampling from complicated probability distributions.
As the dimension of the distribution gets larger, the computational costs for a satisfactory exploration of the sampling space become challenging.
We show an alternative way of performing adaptive MCMC, by using the outcome of Bayesian Neural Networks as the initial proposal for the Markov Chain.
arXiv Detail & Related papers (2020-11-29T04:29:00Z)
- A Dynamical Systems Approach for Convergence of the Bayesian EM Algorithm [59.99439951055238]
We show how (discrete-time) Lyapunov stability theory can serve as a powerful tool to aid, or even lead, in the analysis (and potential design) of optimization algorithms that are not necessarily gradient-based.
The particular ML problem this paper focuses on is parameter estimation in an incomplete-data Bayesian framework via the popular optimization algorithm known as maximum a posteriori expectation-maximization (MAP-EM).
We show that fast convergence (linear or quadratic) is achieved, which could have been difficult to unveil without our adopted S&C approach.
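For reference, the MAP-EM iteration analyzed here augments the usual EM objective with the log-prior; a standard way to write the update (independent of this paper's Lyapunov analysis) is

\[
\theta^{(t+1)} \;=\; \arg\max_{\theta}\; \Big\{ \mathbb{E}_{z \sim p(z \mid x,\, \theta^{(t)})}\big[\log p(x, z \mid \theta)\big] \;+\; \log p(\theta) \Big\}.
\]

Viewing this update as a discrete-time dynamical system in \(\theta\) is what makes Lyapunov-style stability arguments applicable to its convergence behavior.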
arXiv Detail & Related papers (2020-06-23T01:34:18Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.