Metropolis Monte Carlo sampling: convergence, localization transition and optimality
- URL: http://arxiv.org/abs/2207.10488v4
- Date: Sat, 15 Apr 2023 22:35:54 GMT
- Title: Metropolis Monte Carlo sampling: convergence, localization transition and optimality
- Authors: Alexei D. Chepelianskii, Satya N. Majumdar, Hendrik Schawe and
Emmanuel Trizac
- Abstract summary: We show that deviations from the target steady-state distribution can feature a localization transition.
We argue that the relaxation before and after the localization transition is limited by diffusion and by rejection rates, respectively.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Among random sampling methods, Markov Chain Monte Carlo algorithms are
foremost. Using a combination of analytical and numerical approaches, we study
their convergence properties towards the steady state, within a random walk
Metropolis scheme. Analysing the relaxation properties of some model algorithms
sufficiently simple to enable analytic progress, we show that the deviations
from the target steady-state distribution can feature a localization transition
as a function of the characteristic length of the attempted jumps defining the
random walk. While the iteration of the Monte Carlo algorithm converges to
equilibrium for all choices of jump parameters, the localization transition
changes drastically the asymptotic shape of the difference between the
probability distribution reached after a finite number of steps of the
algorithm and the target equilibrium distribution. We argue that the relaxation
before and after the localization transition is limited by diffusion and by rejection rates, respectively.
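To make the setup above concrete, here is a minimal random walk Metropolis sketch on an illustrative one-dimensional Gaussian target (an assumption for illustration; the paper analyzes its own model targets) with uniform attempted jumps of adjustable length. It reports the acceptance rate as a function of the jump length: small jumps give a diffusion-limited chain, while large jumps are increasingly rejected, which is the rejection-limited regime discussed in the abstract.

```python
import numpy as np

def target_density(x):
    # Unnormalized example target: a standard Gaussian (illustrative assumption).
    return np.exp(-0.5 * x**2)

def random_walk_metropolis(n_steps, delta, x0=0.0, seed=0):
    # Random walk Metropolis with symmetric uniform jumps of half-width `delta`.
    rng = np.random.default_rng(seed)
    x = x0
    samples = np.empty(n_steps)
    accepted = 0
    for t in range(n_steps):
        proposal = x + rng.uniform(-delta, delta)   # attempted jump
        # Metropolis rule: accept with probability min(1, pi(proposal)/pi(x)).
        if rng.random() < target_density(proposal) / target_density(x):
            x = proposal
            accepted += 1
        samples[t] = x
    return samples, accepted / n_steps

if __name__ == "__main__":
    # Acceptance drops as the jump length grows: the rejection-limited regime.
    for delta in (0.1, 1.0, 10.0):
        _, acc = random_walk_metropolis(50_000, delta)
        print(f"jump length {delta:5.1f}: acceptance rate {acc:.3f}")
```

Comparing the empirical histogram of `samples` with the target after a fixed number of steps would show how the shape of the residual deviation depends on the jump length, which is the quantity the paper analyzes.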
Related papers
- Convergence rate of random scan Coordinate Ascent Variational Inference under log-concavity [0.18416014644193066]
The Coordinate Ascent Variational Inference scheme is a popular algorithm used to compute the mean-field approximation of a probability distribution of interest.
We analyze its random scan version, under log-concavity assumptions on the target density.
arXiv Detail & Related papers (2024-06-11T14:23:01Z)
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z)
- Robust scalable initialization for Bayesian variational inference with multi-modal Laplace approximations [0.0]
Variational mixtures with full-covariance structures suffer from quadratic growth in the number of variational parameters as the number of model parameters increases.
We propose a method for constructing an initial Gaussian model approximation that can be used to warm-start variational inference.
arXiv Detail & Related papers (2023-07-12T19:30:04Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences; a minimal generic AIS sketch, for orientation, appears after this list.
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- On the Convergence of Stochastic Extragradient for Bilinear Games with Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the Stochastic ExtraGradient (SEG) method with constant step size, and describe variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z)
- Determination of the critical exponents in dissipative phase transitions: Coherent anomaly approach [51.819912248960804]
We propose a generalization of the coherent anomaly method to extract the critical exponents of a phase transition occurring in the steady-state of an open quantum many-body system.
arXiv Detail & Related papers (2021-03-12T13:16:18Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
- A Distributional Analysis of Sampling-Based Reinforcement Learning Algorithms [67.67377846416106]
We present a distributional approach to the theoretical analysis of reinforcement learning algorithms with constant step sizes.
We show that value-based methods such as TD($\lambda$) and $Q$-Learning have update rules which are contractive in the space of distributions of functions.
arXiv Detail & Related papers (2020-03-27T05:13:29Z)
- Stochastic Normalizing Flows [2.323220706791067]
We show that normalizing flows can be used to learn the transformation from a simple prior distribution to the target distribution.
We derive an efficient training procedure by which both the sampler's and the flow's parameters can be optimized end-to-end.
We illustrate the representational power, sampling efficiency and correctness of SNFs on several benchmarks including applications to molecular sampling systems in equilibrium.
arXiv Detail & Related papers (2020-02-16T23:29:32Z)
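For orientation on the Adaptive Annealed Importance Sampling entry above, the following is a minimal sketch of plain, fixed-schedule annealed importance sampling in one dimension. It does not implement that paper's constant-rate schedule or $\alpha$-divergence machinery; the prior, target, schedule, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_prior(x):
    # Unnormalized standard normal starting distribution (illustrative assumption).
    return -0.5 * x**2

def log_target(x):
    # Unnormalized bimodal target (illustrative assumption).
    return np.logaddexp(-0.5 * (x - 3.0)**2, -0.5 * (x + 3.0)**2)

def ais(n_particles=2000, n_temps=200, step=0.5):
    betas = np.linspace(0.0, 1.0, n_temps)   # fixed linear annealing schedule
    x = rng.normal(size=n_particles)         # exact draws from the prior
    log_w = np.zeros(n_particles)            # log importance weights
    for b_prev, b in zip(betas[:-1], betas[1:]):
        # Weight update for the bridge between successive temperatures.
        log_w += (b - b_prev) * (log_target(x) - log_prior(x))
        # One random-walk Metropolis move targeting the intermediate density at beta = b.
        log_pi = lambda y, b=b: (1.0 - b) * log_prior(y) + b * log_target(y)
        prop = x + step * rng.normal(size=n_particles)
        accept = np.log(rng.random(n_particles)) < log_pi(prop) - log_pi(x)
        x = np.where(accept, prop, x)
    return x, log_w

samples, log_w = ais()
# Estimate of the log ratio of normalizing constants (target vs. prior), stabilized via log-sum-exp.
m = log_w.max()
print("log-normalizing-constant estimate:", m + np.log(np.mean(np.exp(log_w - m))))
```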
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.