Convergence of Dirichlet Forms for MCMC Optimal Scaling with Dependent
Target Distributions on Large Graphs
- URL: http://arxiv.org/abs/2210.17042v2
- Date: Thu, 22 Jun 2023 00:17:53 GMT
- Title: Convergence of Dirichlet Forms for MCMC Optimal Scaling with Dependent
Target Distributions on Large Graphs
- Authors: Ning Ning
- Abstract summary: Markov chain Monte Carlo (MCMC) algorithms have played a significant role in statistics, physics, machine learning, and other fields.
The random walk Metropolis (RWM) algorithm, the most classical MCMC algorithm, has had a great influence on the development and practice of science and engineering.
In this paper, we utilize the Mosco convergence of Dirichlet forms in analyzing the RWM algorithm on large graphs.
- Score: 0.6599344783327054
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Markov chain Monte Carlo (MCMC) algorithms have played a significant role in
statistics, physics, machine learning, and other fields, and they are the only known
general and efficient approach for some high-dimensional problems. The random
walk Metropolis (RWM) algorithm, the most classical MCMC algorithm, has had a
great influence on the development and practice of science and engineering. The
behavior of the RWM algorithm in high-dimensional problems is typically
investigated through a weak convergence result of diffusion processes. In this
paper, we utilize the Mosco convergence of Dirichlet forms in analyzing the RWM
algorithm on large graphs, whose target distribution is a Gibbs measure, a class
that includes any probability measure satisfying a Markov property. The abstract and
powerful theory of Dirichlet forms allows us to work directly and naturally on
the infinite-dimensional space, and our notion of Mosco convergence allows
Dirichlet forms associated with the RWM chains to lie on changing Hilbert
spaces. Through the optimal scaling problem, we demonstrate the impressive
strengths of the Dirichlet form approach over the standard diffusion approach.
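To make the object of study concrete, the following Python sketch runs a random walk Metropolis chain whose target is a Gibbs measure proportional to exp(-H(x)) on the vertices of a graph, with the proposal variance scaled as ell^2 / d in dimension d, the regime studied in optimal scaling. The energy H, the path-graph example, and the choice ell = 2.38 are illustrative assumptions, not constructions taken from the paper.

```python
# Minimal sketch (illustrative, not the paper's construction): random walk
# Metropolis targeting a Gibbs measure pi(x) proportional to exp(-H(x)),
# with proposal variance ell^2 / d in dimension d.
import numpy as np

def rwm_gibbs(H, x0, n_steps, ell=2.38, rng=None):
    """Random walk Metropolis for a target density proportional to exp(-H(x))."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    d = x.size
    sigma = ell / np.sqrt(d)                      # proposal std dev scales as ell / sqrt(d)
    samples, accepts = [x.copy()], 0
    for _ in range(n_steps):
        y = x + sigma * rng.standard_normal(d)    # Gaussian random walk proposal
        if np.log(rng.uniform()) < H(x) - H(y):   # Metropolis accept/reject step
            x = y
            accepts += 1
        samples.append(x.copy())
    return np.array(samples), accepts / n_steps

# Illustrative Gibbs measure: a Gaussian Markov random field on a path graph,
# H(x) = 0.5 * sum_i x_i^2 + 0.5 * sum over edges (i,j) of (x_i - x_j)^2.
d = 50
H = lambda x: 0.5 * np.dot(x, x) + 0.5 * np.sum(np.diff(x) ** 2)
chain, acc_rate = rwm_gibbs(H, np.zeros(d), n_steps=10_000)
print(f"acceptance rate ~ {acc_rate:.3f}")
```

In the classical optimal scaling result for product-form targets, ell is tuned so that the average acceptance rate approaches roughly 0.234; dependent Gibbs-measure targets such as the one sketched above are exactly the setting where the paper argues for the Dirichlet form approach over the diffusion approach.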
Related papers
- Bellman Diffusion: Generative Modeling as Learning a Linear Operator in the Distribution Space [72.52365911990935]
We introduce Bellman Diffusion, a novel DGM framework that maintains linearity in MDPs through gradient and scalar field modeling.
Our results show that Bellman Diffusion achieves accurate field estimations and is a capable image generator, converging 1.5x faster than the traditional histogram-based baseline in distributional RL tasks.
arXiv Detail & Related papers (2024-10-02T17:53:23Z) - Diffusion Models as Stochastic Quantization in Lattice Field Theory [7.221319972004889]
We establish a direct connection between generative diffusion models (DMs) and stochastic quantization (SQ).
The DM is realized by approximating the reversal of a process dictated by the Langevin equation, generating samples from a prior distribution to effectively mimic the target distribution (a minimal sketch of forward Langevin sampling appears after this list).
We demonstrate that DMs can notably reduce autocorrelation times in the Markov chain, especially in the critical region where standard Markov Chain Monte-Carlo algorithms experience critical slowing down.
arXiv Detail & Related papers (2023-09-29T09:26:59Z) - Learning Graphical Factor Models with Riemannian Optimization [70.13748170371889]
This paper proposes a flexible algorithmic framework for graph learning under low-rank structural constraints.
The problem is expressed as penalized maximum likelihood estimation of an elliptical distribution.
We leverage geometries of positive definite matrices and positive semi-definite matrices of fixed rank that are well suited to elliptical models.
arXiv Detail & Related papers (2022-10-21T13:19:45Z) - The Dynamics of Riemannian Robbins-Monro Algorithms [101.29301565229265]
We propose a family of Riemannian algorithms generalizing and extending the seminal stochastic approximation framework of Robbins and Monro.
Compared to their Euclidean counterparts, Riemannian algorithms are much less understood due to the lack of a global linear structure on the manifold.
We provide a general template of almost sure convergence results that mirrors and extends the existing theory for Euclidean Robbins-Monro schemes.
arXiv Detail & Related papers (2022-06-14T12:30:11Z) - Lagrangian Manifold Monte Carlo on Monge Patches [5.586191108738564]
We show how Lagrangian Monte Carlo in this metric efficiently explores the target distributions.
Our metric only requires first-order information and has a fast inverse and determinant.
arXiv Detail & Related papers (2022-02-01T21:01:22Z) - Machine Learning and Variational Algorithms for Lattice Field Theory [1.198562319289569]
In lattice quantum field theory studies, parameters defining the lattice theory must be tuned toward criticality to access continuum physics.
We introduce an approach to "deform" Monte Carlo estimators based on contour deformations applied to the domain of the path integral.
We demonstrate that flow-based MCMC can mitigate critical slowing down and observifolds can exponentially reduce variance in proof-of-principle applications.
arXiv Detail & Related papers (2021-06-03T16:37:05Z) - What Are Bayesian Neural Network Posteriors Really Like? [63.950151520585024]
We show that Bayesian neural networks sampled with Hamiltonian Monte Carlo can achieve significant performance gains over standard training and deep ensembles.
We also show that deep ensemble predictive distributions are similarly close to HMC as standard SGLD, and closer than standard variational inference.
arXiv Detail & Related papers (2021-04-29T15:38:46Z) - Joint Network Topology Inference via Structured Fusion Regularization [70.30364652829164]
Joint network topology inference represents a canonical problem of learning multiple graph Laplacian matrices from heterogeneous graph signals.
We propose a general graph estimator based on a novel structured fusion regularization.
We show that the proposed graph estimator enjoys both high computational efficiency and rigorous theoretical guarantees.
arXiv Detail & Related papers (2021-03-05T04:42:32Z) - Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
arXiv Detail & Related papers (2021-02-04T02:21:08Z) - Accelerating MCMC algorithms through Bayesian Deep Networks [7.054093620465401]
Markov Chain Monte Carlo (MCMC) algorithms are commonly used for their versatility in sampling from complicated probability distributions.
As the dimension of the distribution gets larger, the computational costs for a satisfactory exploration of the sampling space become challenging.
We show an alternative way of performing adaptive MCMC, using the output of a Bayesian Neural Network as the initial proposal for the Markov chain.
arXiv Detail & Related papers (2020-11-29T04:29:00Z) - Demystifying Orthogonal Monte Carlo and Beyond [20.745014324028386]
Orthogonal Monte Carlo (OMC) is a very effective sampling algorithm imposing structural geometric conditions (orthogonality) on samples for variance reduction.
We shed new light on the theoretical principles behind OMC, applying theory of negatively dependent random variables to obtain several new concentration results.
We propose novel extensions of the method leveraging number theory techniques and particle algorithms, called Near-Orthogonal Monte Carlo (NOMC); an illustrative sketch of the orthogonality idea appears after this list.
arXiv Detail & Related papers (2020-05-27T18:44:38Z)
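The orthogonal coupling behind OMC can be illustrated with a minimal sketch: each block of d Gaussian samples shares a random orthogonal frame (columns of a QR factor) with norms redrawn from the chi distribution, which preserves the N(0, I_d) marginals while making the directions mutually orthogonal. This is a generic illustration of orthogonal sampling for variance reduction, not the NOMC construction; the test integrand, dimensions, and sample sizes below are arbitrary choices.

```python
# Minimal sketch (illustrative assumptions): i.i.d. vs. orthogonal Monte Carlo
# estimates of E[f(Z)] with Z ~ N(0, I_d).
import numpy as np

def iid_gaussian(n, d, rng):
    return rng.standard_normal((n, d))

def orthogonal_gaussian(n, d, rng):
    """Blocks of d samples with mutually orthogonal directions and Gaussian marginals."""
    blocks = []
    remaining = n
    while remaining > 0:
        G = rng.standard_normal((d, d))
        Q, _ = np.linalg.qr(G)                      # random orthogonal directions (columns)
        norms = np.sqrt(rng.chisquare(d, size=d))   # radial part of a d-dimensional Gaussian
        blocks.append((Q * norms).T)                # row j = norms[j] * Q[:, j]
        remaining -= d
    return np.vstack(blocks)[:n]

rng = np.random.default_rng(0)
d, n = 16, 1024
f = lambda X: np.cos(X).prod(axis=1)                # a smooth test integrand
est_iid = f(iid_gaussian(n, d, rng)).mean()
est_omc = f(orthogonal_gaussian(n, d, rng)).mean()
print(est_iid, est_omc)                             # orthogonal samples often reduce variance
```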
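For the stochastic quantization entry above, the forward process being reversed is overdamped Langevin dynamics, whose stationary distribution is proportional to exp(-S(phi)) for an action S. The sketch below samples a toy 1D phi^4 lattice action with an Euler-Maruyama discretization; the action, step size, and parameters are illustrative assumptions, not the paper's setup.

```python
# Minimal sketch (illustrative): stochastic quantization via overdamped Langevin
# dynamics, d(phi) = -grad S(phi) dt + sqrt(2) dW, stationary law ~ exp(-S(phi)).
import numpy as np

def grad_action(phi, m2=0.5, lam=0.1):
    """Gradient of a 1D lattice phi^4 action with periodic boundary conditions."""
    laplacian = np.roll(phi, 1) + np.roll(phi, -1) - 2.0 * phi
    return -laplacian + m2 * phi + lam * phi ** 3

def langevin_sample(n_sites=64, n_steps=20_000, eps=1e-3, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    phi = np.zeros(n_sites)
    for _ in range(n_steps):
        noise = rng.standard_normal(n_sites)
        phi = phi - eps * grad_action(phi) + np.sqrt(2.0 * eps) * noise  # Euler-Maruyama step
    return phi

phi = langevin_sample()
print("mean field value:", phi.mean())
```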