Path convergence of Markov chains on large graphs
- URL: http://arxiv.org/abs/2308.09214v2
- Date: Sun, 15 Oct 2023 16:35:08 GMT
- Title: Path convergence of Markov chains on large graphs
- Authors: Siva Athreya, Soumik Pal, Raghav Somani, Raghavendra Tripathi
- Abstract summary: We show that as the size of a graph goes to infinity, the random trajectories of the processes converge to deterministic curves on the space of measure-valued graphons.
A novel feature of this approach is that it provides a precise exponential convergence rate for the Metropolis chain in a certain limiting regime.
- Score: 3.693375843298262
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider two classes of natural stochastic processes on finite unlabeled
graphs. These are Euclidean stochastic optimization algorithms on the adjacency
matrix of weighted graphs and a modified version of the Metropolis MCMC
algorithm on stochastic block models over unweighted graphs. In both cases we
show that, as the size of the graph goes to infinity, the random trajectories
of the stochastic processes converge to deterministic curves on the space of
measure-valued graphons. Measure-valued graphons, introduced by Lovász and
Szegedy in \cite{lovasz2010decorated}, are a refinement of the concept of
graphons that can distinguish between two infinite exchangeable arrays that
give rise to the same graphon limit. We introduce new metrics on this space
which provide us with a natural notion of convergence for our limit theorems.
This notion is equivalent to the convergence of infinite exchangeable arrays.
Under suitable assumptions and a specified time-scaling, the Metropolis chain
admits a diffusion limit as the number of vertices goes to infinity. We then
demonstrate that, in an appropriately formulated zero-noise limit, the
stochastic process of adjacency matrices of this diffusion converges to a
deterministic gradient flow curve on the space of graphons introduced
in \cite{Oh2023}. A novel feature of this approach is that it provides a precise
exponential convergence rate for the Metropolis chain in a certain limiting
regime. The connection between a natural Metropolis chain commonly used in
exponential random graph models and gradient flows on graphons, to the best of
our knowledge, is new in the literature as well.
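To fix ideas, the sketch below gives a minimal Python illustration of a Metropolis edge-flip chain of the kind used for exponential random graph models, started from a stochastic block model. It only illustrates the type of finite-size process discussed above: the energy function `ergm_energy`, its weights, the temperature, and all function names are assumptions made for the example, not the authors' modified chain or its time-scaling.

```python
import numpy as np

def ergm_energy(A, beta_edge=-1.0, beta_tri=0.5):
    """Toy ERGM-style energy: a weighted count of edges and triangles
    (the functional form and the weights are hypothetical choices)."""
    n_edges = A.sum() / 2.0
    n_triangles = np.trace(A @ A @ A) / 6.0
    return beta_edge * n_edges + beta_tri * n_triangles

def metropolis_step(A, rng, temperature=1.0):
    """Propose flipping one uniformly random vertex pair and accept or
    reject with the standard Metropolis ratio."""
    n = A.shape[0]
    i, j = rng.choice(n, size=2, replace=False)
    B = A.copy()
    B[i, j] = B[j, i] = 1 - B[i, j]                 # flip the edge {i, j}
    delta = ergm_energy(B) - ergm_energy(A)
    return B if rng.random() < np.exp(-delta / temperature) else A

def sample_sbm(n, p_in=0.6, p_out=0.1, rng=None):
    """Initial state: adjacency matrix of a two-block stochastic block model."""
    rng = rng or np.random.default_rng(0)
    z = rng.integers(2, size=n)
    P = np.where(z[:, None] == z[None, :], p_in, p_out)
    A = np.triu((rng.random((n, n)) < P).astype(int), 1)
    return A + A.T

rng = np.random.default_rng(1)
A = sample_sbm(n=80, rng=rng)
for _ in range(5_000):                              # run the chain for a while
    A = metropolis_step(A, rng)
print("edge density:", A.sum() / (A.shape[0] * (A.shape[0] - 1)))
```

Viewed through its empirical graphon, the trajectory of such a chain is the kind of random object that the abstract describes as concentrating on a deterministic curve, after a suitable rescaling of time, as the number of vertices grows.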
Related papers
- Graph Generation via Spectral Diffusion [51.60814773299899]
We present GRASP, a novel graph generative model based on 1) the spectral decomposition of the graph Laplacian matrix and 2) a diffusion process.
Specifically, we propose to use a denoising model to sample eigenvectors and eigenvalues from which we can reconstruct the graph Laplacian and adjacency matrix.
Our permutation invariant model can also handle node features by concatenating them to the eigenvectors of each node.
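As a rough, hypothetical illustration of the reconstruction step mentioned above (the denoising model itself is omitted), one can rebuild a graph Laplacian, and from it an adjacency matrix, out of eigenvalues and eigenvectors; the thresholding rule and the names below are assumptions for the sketch, not the GRASP decoder.

```python
import numpy as np

def adjacency_from_spectrum(eigvecs, eigvals, threshold=0.5):
    """Rebuild an unweighted adjacency matrix from (possibly sampled) spectral
    components of a graph Laplacian L = D - A. Purely illustrative: the
    thresholding rule is a hypothetical choice."""
    L = eigvecs @ np.diag(eigvals) @ eigvecs.T   # reconstruct the Laplacian
    A = -L.copy()                                # off-diagonal of L is -A
    np.fill_diagonal(A, 0.0)
    A = (A + A.T) / 2.0                          # symmetrize numerical noise
    return (A > threshold).astype(int)           # binarize edge weights

# Round-trip check on a small graph: decompose its Laplacian, then reconstruct.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A
eigvals, eigvecs = np.linalg.eigh(L)
print(adjacency_from_spectrum(eigvecs, eigvals))
```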
arXiv Detail & Related papers (2024-02-29T09:26:46Z)
- Generalized Graphon Process: Convergence of Graph Frequencies in Stretched Cut Distance [30.279435887366578]
Sparse graph sequences converge to the trivial graphon under the conventional definition of cut distance.
We utilize the concepts of generalized graphons and stretched cut distance to describe the convergence of sparse graph sequences.
Our results indicate the possibility of transfer learning between sparse graphs.
arXiv Detail & Related papers (2023-09-11T06:34:46Z)
- Curve Your Attention: Mixed-Curvature Transformers for Graph Representation Learning [77.1421343649344]
We propose a generalization of Transformers towards operating entirely on the product of constant curvature spaces.
We also provide a kernelized approach to non-Euclidean attention, which enables our model to run with time and memory cost linear in the number of nodes and edges.
arXiv Detail & Related papers (2023-09-08T02:44:37Z)
- Graphon Pooling for Reducing Dimensionality of Signals and Convolutional Operators on Graphs [131.53471236405628]
We present three methods that exploit the induced graphon representation of graphs and graph signals on partitions of [0, 1]^2 in the graphon space.
We prove that those low dimensional representations constitute a convergent sequence of graphs and graph signals.
We observe that graphon pooling performs significantly better than other approaches proposed in the literature when dimensionality reduction ratios between layers are large.
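A bare-bones version of the underlying coarsening idea, averaging an adjacency matrix over a regular partition of [0, 1]^2 to obtain a smaller step-function (graphon-like) representation, might look as follows; the uniform partition and the function name are assumptions for the sketch, and the paper's pooling operators for signals and convolutional filters are not reproduced here.

```python
import numpy as np

def graphon_pool(A, k):
    """Average the adjacency matrix A over a regular k-by-k partition of
    [0, 1]^2, yielding a step-function (graphon-style) representation of the
    coarsened graph. Assumes k divides n for simplicity."""
    n = A.shape[0]
    assert n % k == 0, "sketch assumes k divides n"
    b = n // k
    # Reshape into k x k blocks of size b x b and average within each block.
    return A.reshape(k, b, k, b).mean(axis=(1, 3))

rng = np.random.default_rng(0)
A = np.triu(rng.integers(0, 2, size=(12, 12)), 1)
A = A + A.T                                   # random symmetric 0/1 matrix
print(graphon_pool(A.astype(float), k=3))     # 3 x 3 block-averaged representation
```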
arXiv Detail & Related papers (2022-12-15T22:11:34Z)
- Stochastic optimization on matrices and a graphon McKean-Vlasov limit [26.906770707395832]
We consider gradient descents on the space of large symmetric matrices of suitable functions that are invariant under permuting the rows and columns using the same permutation.
We establish deterministic limits of these random curves as the dimensions of the matrices go to infinity while the entries remain bounded.
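For intuition, Euclidean gradient descent on a permutation-invariant function of a symmetric matrix can be sketched as below; the objective (a penalty on the mean edge weight plus a triangle-density term), the step size, and the names are illustrative assumptions rather than the paper's setup, which studies such (stochastic) descents and their deterministic limits as the matrix dimension grows.

```python
import numpy as np

def objective(W):
    """Hypothetical permutation-invariant objective on a symmetric matrix W:
    a quadratic penalty on the mean edge weight plus the triangle density.
    Invariant under W -> P W P^T for any permutation matrix P."""
    n = W.shape[0]
    return (W.mean() - 0.3) ** 2 + np.trace(W @ W @ W) / n**3

def grad(W):
    """Analytic Euclidean gradient of `objective` (permutation-equivariant)."""
    n = W.shape[0]
    g_mean = 2.0 * (W.mean() - 0.3) / n**2 * np.ones_like(W)
    g_tri = 3.0 * (W @ W).T / n**3
    return g_mean + g_tri

n, lr = 30, 50.0                              # step size: hand-tuned assumption
W = np.full((n, n), 0.5)                      # start from the constant graphon 1/2
for _ in range(200):
    W = np.clip(W - lr * grad(W), 0.0, 1.0)   # projected Euclidean gradient step
print("mean edge weight after descent:", round(W.mean(), 3))
```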
arXiv Detail & Related papers (2022-10-02T04:54:49Z)
- High-dimensional limit theorems for SGD: Effective dynamics and critical scaling [6.950316788263433]
We prove limit theorems for the trajectories of summary statistics of stochastic gradient descent (SGD).
We show a critical scaling regime for the step-size, below which the effective ballistic dynamics matches gradient flow for the population loss.
Around the fixed points of this effective dynamics, the corresponding diffusive limits can be quite complex and even degenerate.
arXiv Detail & Related papers (2022-06-08T17:42:18Z)
- Capturing Graphs with Hypo-Elliptic Diffusions [7.704064306361941]
We show that the distribution of random walks evolves according to a diffusion equation defined using the graph Laplacian.
This results in a novel tensor-valued graph operator, which we call the hypo-elliptic graph Laplacian.
We show that this method competes with graph transformers on datasets requiring long-range reasoning but scales only linearly in the number of edges.
arXiv Detail & Related papers (2022-05-27T16:47:34Z)
- Score-based Generative Modeling of Graphs via the System of Stochastic Differential Equations [57.15855198512551]
We propose a novel score-based generative model for graphs with a continuous-time framework.
We show that our method is able to generate molecules that lie close to the training distribution yet do not violate the chemical valency rule.
arXiv Detail & Related papers (2022-02-05T08:21:04Z)
- Gradient flows on graphons: existence, convergence, continuity equations [27.562307342062354]
Wasserstein gradient flows on probability measures have found a host of applications in various optimization problems.
We show that the Euclidean gradient flow of a suitable function of the edge-weights converges to a novel continuum limit given by a curve on the space of graphons.
arXiv Detail & Related papers (2021-11-18T00:36:28Z)
- Multilayer Graph Clustering with Optimized Node Embedding [70.1053472751897]
Multilayer graph clustering aims at dividing the graph nodes into categories or communities.
We propose a clustering-friendly embedding of the layers of a given multilayer graph.
Experiments show that our method leads to a significant improvement.
arXiv Detail & Related papers (2021-03-30T17:36:40Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that results in dyadic independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.