Sampling on Metric Graphs
- URL: http://arxiv.org/abs/2512.02175v1
- Date: Mon, 01 Dec 2025 20:11:05 GMT
- Title: Sampling on Metric Graphs
- Authors: Rajat Vadiraj Dwaraknath, Lexing Ying
- Abstract summary: We introduce the first algorithm for simulating Brownian motions on metric graphs. We also obtain the first algorithm for sampling on metric graphs. Our algorithm is able to run stable simulations with timesteps significantly larger than the stable limit of the finite volume method used in DuMuX.
- Score: 15.544139471623623
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Metric graphs are structures obtained by associating edges in a standard graph with segments of the real line and gluing these segments at the vertices of the graph. The resulting structure has a natural metric that allows for the study of differential operators and stochastic processes on the graph. Brownian motions in these domains have been extensively studied theoretically using their generators. However, less work has been done on practical algorithms for simulating these processes. We introduce the first algorithm for simulating Brownian motions on metric graphs through a timestep splitting Euler-Maruyama-based discretization of their corresponding stochastic differential equation. By applying this scheme to Langevin diffusions on metric graphs, we also obtain the first algorithm for sampling on metric graphs. We provide theoretical guarantees on the number of timestep splittings required for the algorithm to converge to the underlying stochastic process. We also show that the exit probabilities of the simulated particle converge to the vertex-edge jump probabilities of the underlying stochastic differential equation as the timestep goes to zero. Finally, since this method is highly parallelizable, we provide fast, memory-aware implementations of our algorithm in the form of custom CUDA kernels that are up to ~8000x faster than a GPU implementation using PyTorch on simple star metric graphs. Beyond simple star graphs, we benchmark our algorithm on a real cortical vascular network extracted from a DuMuX tissue-perfusion model for tracer transport. Our algorithm is able to run stable simulations with timesteps significantly larger than the stable limit of the finite volume method used in DuMuX while also achieving speedups of up to ~1500x.
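The abstract's core idea, an Euler-Maruyama discretization of Brownian motion on a metric graph with vertex-edge jumps, can be illustrated with a minimal sketch on a star graph. This is an assumption-laden toy (equal-weight Walsh Brownian motion, reflection at the outer endpoints, no timestep splitting), not the paper's actual scheme or its guarantees:

```python
import numpy as np

def simulate_star_bm(n_edges=3, edge_len=1.0, dt=1e-3, n_steps=10_000, seed=0):
    """Toy Euler-Maruyama simulation of Brownian motion on a star metric graph.

    State: (edge index, distance from the central vertex). When a step
    crosses the central vertex, the particle jumps to a uniformly chosen
    edge and carries the overshoot with it; at the outer endpoint it
    reflects. Uniform jump probabilities are a simplifying assumption.
    """
    rng = np.random.default_rng(seed)
    edge, x = 0, 0.5 * edge_len           # start mid-edge on edge 0
    for _ in range(n_steps):
        x += np.sqrt(dt) * rng.standard_normal()  # Euler-Maruyama step
        if x < 0.0:                       # crossed the central vertex
            edge = int(rng.integers(n_edges))  # uniform vertex-edge jump
            x = -x                        # overshoot continues on the new edge
        elif x > edge_len:                # reflect at the outer endpoint
            x = 2.0 * edge_len - x
    return edge, x
```

Because each particle evolves independently, many such trajectories can be advanced in parallel, which is the property the paper's CUDA implementation exploits.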
Related papers
- SWING: Unlocking Implicit Graph Representations for Graph Random Features [57.956136773668476]
We propose SWING: Space Walks for Implicit Network Graphs, a new class of algorithms for computations involving Graph Random Features on graphs. We provide a detailed analysis of SWING and complement it with thorough experiments on different classes of i-graphs.
arXiv Detail & Related papers (2026-02-13T08:12:38Z) - Quantum Speedup for Sampling Random Spanning Trees [2.356162747014486]
We present a quantum algorithm for sampling random spanning trees from a weighted graph in $\widetilde{O}(\sqrt{mn})$ time. Our algorithm has sublinear runtime for dense graphs and achieves a quantum speedup over the best-known classical algorithm.
arXiv Detail & Related papers (2025-04-22T05:45:04Z) - Optimal Time Complexity Algorithms for Computing General Random Walk Graph Kernels on Sparse Graphs [14.049529046098607]
We present the first linear time complexity randomized algorithms for unbiased approximation of general random walk kernels (RWKs) for sparse graphs.
Our method is up to $\mathbf{27}\times$ faster than its counterparts for efficient computation on large graphs.
arXiv Detail & Related papers (2024-10-14T10:48:46Z) - Sparse Training of Discrete Diffusion Models for Graph Generation [45.103518022696996]
We introduce SparseDiff, a novel diffusion model based on the observation that almost all large graphs are sparse.
By selecting a subset of edges, SparseDiff effectively leverages sparse graph representations both during the noising process and within the denoising network.
Our model demonstrates state-of-the-art performance across multiple metrics on both small and large datasets.
arXiv Detail & Related papers (2023-11-03T16:50:26Z) - Scalable Bayesian Structure Learning for Gaussian Graphical Models Using Marginal Pseudo-likelihood [2.312692134587988]
We develop continuous-time (birth-death) and discrete-time (reversible jump) Markov chain Monte Carlo (MCMC) algorithms that efficiently explore the posterior over graph space. The algorithms scale to large graph spaces, enabling parallel exploration for graphs with over 1,000 nodes.
arXiv Detail & Related papers (2023-06-30T20:37:40Z) - Efficient Graph Field Integrators Meet Point Clouds [59.27295475120132]
We present two new classes of algorithms for efficient field integration on graphs encoding point clouds.
The first class, SeparatorFactorization(SF), leverages the bounded genus of point cloud mesh graphs, while the second class, RFDiffusion(RFD), uses popular $\epsilon$-nearest-neighbor graph representations for point clouds.
arXiv Detail & Related papers (2023-02-02T08:33:36Z) - Algorithms for perturbative analysis and simulation of quantum dynamics [0.0]
We develop general purpose algorithms for computing and utilizing both the Dyson series and Magnus expansion.
We demonstrate how to use these tools to approximate fidelity in a region of model parameter space.
We show how the pre-computation step can be phrased as a multivariable expansion problem with fewer terms than in the original method.
arXiv Detail & Related papers (2022-10-20T21:07:47Z) - Unfolding Projection-free SDP Relaxation of Binary Graph Classifier via GDPA Linearization [59.87663954467815]
Algorithm unfolding creates an interpretable and parsimonious neural network architecture by implementing each iteration of a model-based algorithm as a neural layer.
In this paper, leveraging a recent linear algebraic theorem called Gershgorin disc perfect alignment (GDPA), we unroll a projection-free algorithm for semi-definite programming relaxation (SDR) of a binary graph classifier.
Experimental results show that our unrolled network outperformed pure model-based graph classifiers, and achieved comparable performance to pure data-driven networks but using far fewer parameters.
arXiv Detail & Related papers (2021-09-10T07:01:15Z) - Single-Timescale Stochastic Nonconvex-Concave Optimization for Smooth Nonlinear TD Learning [145.54544979467872]
We propose two single-timescale single-loop algorithms that require only one data point each step.
Our results are expressed in a form of simultaneous primal and dual side convergence.
arXiv Detail & Related papers (2020-08-23T20:36:49Z) - Online Dense Subgraph Discovery via Blurred-Graph Feedback [87.9850024070244]
We introduce a novel learning problem for dense subgraph discovery.
We first propose an edge-time algorithm that obtains a nearly-optimal solution with high probability.
We then design a more scalable algorithm with a theoretical guarantee.
arXiv Detail & Related papers (2020-06-24T11:37:33Z) - A Study of Performance of Optimal Transport [16.847501106437534]
We show that network simplex and augmenting path based algorithms can consistently outperform numerical matrix-scaling based methods.
We present a new algorithm that improves upon the classical Kuhn-Munkres algorithm.
arXiv Detail & Related papers (2020-05-03T20:37:05Z) - Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximative framework for such non-trivial ERGs that results in dyadic-independent (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.