Graph Random Features for Scalable Gaussian Processes
- URL: http://arxiv.org/abs/2509.03691v2
- Date: Thu, 25 Sep 2025 22:24:05 GMT
- Title: Graph Random Features for Scalable Gaussian Processes
- Authors: Matthew Zhang, Jihao Andreas Lin, Krzysztof Choromanski, Adrian Weller, Richard E. Turner, Isaac Reid
- Abstract summary: We study the application of graph random features (GRFs) to scalable Gaussian processes on discrete input spaces. We prove that (under mild assumptions) Bayesian inference with GRFs enjoys $O(N^{3/2})$ time complexity with respect to the number of nodes $N$, compared to $O(N^3)$ for exact kernels.
- Score: 52.89901965157282
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the application of graph random features (GRFs) - a recently introduced stochastic estimator of graph node kernels - to scalable Gaussian processes on discrete input spaces. We prove that (under mild assumptions) Bayesian inference with GRFs enjoys $O(N^{3/2})$ time complexity with respect to the number of nodes $N$, compared to $O(N^3)$ for exact kernels. Substantial wall-clock speedups and memory savings unlock Bayesian optimisation on graphs with over $10^6$ nodes on a single computer chip, whilst preserving competitive performance.
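The headline $O(N^{3/2})$ cost comes from never materialising the dense kernel: GRFs attach a sparse feature vector to each node so that the Gram matrix approximately factorises as $K \approx \Phi\Phi^{\top}$, and the GP solve can then be rearranged with the Woodbury identity. Below is a minimal sketch of that rearrangement only, not the authors' implementation; the function name `woodbury_solve`, the Gaussian toy features standing in for GRFs, and all parameter values are illustrative assumptions.

```python
import numpy as np

def woodbury_solve(Phi, y, noise_var):
    """Solve (Phi Phi^T + noise_var * I) alpha = y via the Woodbury identity:
    (Phi Phi^T + s I)^{-1} y = (y - Phi (s I_m + Phi^T Phi)^{-1} Phi^T y) / s.
    Cost is O(N m^2) for N nodes and m features, versus O(N^3) for a dense kernel."""
    m = Phi.shape[1]
    inner = noise_var * np.eye(m) + Phi.T @ Phi          # small m x m system
    return (y - Phi @ np.linalg.solve(inner, Phi.T @ y)) / noise_var

# Toy usage: Gaussian random features stand in for GRFs (illustrative only).
rng = np.random.default_rng(0)
N, m = 2000, 64                                          # m << N keeps the linear algebra cheap
Phi = rng.normal(size=(N, m)) / np.sqrt(m)
y = rng.normal(size=N)
alpha = woodbury_solve(Phi, y, noise_var=0.1)
posterior_mean = Phi @ (Phi.T @ alpha)                   # GP posterior mean at the training nodes
print(posterior_mean.shape)                              # (2000,)
```

The matrix determinant lemma gives the log marginal likelihood at the same $O(Nm^2)$ cost, which is the kind of saving that makes Bayesian optimisation on graphs with over $10^6$ nodes practical on a single chip.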
Related papers
- Tensor Sketch: Fast and Scalable Polynomial Kernel Approximation [19.363672064425504]
Approximation of non-linear kernels using random feature maps has become a powerful technique for scaling kernel methods to large datasets. We provide theoretical guarantees on the approximation error, ensuring the fidelity of the resulting kernel function estimates.
arXiv Detail & Related papers (2025-05-13T00:47:17Z) - Optimal Time Complexity Algorithms for Computing General Random Walk Graph Kernels on Sparse Graphs [14.049529046098607]
We present the first linear time complexity randomized algorithms for unbiased approximation of general random walk kernels (RWKs) for sparse graphs.
Our method is up to $\mathbf{27}\times$ faster than its counterparts for efficient computation on large graphs.
arXiv Detail & Related papers (2024-10-14T10:48:46Z) - General Graph Random Features [42.75616308187867]
We propose a novel random walk-based algorithm for unbiased estimation of arbitrary functions of a weighted adjacency matrix.
Our algorithm enjoys subquadratic time complexity with respect to the number of nodes, overcoming the notoriously prohibitive cubic scaling of exact graph kernel evaluation.
arXiv Detail & Related papers (2023-10-07T15:47:31Z) - On Convergence of Incremental Gradient for Non-Convex Smooth Functions [63.51187646914962]
In machine learning and network optimization, algorithms like shuffle SGD are popular due to minimizing the number of cache misses and good cache locality.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
arXiv Detail & Related papers (2023-05-30T17:47:27Z) - Taming graph kernels with random features [17.482280753348288]
We introduce the mechanism of graph random features (GRFs); a minimal walk-based sketch of this idea appears after this list.
GRFs can be used to construct unbiased randomized estimators of several important kernels defined on graphs' nodes.
arXiv Detail & Related papers (2023-04-29T03:04:49Z) - On the Unlikelihood of D-Separation [69.62839677485087]
We provide analytic evidence that on large graphs, d-separation is a rare phenomenon, even when guaranteed to exist.
For the PC Algorithm, while it is known that its worst-case guarantees fail on non-sparse graphs, we show that the same is true for the average case.
For UniformSGS, while it is known that the running time is exponential for existing edges, we show that in the average case this is the expected running time for most non-existent edges as well.
arXiv Detail & Related papers (2023-03-10T00:11:18Z) - Graph Neural Network Bandits [89.31889875864599]
We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains, and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
arXiv Detail & Related papers (2022-07-13T18:12:36Z) - Gaussian Process Bandit Optimization with Few Batches [49.896920704012395]
We introduce a batch algorithm inspired by finite-arm bandit algorithms.
We show that it achieves the cumulative regret upper bound $O^\ast(\sqrt{T\gamma_T})$ using $O(\log\log T)$ batches within time horizon $T$.
In addition, we propose a modified version of our algorithm, and characterize how the regret is impacted by the number of batches.
arXiv Detail & Related papers (2021-10-15T00:54:04Z) - Generating Target Graph Couplings for QAOA from Native Quantum Hardware Couplings [3.2622301272834524]
We present methods for constructing any target coupling graph using limited global controls in an Ising-like quantum spin system.
Our approach is motivated by implementing the quantum approximate optimization algorithm (QAOA) on trapped ion quantum hardware.
Noisy simulations of Max-Cut QAOA show that our implementation is less susceptible to noise than the standard gate-based compilation.
arXiv Detail & Related papers (2020-11-16T18:43:09Z) - Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
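Several entries above (general graph random features, random walk kernels, taming graph kernels) share one mechanism: a short, randomly terminated walk yields an unbiased Monte Carlo estimate of a Neumann-series kernel such as $(\mathbf{I} - \gamma\mathbf{A})^{-1} = \sum_k \gamma^k \mathbf{A}^k$. The sketch below illustrates that idea only; the halting probability, walk counts, and the name `walk_kernel_row` are assumptions for illustration, not any of the papers' exact algorithms.

```python
import numpy as np

def walk_kernel_row(adj, i, gamma=0.1, p_halt=0.3, n_walks=200, seed=0):
    """Monte Carlo estimate of row i of K = (I - gamma * A)^{-1} for an unweighted
    graph given as an adjacency list. Each surviving step multiplies an importance
    weight that corrects for the uniform neighbour choice and the halting
    probability, so the expectation recovers sum_k gamma^k (A^k)_{ij}."""
    rng = np.random.default_rng(seed)
    row = np.zeros(len(adj))
    for _ in range(n_walks):
        node, weight = i, 1.0
        row[node] += weight                              # k = 0 (identity) term
        while adj[node] and rng.random() < 1.0 - p_halt:
            deg = len(adj[node])
            node = adj[node][rng.integers(deg)]
            weight *= gamma * deg / (1.0 - p_halt)
            row[node] += weight
    return row / n_walks

# Tiny usage example on a 4-cycle, compared against the exact inverse.
adj = [[1, 3], [0, 2], [1, 3], [0, 2]]
A = np.zeros((4, 4))
for u, nbrs in enumerate(adj):
    A[u, nbrs] = 1.0
exact = np.linalg.inv(np.eye(4) - 0.1 * A)[0]
approx = walk_kernel_row(adj, 0, n_walks=5000)
print(np.round(exact, 3), np.round(approx, 3))
```

Because each walk touches only a handful of nodes, the resulting estimates are sparse, which is what lets the feature-based constructions above avoid the cubic cost of exact graph kernel evaluation.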
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.