Graph Neural Network Bandits
- URL: http://arxiv.org/abs/2207.06456v1
- Date: Wed, 13 Jul 2022 18:12:36 GMT
- Title: Graph Neural Network Bandits
- Authors: Parnian Kassraie, Andreas Krause, Ilija Bogunovic
- Abstract summary: We consider the bandit optimization problem with the reward function defined over graph-structured data.
Key challenges in this setting are scaling to large domains and to graphs with many nodes.
We show that graph neural networks (GNNs) can be used to estimate the reward function.
- Score: 89.31889875864599
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider the bandit optimization problem with the reward function defined over graph-structured data. This problem has important applications in molecule design and drug discovery, where the reward is naturally invariant to graph permutations. The key challenges in this setting are scaling to large domains and to graphs with many nodes. We resolve these challenges by embedding the permutation invariance into our model. In particular, we show that graph neural networks (GNNs) can be used to estimate the reward function, assuming it resides in the Reproducing Kernel Hilbert Space of a permutation-invariant additive kernel. By establishing a novel connection between such kernels and the graph neural tangent kernel (GNTK), we introduce the first GNN confidence bound and use it to design a phased-elimination algorithm with sublinear regret. Our regret bound depends on the GNTK's maximum information gain, for which we also provide a bound. While the reward function depends on all $N$ node features, our guarantees are independent of the number of graph nodes $N$. Empirically, our approach exhibits competitive performance and scales well on graph-structured domains.
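To make the algorithmic idea concrete, below is a minimal, hedged sketch of phased elimination with confidence-bound pruning. It is not the paper's construction: the GNN estimator and GNTK-based confidence sets are replaced by a ridge-regression surrogate over permutation-invariant mean node features, and the candidate graphs, the confidence scaling `beta`, and the phase count are illustrative assumptions.

```python
# Sketch of phased elimination over graph arms (illustrative surrogate,
# not the paper's GNN/GNTK confidence bounds).
import numpy as np

rng = np.random.default_rng(0)

def embed(graph_nodes):
    # Mean-pooling node features is invariant to node permutations,
    # mirroring the invariance the paper builds into its model.
    return graph_nodes.mean(axis=0)

def fit_ridge(X, y, lam=1.0):
    # Closed-form ridge regression; returns the estimate and the
    # inverse regularized design matrix used for confidence widths.
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y), np.linalg.inv(A)

def confidence_interval(x, theta, A_inv, beta=2.0):
    mu = float(x @ theta)
    width = beta * float(np.sqrt(x @ A_inv @ x))  # elliptical width
    return mu - width, mu + width

# Toy domain: each "graph" is a set of node feature vectors.
candidates = [rng.normal(size=(int(rng.integers(3, 8)), 4)) for _ in range(50)]
theta_true = rng.normal(size=4)

def pull(g):  # noisy reward, permutation-invariant by construction
    return float(embed(g) @ theta_true) + 0.1 * rng.normal()

active = list(range(len(candidates)))
X_hist, y_hist = [], []
for phase in range(6):
    for i in active:                      # explore surviving arms uniformly
        X_hist.append(embed(candidates[i]))
        y_hist.append(pull(candidates[i]))
    theta, A_inv = fit_ridge(np.array(X_hist), np.array(y_hist))
    bounds = {i: confidence_interval(embed(candidates[i]), theta, A_inv)
              for i in active}
    # Eliminate arms whose upper bound falls below the best lower bound.
    best_lower = max(lo for lo, _ in bounds.values())
    active = [i for i in active if bounds[i][1] >= best_lower]
print("arms surviving all phases:", active)
```

In the paper the widths come from the GNN confidence bound rather than a linear model, and controlling their sum across phases is what yields the sublinear regret guarantee described in the abstract.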
Related papers
- Graph Neural Thompson Sampling [18.83205413952483]
We consider an online decision-making problem with a reward function defined over graph-structured data.
We then propose GNN-TS, a Graph Neural Network powered Thompson Sampling (TS) algorithm which employs a GNN approximator for estimating the mean reward function (see the toy sketch after this list).
arXiv Detail & Related papers (2024-06-15T16:45:27Z)
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs), which combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- Robust Graph Neural Network based on Graph Denoising [10.564653734218755]
Graph Neural Networks (GNNs) have emerged as a prominent alternative for learning problems on non-Euclidean datasets.
This work proposes a robust implementation of GNNs that explicitly accounts for the presence of perturbations in the observed topology.
arXiv Detail & Related papers (2023-12-11T17:43:57Z)
- NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN is the first attempt to learn to adapt to the optimal graph kernel in a unified manner.
Experiments on established benchmark datasets demonstrate the strong performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Graph Neural Networks with Local Graph Parameters [1.8600631687568656]
Local graph parameters can be added to any Graph Neural Network (GNN) architecture.
Our results connect GNNs with deep results in finite model theory and finite variable logics.
arXiv Detail & Related papers (2021-06-12T07:43:51Z)
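As referenced from the Graph Neural Thompson Sampling entry above, here is a toy Thompson Sampling loop over graph arms. It substitutes a Bayesian linear model on mean node features for GNN-TS's trained GNN approximator; the priors, noise scale, and candidate set are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of Thompson Sampling over graph-structured arms.
import numpy as np

rng = np.random.default_rng(1)
d, lam, sigma = 4, 1.0, 0.1

# Toy "graph" arms: each arm is a set of node feature vectors.
candidates = [rng.normal(size=(int(rng.integers(3, 8)), d)) for _ in range(30)]
theta_true = rng.normal(size=d)

def embed(g):
    # Mean-pooling is permutation-invariant, echoing (but not matching)
    # the GNN feature maps a GNN-TS-style agent would learn.
    return g.mean(axis=0)

def pull(g):
    return float(embed(g) @ theta_true) + sigma * rng.normal()

A = lam * np.eye(d)   # posterior precision of the linear reward model
b = np.zeros(d)
for t in range(200):
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    # Thompson step: sample a plausible reward model from the posterior...
    theta_s = rng.multivariate_normal(theta_hat, sigma**2 * A_inv)
    # ...and play the arm that this sample rates highest.
    i = max(range(len(candidates)),
            key=lambda j: float(embed(candidates[j]) @ theta_s))
    x, r = embed(candidates[i]), pull(candidates[i])
    A += np.outer(x, x)   # Bayesian linear update with the new observation
    b += r * x
print("arm chosen at the final round:", i)
```

Sampling from the posterior, rather than acting on the mean estimate, is what drives exploration in Thompson Sampling; GNN-TS applies the same principle with a GNN in place of the linear surrogate used here.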