A Biased Graph Neural Network Sampler with Near-Optimal Regret
- URL: http://arxiv.org/abs/2103.01089v1
- Date: Mon, 1 Mar 2021 15:55:58 GMT
- Title: A Biased Graph Neural Network Sampler with Near-Optimal Regret
- Authors: Qingru Zhang, David Wipf, Quan Gan and Le Song
- Abstract summary: Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that adds a controlled degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
- Score: 57.70126763759996
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNN) have recently emerged as a vehicle for applying
deep network architectures to graph and relational data. However, given the
increasing size of industrial datasets, in many practical situations, the
message passing computations required for sharing information across GNN layers
are no longer scalable. Although various sampling methods have been introduced
to approximate full-graph training within a tractable budget, there remain
unresolved complications such as high variance and limited theoretical
guarantees. To address these issues, we build upon existing work and treat GNN
neighbor sampling as a multi-armed bandit problem, but with a newly designed
reward function that introduces a controlled degree of bias to reduce variance
and avoid unstable, possibly unbounded payouts. Unlike prior bandit-GNN use
cases, the resulting policy leads to near-optimal regret while accounting for
the GNN training dynamics introduced by SGD. From a practical standpoint, this
translates into lower variance estimates and competitive or superior test
accuracy across several benchmarks.
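As a rough illustration of the bandit framing described in the abstract (not the paper's actual algorithm), the minimal sketch below treats the neighbors of one target node as the arms of an adversarial bandit and learns sampling weights with an EXP3-style policy. The embeddings, reward definition, and all hyperparameters are hypothetical stand-ins; the clipping mirrors only the abstract's idea of trading a bounded, biased payout for lower variance.

```python
import numpy as np

rng = np.random.default_rng(0)

n_neighbors = 20          # arms: the neighbors of a single target node
k = 5                     # sampling budget per training step
eta, gamma = 0.1, 0.1     # EXP3 learning rate and exploration rate
d = 8                     # embedding dimension (toy value)
H = rng.normal(size=(n_neighbors, d))   # fixed toy neighbor embeddings

weights = np.ones(n_neighbors)

for step in range(200):
    # EXP3-style policy: exponential weights mixed with uniform exploration,
    # which keeps every arm's probability bounded away from zero.
    probs = (1 - gamma) * weights / weights.sum() + gamma / n_neighbors

    # Sample k distinct neighbors according to the current policy.
    chosen = rng.choice(n_neighbors, size=k, replace=False, p=probs)

    # Importance-weighted aggregation estimate from the sampled neighbors,
    # as in sampling-based GNN training.
    est = (H[chosen] / (probs[chosen, None] * n_neighbors)).mean(axis=0)

    # Hypothetical reward: cosine alignment of each sampled neighbor with the
    # aggregation estimate, clipped to [0, 1]. Clipping biases the reward but
    # keeps payouts bounded, echoing the abstract's motivation.
    r = H[chosen] @ est / (np.linalg.norm(H[chosen], axis=1)
                           * np.linalg.norm(est) + 1e-12)
    r = np.clip(r, 0.0, 1.0)

    # Importance-weighted exponential update on the played arms only.
    weights[chosen] *= np.exp(eta * r / (probs[chosen] * k))
    weights /= weights.max()   # rescale for numerical stability

print("top-5 neighbors by learned weight:", np.argsort(-weights)[:5])
```

The uniform-exploration mixture is a standard EXP3 safeguard; it is included here only to keep the importance weights bounded and is not claimed to match the paper's policy.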
Related papers
- Positional Encoder Graph Quantile Neural Networks for Geographic Data [4.277516034244117]
We introduce the Positional Encoder Graph Quantile Neural Network (PE-GQNN), a novel method that integrates PE-GNNs, Quantile Neural Networks, and recalibration techniques in a fully nonparametric framework.
Experiments on benchmark datasets demonstrate that PE-GQNN significantly outperforms existing state-of-the-art methods in both predictive accuracy and uncertainty quantification.
arXiv Detail & Related papers (2024-09-27T16:02:12Z)
- Uncertainty in Graph Neural Networks: A Survey [50.63474656037679]
Graph Neural Networks (GNNs) have been extensively used in various real-world applications.
However, the predictive uncertainty of GNNs stemming from diverse sources can lead to unstable and erroneous predictions.
This survey aims to provide a comprehensive overview of GNNs from the perspective of uncertainty.
arXiv Detail & Related papers (2024-03-11T21:54:52Z)
- Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks [40.95782849532316]
We propose a novel training framework designed to improve intrinsic GNN uncertainty estimates.
Our framework adapts the principle of centering data to graph data through novel graph anchoring strategies.
Our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
arXiv Detail & Related papers (2024-01-07T00:58:33Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble training scheme, named EnGCN, to address these challenges.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training Data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In such situations, a GNN fails at its distributed task if this topological randomness is not properly accounted for.
arXiv Detail & Related papers (2020-06-04T08:00:00Z)
- GraN: An Efficient Gradient-Norm Based Detector for Adversarial and Misclassified Examples [77.99182201815763]
Deep neural networks (DNNs) are vulnerable to adversarial examples and other data perturbations.
GraN is a time- and parameter-efficient method that is easily adaptable to any DNN.
GraN achieves state-of-the-art performance on numerous problem set-ups.
arXiv Detail & Related papers (2020-04-20T10:09:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.