Bayesian Graph Neural Networks with Adaptive Connection Sampling
- URL: http://arxiv.org/abs/2006.04064v3
- Date: Tue, 30 Jun 2020 22:59:55 GMT
- Title: Bayesian Graph Neural Networks with Adaptive Connection Sampling
- Authors: Arman Hasanzadeh, Ehsan Hajiramezanali, Shahin Boluki, Mingyuan Zhou,
Nick Duffield, Krishna Narayanan, Xiaoning Qian
- Abstract summary: We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
- Score: 62.51689735630133
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a unified framework for adaptive connection sampling in graph
neural networks (GNNs) that generalizes existing stochastic regularization
methods for training GNNs. The proposed framework not only alleviates
over-smoothing and over-fitting tendencies of deep GNNs, but also enables
learning with uncertainty in graph analytic tasks with GNNs. Instead of using
fixed sampling rates or hand-tuning them as model hyperparameters in existing
stochastic regularization methods, our adaptive connection sampling can be
trained jointly with GNN model parameters in both global and local fashions.
GNN training with adaptive connection sampling is shown to be mathematically
equivalent to an efficient approximation of training Bayesian GNNs.
Experimental results with ablation studies on benchmark datasets validate that
adaptively learning the sampling rate given graph training data is the key to
boosting the performance of GNNs in semi-supervised node classification, making
them less prone to over-smoothing and over-fitting while yielding more robust
predictions.
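To make the mechanism concrete, here is a minimal PyTorch sketch of connection sampling with a trainable drop rate: a concrete (Gumbel-sigmoid) relaxation of Bernoulli masks lets the rate receive gradients, so it can be optimized jointly with the GNN weights as the abstract describes. The module name and layout are illustrative assumptions, not the authors' code, and the paper's full Bayesian treatment of the masks is not reproduced here.

```python
import torch
import torch.nn as nn

class AdaptiveEdgeDrop(nn.Module):
    """Drop graph connections with a learnable rate instead of a fixed,
    hand-tuned hyperparameter. A concrete (Gumbel-sigmoid) relaxation of
    the Bernoulli mask keeps sampling differentiable, so the drop rate
    trains jointly with the GNN parameters. Illustrative sketch only."""

    def __init__(self, init_drop=0.5, temperature=0.67):
        super().__init__()
        # Unconstrained parameter; sigmoid maps it to a drop probability.
        self.logit = nn.Parameter(torch.logit(torch.tensor(init_drop)))
        self.temperature = temperature

    def forward(self, edge_weight):
        drop_p = torch.sigmoid(self.logit)
        if self.training:
            # Relaxed Bernoulli(1 - drop_p) sample for each connection.
            u = torch.rand_like(edge_weight).clamp(1e-6, 1 - 1e-6)
            noise = torch.log(u) - torch.log(1 - u)
            keep = (torch.logit(1 - drop_p) + noise) / self.temperature
            mask = torch.sigmoid(keep)
        else:
            mask = 1.0 - drop_p  # expected mask at test time
        return edge_weight * mask
```

Freezing `self.logit` recovers fixed-rate stochastic regularizers such as DropEdge; letting the optimizer update it is the adaptive part, and drawing several masks at test time gives Monte Carlo predictive samples in the spirit of the Bayesian view above.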
Related papers
- DFA-GNN: Forward Learning of Graph Neural Networks by Direct Feedback Alignment [57.62885438406724]
Graph neural networks are recognized for their strong performance across various applications.
Backpropagation (BP) has limitations that challenge its biological plausibility and affect the efficiency, scalability, and parallelism of training neural networks for graph-based tasks.
We propose DFA-GNN, a novel forward learning framework tailored for GNNs with a case study of semi-supervised learning.
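For intuition, plain direct feedback alignment (the principle DFA-GNN builds on) replaces backpropagation by sending the output error to each hidden layer through a fixed random matrix. The following is a generic single-hidden-layer DFA step, a sketch of the principle rather than the paper's GNN-specific scheme; all names and shapes are hypothetical.

```python
import torch

def dfa_step(x, y, W1, W2, B, lr=0.01):
    """One direct feedback alignment (DFA) update for a tanh hidden layer.
    The output error reaches the hidden layer through a fixed random
    matrix B instead of being backpropagated through W2."""
    h = torch.tanh(x @ W1)           # hidden activation
    y_hat = h @ W2                   # linear output
    e = y_hat - y                    # output error (squared-loss gradient)
    delta_h = (e @ B) * (1 - h**2)   # random projection, gated by tanh'
    W2 -= lr * h.T @ e               # local, backprop-free updates
    W1 -= lr * x.T @ delta_h
    return y_hat
```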
arXiv Detail & Related papers (2024-06-04T07:24:51Z)
- Online GNN Evaluation Under Test-time Graph Distribution Shifts [92.4376834462224]
A new research problem, online GNN evaluation, aims to provide valuable insights into well-trained GNNs' ability to generalize to real-world unlabeled graphs.
We develop an effective learning behavior discrepancy score, dubbed LeBeD, to estimate the test-time generalization errors of well-trained GNN models.
arXiv Detail & Related papers (2024-03-15T01:28:08Z)
- Accurate and Scalable Estimation of Epistemic Uncertainty for Graph Neural Networks [40.95782849532316]
We propose a novel training framework designed to improve intrinsic GNN uncertainty estimates.
Our framework adapts the principle of centering data to graph data through novel graph anchoring strategies.
Our work provides insights into uncertainty estimation for GNNs, and demonstrates the utility of G-$\Delta$UQ in obtaining reliable estimates.
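As a rough illustration of anchoring, the generic stochastic-centering recipe re-centers the input with random anchors and reads epistemic uncertainty from the spread of the resulting predictions. The sketch below is that generic recipe, not the paper's graph-specific anchoring strategies; in particular, a `model` that accepts concatenated [x - c, c] features is an assumption.

```python
import torch

def anchored_predict(model, x, anchors, n_samples=10):
    """Stochastic-centering-style uncertainty: subtract a random anchor c,
    pass [x - c, c] through the model, and repeat; the spread across
    anchors serves as an epistemic uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        c = anchors[torch.randint(len(anchors), (1,))]  # random anchor row
        preds.append(model(torch.cat([x - c, c.expand_as(x)], dim=-1)))
    preds = torch.stack(preds)
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty
```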
arXiv Detail & Related papers (2024-01-07T00:58:33Z)
- Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inferences on a vast amount of test data.
In this paper, we take a step forward in ensemble learning of GNNs, improving accuracy, robustness, and resilience to adversarial attacks.
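A bare-bones form of such an ensemble is majority voting over independently trained GNNs; the sketch below shows only the voting step and omits the paper's random-decision construction (analogous to random forests), which the summary above alludes to.

```python
import torch

def ensemble_predict(models, x, edge_index):
    """Majority vote over trained GNNs: each model maps node features
    and edges to per-node class scores."""
    votes = torch.stack([m(x, edge_index).argmax(dim=-1) for m in models])
    return votes.mode(dim=0).values  # most common class per node
```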
arXiv Detail & Related papers (2023-03-20T18:24:01Z)
- Distribution Free Prediction Sets for Node Classification [0.0]
We leverage recent advances in conformal prediction to construct prediction sets for node classification in inductive learning scenarios.
We show through experiments on standard benchmark datasets using popular GNN models that our approach provides tighter and better prediction sets than a naive application of conformal prediction.
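For reference, the split conformal recipe that such methods build on is short to state: calibrate a score threshold on held-out labeled nodes, then keep every class that clears it. A minimal NumPy sketch under the usual exchangeability assumption (the textbook construction, not the paper's inductive-specific one):

```python
import numpy as np

def conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction: the returned boolean mask marks, for
    each test node, the classes in its prediction set; the true class is
    covered with probability >= 1 - alpha under exchangeability."""
    n = len(cal_labels)
    # Nonconformity score: 1 - predicted probability of the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    q = np.quantile(scores, level, method="higher")
    return test_probs >= 1.0 - q  # one row of class indicators per node
```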
arXiv Detail & Related papers (2022-11-26T12:54:45Z)
- Graph Neural Network Based Node Deployment for Throughput Enhancement [20.56966053013759]
We propose a novel graph neural network (GNN) method for the network node deployment problem.
We show that an expressive GNN has the capacity to approximate both the function value and the traffic permutation, as theoretical support for the proposed method.
arXiv Detail & Related papers (2022-08-19T08:06:28Z)
- Stochastic Graph Neural Networks [123.39024384275054]
Graph neural networks (GNNs) model nonlinear representations in graph data with applications in distributed agent coordination, control, and planning.
Current GNN architectures assume ideal scenarios and ignore link fluctuations that occur due to environment, human factors, or external attacks.
In these situations, the GNN fails at its distributed task unless the topological randomness is taken into account.
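A simple way to account for such randomness during training is to sample a perturbed topology at every step, so the GNN is optimized in expectation over link failures. A minimal sketch assuming a PyG-style `edge_index` of shape (2, E); the paper's stochastic graph filter formulation is not reproduced here.

```python
import torch

def sample_link_failures(edge_index, keep_prob=0.9):
    """Emulate random link fluctuations: each edge survives independently
    with probability keep_prob, giving a fresh perturbed topology."""
    mask = torch.rand(edge_index.size(1)) < keep_prob
    return edge_index[:, mask]

# Hypothetical training step over random topologies:
# out = model(data.x, sample_link_failures(data.edge_index))
# loss = criterion(out[data.train_mask], data.y[data.train_mask])
```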
arXiv Detail & Related papers (2020-06-04T08:00:00Z)