Subgroup Generalization and Fairness of Graph Neural Networks
- URL: http://arxiv.org/abs/2106.15535v1
- Date: Tue, 29 Jun 2021 16:13:41 GMT
- Title: Subgroup Generalization and Fairness of Graph Neural Networks
- Authors: Jiaqi Ma, Junwei Deng, Qiaozhu Mei
- Abstract summary: We present a novel PAC-Bayesian analysis for GNNs under a non-IID semi-supervised learning setup.
We further study an accuracy-(dis)parity-style (un)fairness of GNNs from a theoretical perspective.
- Score: 12.88476464580968
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the enormous success of recent graph neural network (GNN)
applications, theoretical understanding of their generalization ability,
especially for node-level tasks where data are not independent and identically
distributed (IID), has been sparse. The theoretical investigation
of the generalization performance is beneficial for understanding fundamental
issues (such as fairness) of GNN models and designing better learning methods.
In this paper, we present a novel PAC-Bayesian analysis for GNNs under a
non-IID semi-supervised learning setup. Moreover, we analyze the generalization
performance on different subgroups of unlabeled nodes, which allows us to
further study an accuracy-(dis)parity-style (un)fairness of GNNs from a
theoretical perspective. Under reasonable assumptions, we demonstrate that the
distance between a test subgroup and the training set can be a key factor
affecting the GNN performance on that subgroup, which calls special attention
to the training node selection for fair learning. Experiments across multiple
GNN models and datasets support our theoretical results.
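Below is a minimal sketch of the kind of subgroup analysis the abstract describes: group unlabeled nodes by their hop distance to the nearest training node and report accuracy per group. The hop-distance criterion, the toy graph, the placeholder predictor, and all function names are illustrative assumptions, not the paper's exact procedure or distance measure.
```python
# Hedged sketch: accuracy of a node classifier, broken down by how far
# (in hops) each test node is from the training set. Toy data throughout.
from collections import deque
import random

import networkx as nx


def hop_distance_to_training_set(graph, train_nodes):
    """Multi-source BFS: hops from every node to its nearest training node."""
    dist = {v: None for v in graph.nodes}
    queue = deque()
    for v in train_nodes:
        dist[v] = 0
        queue.append(v)
    while queue:
        u = queue.popleft()
        for w in graph.neighbors(u):
            if dist[w] is None:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist


def accuracy_by_subgroup(graph, train_nodes, labels, predictions):
    """Group unlabeled nodes by hop distance and compute accuracy per group."""
    dist = hop_distance_to_training_set(graph, train_nodes)
    groups = {}
    for v in graph.nodes:
        if v in train_nodes or dist[v] is None:
            continue
        groups.setdefault(dist[v], []).append(v)
    return {
        d: sum(predictions[v] == labels[v] for v in nodes) / len(nodes)
        for d, nodes in sorted(groups.items())
    }


if __name__ == "__main__":
    random.seed(0)
    g = nx.connected_watts_strogatz_graph(200, k=6, p=0.1, seed=0)
    labels = {v: v % 2 for v in g.nodes}                 # toy labels
    train = set(random.sample(list(g.nodes), 20))        # toy training set
    preds = {v: random.randint(0, 1) for v in g.nodes}   # placeholder predictor
    print(accuracy_by_subgroup(g, train, labels, preds))
```
With a real GNN in place of the placeholder predictor, a drop in accuracy as the hop distance grows would be consistent with the paper's claim that distance between a test subgroup and the training set drives subgroup disparity.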
Related papers
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z) - A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
Graph Neural Networks (GNNs) combine information from adjacent nodes by successive applications of graph convolutions.
We study the generalization gaps of GNNs on both node-level and graph-level tasks.
We show that the generalization gaps decrease with the number of nodes in the training graphs.
arXiv Detail & Related papers (2024-06-07T19:25:02Z) - Label Deconvolution for Node Representation Learning on Large-scale
Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z) - Towards Understanding the Generalization of Graph Neural Networks [9.217947432437546]
Graph neural networks (GNNs) are the most widely adopted models for learning and representation on graph-structured data.
We first establish high-probability bounds on the generalization gap and gradients in transductive learning.
The theoretical results reveal the architecture-specific factors affecting the generalization gap.
arXiv Detail & Related papers (2023-05-14T03:05:14Z) - GNN-Ensemble: Towards Random Decision Graph Neural Networks [3.7620848582312405]
Graph Neural Networks (GNNs) have enjoyed widespread application to graph-structured data.
GNNs are required to learn latent patterns from a limited amount of training data to perform inference on a vast amount of test data.
In this paper, we push ensemble learning of GNNs one step forward, with improved accuracy, robustness, and resilience to adversarial attacks.
arXiv Detail & Related papers (2023-03-20T18:24:01Z) - Graph Neural Networks are Inherently Good Generalizers: Insights by
Bridging GNNs and MLPs [71.93227401463199]
This paper attributes the major source of GNNs' performance gain to their intrinsic generalization capability by introducing an intermediate model class dubbed P(ropagational)MLP.
We observe that PMLPs consistently perform on par with (or even exceed) their GNN counterparts, while being much more efficient to train (a minimal sketch of this idea appears at the end of this listing).
arXiv Detail & Related papers (2022-12-18T08:17:32Z) - Optimization of Graph Neural Networks: Implicit Acceleration by Skip
Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the training dynamics of GNNs, focusing on how skip connections and increased depth affect optimization.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z) - Optimization and Generalization Analysis of Transduction through
Gradient Boosting and Application to Multi-scale Graph Neural Networks [60.22494363676747]
It is known that current graph neural networks (GNNs) are difficult to make deep due to the problem known as over-smoothing.
Multi-scale GNNs are a promising approach for mitigating the over-smoothing problem.
We derive the optimization and generalization guarantees of transductive learning algorithms that include multi-scale GNNs.
arXiv Detail & Related papers (2020-06-15T17:06:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
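For the "Bridging GNNs and MLPs" entry above, here is a hedged, minimal sketch of the PMLP idea as I understand it: train a plain MLP on node features, then add a simple mean-over-neighbors propagation step only at inference time. The layer sizes, the toy data, and the placement of the propagation step are assumptions for illustration, not that paper's released code.
```python
# Hedged PMLP-style sketch: an MLP trained without the graph, with optional
# neighbor averaging switched on only at evaluation time.
import torch
import torch.nn as nn


class PMLP(nn.Module):
    """MLP trained on raw features; optional mean-neighbor propagation at inference."""

    def __init__(self, in_dim, hidden_dim, n_classes):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, n_classes)
        )

    def forward(self, x, adj=None):
        h = self.mlp(x)
        if adj is not None:  # message passing applied only when an adjacency is given
            deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
            h = (adj @ h) / deg  # mean aggregation over neighbors
        return h


# Toy usage: fit on features alone, then evaluate with graph propagation enabled.
torch.manual_seed(0)
n_nodes, n_feats, n_classes = 100, 16, 3
x = torch.randn(n_nodes, n_feats)
y = torch.randint(0, n_classes, (n_nodes,))
adj = (torch.rand(n_nodes, n_nodes) < 0.05).float()
adj = ((adj + adj.T) > 0).float()  # symmetrize the random adjacency

model = PMLP(n_feats, 32, n_classes)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    optimizer.zero_grad()
    loss = nn.functional.cross_entropy(model(x), y)  # training ignores the graph
    loss.backward()
    optimizer.step()

with torch.no_grad():
    logits_with_propagation = model(x, adj)  # propagation switched on at test time
```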