Towards Label Position Bias in Graph Neural Networks
- URL: http://arxiv.org/abs/2305.15822v1
- Date: Thu, 25 May 2023 08:06:42 GMT
- Title: Towards Label Position Bias in Graph Neural Networks
- Authors: Haoyu Han, Xiaorui Liu, Feng Shi, MohamadAli Torkamani, Charu C.
Aggarwal, Jiliang Tang
- Abstract summary: Graph Neural Networks (GNNs) have emerged as a powerful tool for semi-supervised node classification tasks.
Recent studies have revealed various biases in GNNs stemming from both node features and graph topology.
In this work, we uncover a new bias, label position bias, which indicates that nodes closer to the labeled nodes tend to perform better.
- Score: 47.39692033598877
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as a powerful tool for
semi-supervised node classification tasks. However, recent studies have
revealed various biases in GNNs stemming from both node features and graph
topology. In this work, we uncover a new bias, label position bias, which
indicates that nodes closer to the labeled nodes tend to perform better. We
introduce a new metric, the Label Proximity Score, to quantify this bias, and
find that it is closely related to performance disparities. To address the
label position bias, we propose a novel optimization framework for learning a
label position unbiased graph structure, which can be applied to existing GNNs.
Extensive experiments demonstrate that our proposed method not only outperforms
backbone methods but also significantly mitigates the issue of label position
bias in GNNs.
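The abstract does not spell out how the Label Proximity Score is computed. The sketch below is a rough illustration only: it assumes the score aggregates personalized PageRank mass that the labeled set places on each node, and the function name, parameters, and this choice of proximity measure are illustrative rather than taken from the paper.
```python
# Hedged sketch: one plausible proximity-to-labels score via personalized PageRank.
# The paper's actual Label Proximity Score may be defined differently.
import numpy as np

def label_proximity_scores(adj, labeled_idx, alpha=0.15, iters=50):
    """adj: (n, n) adjacency matrix; labeled_idx: indices of labeled nodes."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0                       # guard against isolated nodes
    P = adj / deg[:, None]                    # row-stochastic transition matrix
    r = np.zeros(adj.shape[0])
    r[np.asarray(labeled_idx)] = 1.0 / len(labeled_idx)  # restart on labeled nodes
    pi = r.copy()
    for _ in range(iters):                    # power iteration for personalized PageRank
        pi = alpha * r + (1 - alpha) * pi @ P
    return pi                                 # larger value = closer to the labeled set

# Toy usage on a 4-node path graph with node 0 labeled:
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
print(label_proximity_scores(A, labeled_idx=[0]))  # scores decay with distance from node 0
```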
Related papers
- Mitigating Degree Bias in Signed Graph Neural Networks [5.042342963087923]
Signed Graph Neural Networks (SGNNs) face fairness issues stemming from both the source data and the typical aggregation method.
In this paper, we pioneer the extension of fairness investigations from GNNs to SGNNs.
We identify the issue of degree bias within signed graphs, offering a new perspective on the fairness issues related to SGNNs.
arXiv Detail & Related papers (2024-08-16T03:22:18Z)
- GNNEvaluator: Evaluating GNN Performance On Unseen Graphs Without Labels [81.93520935479984]
We study a new problem, GNN model evaluation, that aims to assess the performance of a specific GNN model trained on labeled and observed graphs.
We propose a two-stage GNN model evaluation framework, including (1) DiscGraph set construction and (2) GNNEvaluator training and inference.
Under the effective training supervision from the DiscGraph set, GNNEvaluator learns to precisely estimate node classification accuracy of the to-be-evaluated GNN model.
arXiv Detail & Related papers (2023-10-23T05:51:59Z)
- UNREAL:Unlabeled Nodes Retrieval and Labeling for Heavily-imbalanced Node Classification [17.23736166919287]
Skewed label distributions are common in real-world node classification tasks.
In this paper, we propose UNREAL, an iterative over-sampling method.
arXiv Detail & Related papers (2023-03-18T09:23:13Z)
- Every Node Counts: Improving the Training of Graph Neural Networks on Node Classification [9.539495585692007]
We propose novel objective terms for the training of GNNs for node classification.
Our first term seeks to maximize the mutual information between node and label features.
Our second term promotes anisotropic smoothness in the prediction maps.
arXiv Detail & Related papers (2022-11-29T23:25:14Z)
- ResNorm: Tackling Long-tailed Degree Distribution Issue in Graph Neural Networks via Normalization [80.90206641975375]
This paper focuses on improving the performance of GNNs via normalization.
By studying the long-tailed distribution of node degrees in the graph, we propose a novel normalization method for GNNs.
The $scale$ operation of ResNorm reshapes the node-wise standard deviation (NStd) distribution so as to improve the accuracy of tail nodes.
arXiv Detail & Related papers (2022-06-16T13:49:09Z)
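The ResNorm entry above mentions a $scale$ operation that reshapes the node-wise standard deviation (NStd) distribution. The snippet below is a hedged sketch of that general idea, not ResNorm itself: it rescales each node's representation by a power of its NStd so that the NStd distribution is flattened and tail nodes are not dwarfed; the exponent p and the exact formula are assumptions.
```python
# Hedged sketch: flatten the node-wise standard deviation (NStd) distribution by
# rescaling each node with a power of its own NStd. ResNorm's actual scale step
# may differ; p and eps are illustrative choices.
import numpy as np

def scale_by_nstd(h, p=0.5, eps=1e-6):
    """h: node representations of shape (num_nodes, num_features)."""
    h = np.asarray(h, dtype=float)
    nstd = h.std(axis=1, keepdims=True) + eps   # node-wise standard deviation
    return h * nstd ** (p - 1)                  # new NStd is roughly nstd ** p

# Toy usage: two nodes whose NStds differ by 100x end up within roughly 10x.
h = [[10.0, -10.0, 5.0],
     [0.10, -0.10, 0.05]]
print(scale_by_nstd(h).std(axis=1))
```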
- Debiased Graph Neural Networks with Agnostic Label Selection Bias [59.61301255860836]
Most existing Graph Neural Networks (GNNs) are proposed without considering the selection bias in data.
We propose Debiased Graph Neural Networks (DGNN), equipped with a differentiated decorrelation regularizer.
Our proposed model outperforms state-of-the-art methods, and DGNN is a flexible framework for enhancing existing GNNs.
arXiv Detail & Related papers (2022-01-19T16:50:29Z)
- Label-Consistency based Graph Neural Networks for Semi-supervised Node Classification [47.753422069515366]
Graph neural networks (GNNs) achieve remarkable success in graph-based semi-supervised node classification.
In this paper, we propose the label-consistency based graph neural network (LC-GNN), which leverages node pairs that are unconnected but share the same label to enlarge the receptive field of nodes in GNNs.
Experiments on benchmark datasets demonstrate that the proposed LC-GNN outperforms traditional GNNs in graph-based semi-supervised node classification.
arXiv Detail & Related papers (2020-07-27T11:17:46Z)
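The LC-GNN entry above rests on connecting unconnected node pairs that share a label. The sketch below shows that augmentation step in its simplest form, assuming hard 0/1 edges between same-label training nodes; the actual LC-GNN construction (weighting, sampling, or soft consistency scores) may differ.
```python
# Hedged sketch: add edges between unconnected training nodes that share a label,
# so message passing can reach label-consistent but distant nodes. LC-GNN's
# actual construction may weight or sample these pairs differently.
import numpy as np

def add_label_consistent_edges(adj, labels, train_idx):
    """adj: (n, n) 0/1 adjacency; labels: (n,) class ids; train_idx: labeled node indices."""
    aug = np.asarray(adj, dtype=float).copy()
    train_idx = list(train_idx)
    for a, i in enumerate(train_idx):
        for j in train_idx[a + 1:]:
            if labels[i] == labels[j] and aug[i, j] == 0:
                aug[i, j] = aug[j, i] = 1.0     # connect the same-label, unconnected pair
    return aug

# Toy usage: nodes 0 and 3 share a label but are not adjacent, so an edge is added.
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
y = np.array([0, 1, 1, 0])
print(add_label_consistent_edges(A, y, train_idx=[0, 3]))
```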
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
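The entry above unifies GCN with the label propagation algorithm (LPA). For reference, here is a minimal sketch of plain LPA with fixed, row-normalized edge weights; the unified model in that paper instead learns edge weights jointly with a GCN, which this sketch does not attempt.
```python
# Minimal label propagation (LPA) sketch with fixed edge weights, for reference.
# The unified GCN+LPA model learns the weights; this is only the classic baseline.
import numpy as np

def label_propagation(adj, y_onehot, train_mask, iters=50):
    """adj: (n, n) adjacency; y_onehot: (n, c) one-hot labels; train_mask: (n,) bool."""
    adj = np.asarray(adj, dtype=float)
    deg = adj.sum(axis=1)
    deg[deg == 0] = 1.0
    P = adj / deg[:, None]                       # row-normalized propagation matrix
    F = np.where(train_mask[:, None], y_onehot, 0.0)
    for _ in range(iters):
        F = P @ F                                # diffuse label mass along edges
        F[train_mask] = y_onehot[train_mask]     # clamp the known labels each step
    return F.argmax(axis=1)                      # predicted class per node

# Toy usage on a 4-node path with the two endpoints labeled:
A = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
Y = np.eye(2)[[0, 0, 1, 1]]                      # one-hot classes
mask = np.array([True, False, False, True])      # only nodes 0 and 3 are labeled
print(label_propagation(A, Y, mask))             # expected: [0 0 1 1]
```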