On Generalized Degree Fairness in Graph Neural Networks
- URL: http://arxiv.org/abs/2302.03881v1
- Date: Wed, 8 Feb 2023 05:00:37 GMT
- Title: On Generalized Degree Fairness in Graph Neural Networks
- Authors: Zemin Liu, Trung-Kien Nguyen, Yuan Fang
- Abstract summary: We propose a novel GNN framework called Generalized Degree Fairness-centric Graph Neural Network (Deg-FairGNN)
Specifically, in each GNN layer, we employ a learnable debiasing function to generate debiasing contexts.
Extensive experiments on three benchmark datasets demonstrate the effectiveness of our model on both accuracy and fairness metrics.
- Score: 18.110053023118294
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Conventional graph neural networks (GNNs) are often confronted with fairness
issues that may stem from their input, including node attributes and neighbors
surrounding a node. While several recent approaches have been proposed to
eliminate the bias rooted in sensitive attributes, they ignore the other key
input of GNNs, namely the neighbors of a node, which can introduce bias since
GNNs hinge on neighborhood structures to generate node representations. In
particular, the varying neighborhood structures across nodes, manifesting
themselves in drastically different node degrees, give rise to the diverse
behaviors of nodes and biased outcomes. In this paper, we first define and
generalize the degree bias using a generalized definition of node degree as a
manifestation and quantification of different multi-hop structures around
different nodes. To address the bias in the context of node classification, we
propose a novel GNN framework called Generalized Degree Fairness-centric Graph
Neural Network (Deg-FairGNN). Specifically, in each GNN layer, we employ a
learnable debiasing function to generate debiasing contexts, which modulate the
layer-wise neighborhood aggregation to eliminate the degree bias originating
from the diverse degrees among nodes. Extensive experiments on three benchmark
datasets demonstrate the effectiveness of our model on both accuracy and
fairness metrics.
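The layer described in the abstract, in which learnable debiasing contexts modulate neighborhood aggregation to offset degree bias, can be sketched roughly as follows. This is a minimal NumPy illustration only: the mean aggregation, the scale/shift form of the modulation, the ReLU, and all names are assumptions, not the paper's actual formulation.

```python
import numpy as np

def degree_debias_layer(X, A, W, scale, shift):
    """One GNN layer with degree-dependent debiasing (illustrative sketch).

    X: (n, d) node features; A: (n, n) adjacency; W: (d, d_out) layer weights.
    scale/shift map node degree to a per-node debiasing context that
    modulates the aggregated neighborhood message.
    """
    deg = A.sum(axis=1, keepdims=True)            # node degrees, shape (n, 1)
    agg = (A @ X) / np.maximum(deg, 1.0)          # mean neighborhood aggregation
    # Debiasing context: rescale/shift the aggregate as a function of degree,
    # so high- and low-degree nodes receive signals of comparable magnitude.
    ctx = scale(deg) * agg + shift(deg)
    return np.maximum(ctx @ W, 0.0)               # linear transform + ReLU

# Toy example with hypothetical modulation functions.
rng = np.random.default_rng(0)
n, d = 5, 4
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0.0)                          # no self-loops
X = rng.standard_normal((n, d))
W = rng.standard_normal((d, 3))
H = degree_debias_layer(
    X, A, W,
    scale=lambda deg: 1.0 / np.log(deg + 2.0),    # damp high-degree nodes
    shift=lambda deg: 0.0 * deg,                  # no additive shift here
)
print(H.shape)  # (5, 3)
```

In the real model both the modulation and the aggregation are learned end to end per layer; here the modulation functions are fixed for illustration.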
Related papers
- Mitigating Degree Bias in Signed Graph Neural Networks [5.042342963087923]
Signed Graph Neural Networks (SGNNs) face fairness issues arising from both the source data and the typical aggregation method.
This paper extends the investigation of fairness from GNNs to SGNNs.
We identify the issue of degree bias within signed graphs, offering a new perspective on the fairness issues related to SGNNs.
arXiv Detail & Related papers (2024-08-16T03:22:18Z)
- Degree-based stratification of nodes in Graph Neural Networks [66.17149106033126]
We modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned separately for the nodes in each degree group.
This simple-to-implement modification seems to improve performance across datasets and GNN methods.
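Learning a separate weight matrix per degree group could look like the sketch below. The grouping boundaries, the mean aggregation, and all names are assumptions made for illustration; the paper's actual stratification scheme may differ.

```python
import numpy as np

def stratified_gnn_layer(X, A, W_groups, boundaries):
    """GNN layer with one weight matrix per degree group (sketch).

    boundaries: degree thresholds splitting nodes into len(W_groups) groups,
    e.g. [2, 4] -> groups deg<2, 2<=deg<4, deg>=4.
    """
    deg = A.sum(axis=1)
    agg = (A @ X) / np.maximum(deg[:, None], 1.0)   # mean aggregation
    group = np.digitize(deg, boundaries)            # group index per node
    H = np.empty((X.shape[0], W_groups[0].shape[1]))
    for g, W in enumerate(W_groups):
        mask = group == g
        H[mask] = agg[mask] @ W                     # group-specific transform
    return H

# Toy usage: 6 nodes, 3 degree groups.
rng = np.random.default_rng(1)
n, d = 6, 4
A = (rng.random((n, n)) < 0.5).astype(float)
np.fill_diagonal(A, 0.0)
X = rng.standard_normal((n, d))
W_groups = [rng.standard_normal((d, 2)) for _ in range(3)]
H = stratified_gnn_layer(X, A, W_groups, boundaries=[2, 4])
print(H.shape)  # (6, 2)
```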
arXiv Detail & Related papers (2023-12-16T14:09:23Z)
- The Snowflake Hypothesis: Training Deep GNN with One Node One Receptive Field [39.679151680622375]
We introduce the Snowflake Hypothesis, a novel paradigm underpinning the concept of "one node, one receptive field".
We employ the simplest gradient and node-level cosine distance as guiding principles to regulate the aggregation depth for each node.
The observational results demonstrate that our hypothesis can serve as a universal operator for a range of tasks.
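One way to read "node-level cosine distance regulating aggregation depth" is to stop expanding a node's receptive field once an extra hop barely changes its representation. The sketch below is an assumption-laden illustration of that idea (the convergence test, the threshold, and mean aggregation are not taken from the paper).

```python
import numpy as np

def cosine_distance(a, b, eps=1e-12):
    """Row-wise cosine distance between two (n, d) matrices."""
    num = (a * b).sum(axis=1)
    den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
    return 1.0 - num / den

def per_node_depth_aggregate(X, A, max_depth=3, tol=0.05):
    """Grow each node's receptive field hop by hop, freezing a node once an
    extra hop moves its representation by less than `tol` (sketch)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    H = X.copy()
    active = np.ones(X.shape[0], dtype=bool)   # nodes still expanding
    for _ in range(max_depth):
        H_next = (A @ H) / deg                 # one more hop of mean aggregation
        dist = cosine_distance(H, H_next)
        active &= dist > tol                   # stop nodes that have converged
        H[active] = H_next[active]
        if not active.any():
            break
    return H

# Toy usage: a 6-node ring.
A = np.zeros((6, 6))
for i in range(6):
    A[i, (i + 1) % 6] = A[(i + 1) % 6, i] = 1.0
rng = np.random.default_rng(2)
X = rng.standard_normal((6, 3))
H = per_node_depth_aggregate(X, A)
print(H.shape)  # (6, 3)
```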
arXiv Detail & Related papers (2023-08-19T15:21:12Z)
- Revisiting Heterophily For Graph Neural Networks [42.41238892727136]
Graph Neural Networks (GNNs) extend basic Neural Networks (NNs) by using graph structures based on the relational inductive bias (homophily assumption).
Recent work has identified a non-trivial set of datasets where their performance compared to NNs is not satisfactory.
arXiv Detail & Related papers (2022-10-14T08:00:26Z)
- A Variational Edge Partition Model for Supervised Graph Representation Learning [51.30365677476971]
This paper introduces a graph generative process to model how the observed edges are generated by aggregating the node interactions over a set of overlapping node communities.
We partition each edge into the summation of multiple community-specific weighted edges and use them to define community-specific GNNs.
A variational inference framework is proposed to jointly learn a GNN-based inference network that partitions the edges into different communities, the community-specific GNNs, and a GNN-based predictor that combines the community-specific GNNs for the final classification task.
arXiv Detail & Related papers (2022-02-07T14:37:50Z)
- Graph Neural Networks with Feature and Structure Aware Random Walk [7.143879014059894]
We show that in typical heterophilous graphs, the edges may be directed, and whether to treat the edges as is or simply make them undirected greatly affects the performance of GNN models.
We develop a model that adaptively learns the directionality of the graph, and exploits the underlying long-distance correlations between nodes.
arXiv Detail & Related papers (2021-11-19T08:54:21Z)
- Node-wise Localization of Graph Neural Networks [52.04194209002702]
Graph neural networks (GNNs) emerge as a powerful family of representation learning models on graphs.
We propose a node-wise localization of GNNs by accounting for both global and local aspects of the graph.
We conduct extensive experiments on four benchmark graphs, and consistently obtain promising performance surpassing the state-of-the-art GNNs.
arXiv Detail & Related papers (2021-10-27T10:02:03Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues which limit the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, i.e., differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
- Unifying Graph Convolutional Neural Networks and Label Propagation [73.82013612939507]
We study the relationship between LPA and GCN in terms of two aspects: feature/label smoothing and feature/label influence.
Based on our theoretical analysis, we propose an end-to-end model that unifies GCN and LPA for node classification.
Our model can also be seen as learning attention weights based on node labels, which is more task-oriented than existing feature-based attention models.
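For context, the LPA half of the unified model refers to classic label propagation: labels are iteratively smoothed over neighbors while known labels are clamped. The sketch below shows only that standalone component, not the paper's joint GCN-LPA model; the iteration count and clamping scheme are assumptions.

```python
import numpy as np

def label_propagation(A, Y, mask, n_iter=10):
    """Plain label propagation: repeatedly average neighbor label
    distributions, clamping the labeled nodes each round (sketch)."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1.0)
    F = Y.astype(float).copy()
    for _ in range(n_iter):
        F = (A @ F) / deg          # label smoothing over neighbors
        F[mask] = Y[mask]          # clamp the known labels
    return F

# Toy usage: path graph 0-1-2-3; nodes 0 and 1 are labeled.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Y = np.array([[1, 0], [0, 1], [0, 0], [0, 0]], dtype=float)
mask = np.array([True, True, False, False])
F = label_propagation(A, Y, mask)
print(F.shape)  # (4, 2)
```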
arXiv Detail & Related papers (2020-02-17T03:23:13Z)
- Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework as Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes.
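The pairwise-interaction term that augments the weighted sum can be computed efficiently with the identity sum_{i<j} x_i x_j = ((sum x)^2 - sum x^2) / 2, applied elementwise over each node's neighbors. The sketch below shows only that aggregator, under the assumption that BGNN uses elementwise products of neighbor representations; the full model also includes the standard weighted-sum path.

```python
import numpy as np

def bilinear_aggregate(X, A):
    """For each node, sum the elementwise products h_i * h_j over all
    unordered pairs (i, j) of its neighbors (sketch)."""
    S = A @ X                      # sum of neighbor features
    Q = A @ (X * X)                # sum of squared neighbor features
    return 0.5 * (S * S - Q)       # elementwise pairwise interactions

# Toy check: node 0 has neighbors 1 and 2.
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)
X = np.array([[0.0, 0.0],
              [1.0, 2.0],
              [3.0, 4.0]])
out = bilinear_aggregate(X, A)
print(out[0])  # [3. 8.]  = [1*3, 2*4], the single neighbor pair of node 0
```

A node with a single neighbor has no pairs, so its bilinear aggregate is zero, which the identity handles automatically.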
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.