Endowing Pre-trained Graph Models with Provable Fairness
- URL: http://arxiv.org/abs/2402.12161v2
- Date: Tue, 20 Feb 2024 09:03:43 GMT
- Title: Endowing Pre-trained Graph Models with Provable Fairness
- Authors: Zhongjian Zhang, Mengmei Zhang, Yue Yu, Cheng Yang, Jiawei Liu and
Chuan Shi
- Abstract summary: We propose a novel adapter-tuning framework that endows pre-trained graph models with provable fairness, called GraphPAR.
Specifically, we design a sensitive semantic augmenter on node representations to extend each node's representation with different sensitive attribute semantics.
With GraphPAR, we quantify whether the fairness of each node is provable, i.e., whether predictions are always fair within a certain range of sensitive attribute semantics.
- Score: 49.8431177748876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Pre-trained graph models (PGMs) aim to capture transferable inherent
structural properties and apply them to different downstream tasks. Similar to
pre-trained language models, PGMs also inherit biases from human society,
resulting in discriminatory behavior in downstream applications. The debiasing
process of existing fair methods is generally coupled with parameter
optimization of GNNs. However, since different downstream tasks may be associated
with different sensitive attributes in reality, directly employing existing
methods to improve the fairness of PGMs is inflexible and inefficient.
Moreover, most of them lack a theoretical guarantee, i.e., provable lower
bounds on the fairness of model predictions, which directly provides assurance
in a practical scenario. To overcome these limitations, we propose a novel
adapter-tuning framework that endows pre-trained graph models with provable
fairness (called GraphPAR). GraphPAR freezes the parameters of PGMs and trains
a parameter-efficient adapter to flexibly improve the fairness of PGMs in
downstream tasks. Specifically, we design a sensitive semantic augmenter on
node representations, to extend the node representations with different
sensitive attribute semantics for each node. The extended representations will
be used to further train an adapter, to prevent the propagation of sensitive
attribute semantics from PGMs to task predictions. Furthermore, with GraphPAR,
we quantify whether the fairness of each node is provable, i.e., predictions
are always fair within a certain range of sensitive attribute semantics.
Experimental evaluations on real-world datasets demonstrate that GraphPAR
achieves state-of-the-art prediction performance and fairness on node
classification tasks. Furthermore, with GraphPAR, around 90% of nodes have
provable fairness.
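The certification idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the linear adapter, the linear classifier head, the estimated sensitive-semantic direction `d`, and all shapes are assumptions made for illustration. For a linear adapter and head, the logits are affine in the shift `t` along `d`, so checking the predicted class at the interval endpoints certifies every `t` in between.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: the frozen PGM yields 16-d node embeddings.
D = 16
h = rng.normal(size=D)                           # frozen node representation from the PGM
d = rng.normal(size=D)
d /= np.linalg.norm(d)                           # assumed sensitive-semantic direction (unit norm)

# Parameter-efficient adapter (a single linear layer here) and a 2-class linear head.
W_a = np.eye(D) + 0.01 * rng.normal(size=(D, D))
W_c = rng.normal(size=(2, D))

def logits(t):
    """Logits after shifting the embedding by t along the sensitive direction."""
    return W_c @ (W_a @ (h + t * d))

def provably_fair(eps):
    """Because logits are affine in t for a linear adapter+head, the class margins
    are affine too: if the t=0 prediction also wins at both endpoints -eps and +eps,
    it wins for every t in [-eps, eps]."""
    pred = np.argmax(logits(0.0))
    return all(np.argmax(logits(t)) == pred for t in (-eps, eps))
```

In GraphPAR's setting the adapter is trained so that this check passes for as many nodes as possible; here the check itself is the point, not the training loop.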
Related papers
- HG-Adapter: Improving Pre-Trained Heterogeneous Graph Neural Networks with Dual Adapters [53.97380482341493]
The "pre-train, prompt-tuning" paradigm has demonstrated impressive performance for tuning pre-trained heterogeneous graph neural networks (HGNNs).
We propose a unified framework that combines two new adapters with potential labeled data extension to improve the generalization of pre-trained HGNN models.
arXiv Detail & Related papers (2024-11-02T06:43:54Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
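The aggregation-then-mitigation idea above can be illustrated with a toy center-pulling step. This is a deliberate simplification, not FMP's actual optimization: the shapes, the binary sensitive attribute, and the shrinkage factor `lam` are all assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical: 6 nodes with 4-d representations and a binary sensitive attribute.
H = rng.normal(size=(6, 4))
s = np.array([0, 0, 0, 1, 1, 1])

def push_centers_together(H, s, lam=0.5):
    """One illustrative debiasing step: shift each demographic group's
    representations toward the overall mean, shrinking the gap between
    group centers by a factor of (1 - lam)."""
    H = H.copy()
    mu = H.mean(axis=0)
    for g in np.unique(s):
        mask = s == g
        H[mask] += lam * (mu - H[mask].mean(axis=0))
    return H

H2 = push_centers_together(H, s)
gap_before = np.linalg.norm(H[s == 0].mean(0) - H[s == 1].mean(0))
gap_after = np.linalg.norm(H2[s == 0].mean(0) - H2[s == 1].mean(0))
```

Since each group center moves as `c_g + lam * (mu - c_g)`, the center gap contracts exactly by `1 - lam` per step; FMP instead folds this objective into message passing itself.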
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Domain-wise Invariant Learning for Panoptic Scene Graph Generation [26.159312466958]
Panoptic Scene Graph Generation (PSG) involves the detection of objects and the prediction of their corresponding relationships (predicates)
The presence of biased predicate annotations poses a significant challenge for PSG models, as it hinders their ability to establish a clear decision boundary among different predicates.
We propose a novel framework to infer potentially biased annotations by measuring the predicate prediction risks within each subject-object pair.
arXiv Detail & Related papers (2023-10-09T17:03:39Z) - G-Adapter: Towards Structure-Aware Parameter-Efficient Transfer Learning
for Graph Transformer Networks [0.7118812771905295]
We show that it is sub-optimal to directly transfer existing PEFTs to graph-based tasks due to the issue of feature distribution shift.
We propose a novel structure-aware PEFT approach, named G-Adapter, to guide the updating process.
Extensive experiments demonstrate that G-Adapter obtains the state-of-the-art performance compared to the counterparts on nine graph benchmark datasets.
arXiv Detail & Related papers (2023-05-17T16:10:36Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous
Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN) based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Adaptive Graph-Based Feature Normalization for Facial Expression
Recognition [1.2246649738388389]
We propose an Adaptive Graph-based Feature Normalization (AGFN) method to protect Facial Expression Recognition models from data uncertainties.
Our method outperforms state-of-the-art works with accuracies of 91.84% and 91.11% on benchmark datasets.
arXiv Detail & Related papers (2022-07-22T14:57:56Z) - From Spectral Graph Convolutions to Large Scale Graph Convolutional
Networks [0.0]
Graph Convolutional Networks (GCNs) have been shown to be a powerful concept that has been successfully applied to a large variety of tasks.
We study the theory that paved the way to the definition of GCN, including related parts of classical graph theory.
arXiv Detail & Related papers (2022-07-12T16:57:08Z) - Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which considers the biases caused by the above factors.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.