ELEGANT: Certified Defense on the Fairness of Graph Neural Networks
- URL: http://arxiv.org/abs/2311.02757v1
- Date: Sun, 5 Nov 2023 20:29:40 GMT
- Title: ELEGANT: Certified Defense on the Fairness of Graph Neural Networks
- Authors: Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li
- Abstract summary: Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers can easily corrupt the fairness level of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
- Score: 94.10433608311604
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as a prominent graph learning model
in various graph-based tasks over the years. Nevertheless, due to the
vulnerabilities of GNNs, it has been empirically shown that malicious
attackers can easily corrupt the fairness level of their predictions by
adding perturbations to the input graph data. In this paper, we take crucial
steps to study a novel problem of certifiable defense on the fairness level of
GNNs. Specifically, we propose a principled framework named ELEGANT and present
a detailed theoretical certification analysis for the fairness of GNNs. ELEGANT
takes any GNN as its backbone, and the fairness level of such a backbone
provably cannot be corrupted under certain perturbation budgets for
attackers. Notably, ELEGANT makes no assumptions about the GNN structure
or parameters, and does not require re-training the GNNs to realize
certification. Hence it can serve as a plug-and-play framework for any
optimized GNNs ready to be deployed. We verify the satisfactory effectiveness
of ELEGANT in practice through extensive experiments on real-world datasets
across different backbones of GNNs, where ELEGANT is also demonstrated to be
beneficial for GNN debiasing. Open-source code can be found at
https://github.com/yushundong/ELEGANT.
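To make the plug-and-play idea concrete, below is a minimal sketch, not the authors' implementation, assuming a randomized-smoothing-style wrapper around an already-trained GNN: the frozen predictor `gnn_predict`, the noise scale, the sample count, and the toy data are hypothetical placeholders, and statistical parity difference is used as one example of a fairness level.

```python
# Illustrative-only sketch: wrap a frozen, pre-trained GNN predictor with input
# randomization and evaluate a group-fairness metric on the smoothed predictions.
# All names and constants here are assumptions for illustration, not ELEGANT itself.
import numpy as np

def gnn_predict(x):
    """Stand-in for any deployed, already-optimized GNN backbone.

    Maps node features of shape (num_nodes, num_feats) to binary predictions;
    no re-training of the backbone is assumed.
    """
    return (x.sum(axis=1) > 0).astype(int)

def statistical_parity_difference(pred, sensitive):
    """|P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)| for a binary sensitive attribute."""
    return abs(pred[sensitive == 0].mean() - pred[sensitive == 1].mean())

def smoothed_fairness(x, sensitive, sigma=0.5, n_samples=200, seed=0):
    """Majority-vote predictions under Gaussian feature noise, then the
    fairness metric of the resulting smoothed classifier."""
    rng = np.random.default_rng(seed)
    votes = np.zeros(x.shape[0])
    for _ in range(n_samples):
        votes += gnn_predict(x + rng.normal(0.0, sigma, size=x.shape))
    smoothed_pred = (votes / n_samples > 0.5).astype(int)
    return statistical_parity_difference(smoothed_pred, sensitive)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = rng.normal(size=(100, 16))        # toy node features
    s = rng.integers(0, 2, size=100)      # toy binary sensitive attribute
    print("smoothed statistical parity gap:", smoothed_fairness(x, s))
```

The actual certification in the paper further bounds how much this fairness level can change under any graph perturbation within a given attacker budget; that analysis is not reproduced in this sketch.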
Related papers
- IDEA: A Flexible Framework of Certified Unlearning for Graph Neural Networks [68.6374698896505]
Graph Neural Networks (GNNs) have been increasingly deployed in a plethora of applications.
Privacy leakage may happen when the trained GNNs are deployed and exposed to potential attackers.
We propose a principled framework named IDEA to achieve flexible and certified unlearning for GNNs.
arXiv Detail & Related papers (2024-07-28T04:59:59Z)
- Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections [28.86365261170078]
Research has revealed the fairness vulnerabilities in Graph Neural Networks (GNNs) when facing malicious adversarial attacks.
We introduce a Node Injection-based Fairness Attack (NIFA) to explore the vulnerability of GNN fairness in this more realistic setting.
NIFA can significantly undermine the fairness of mainstream GNNs, even including fairness-aware GNNs, by injecting merely 1% of nodes.
arXiv Detail & Related papers (2024-06-05T08:26:53Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have made rapid developments in the recent years.
GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Optimization of Graph Neural Networks: Implicit Acceleration by Skip Connections and More Depth [57.10183643449905]
Graph Neural Networks (GNNs) have been studied from the lens of expressive power and generalization.
We study the optimization dynamics of GNNs, focusing on the effect of skip connections and depth.
Our results provide the first theoretical support for the success of GNNs.
arXiv Detail & Related papers (2021-05-10T17:59:01Z)
- Say No to the Discrimination: Learning Fair Graph Neural Networks with Limited Sensitive Attribute Information [37.90997236795843]
Graph neural networks (GNNs) have shown great power in modeling graph structured data.
GNNs may make predictions that are biased with respect to protected sensitive attributes, e.g., skin color and gender.
We propose FairGNN to eliminate the bias of GNNs whilst maintaining high node classification accuracy.
arXiv Detail & Related papers (2020-09-03T05:17:30Z)