Adversarial Attacks on Fairness of Graph Neural Networks
- URL: http://arxiv.org/abs/2310.13822v2
- Date: Sun, 3 Mar 2024 01:38:40 GMT
- Title: Adversarial Attacks on Fairness of Graph Neural Networks
- Authors: Binchi Zhang, Yushun Dong, Chen Chen, Yada Zhu, Minnan Luo, Jundong Li
- Abstract summary: Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
- Score: 63.155299388146176
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fairness-aware graph neural networks (GNNs) have gained a surge of attention
as they can reduce the bias of predictions on any demographic group (e.g.,
female) in graph-based applications. Although these methods greatly improve the
algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully
designed adversarial attacks. In this paper, we investigate the problem of
adversarial attacks on fairness of GNNs and propose G-FairAttack, a general
framework for attacking various types of fairness-aware GNNs in terms of
fairness with an unnoticeable effect on prediction utility. In addition, we
propose a fast computation technique to reduce the time complexity of
G-FairAttack. The experimental study demonstrates that G-FairAttack
successfully corrupts the fairness of different types of GNNs while keeping the
attack unnoticeable. Our study on fairness attacks sheds light on potential
vulnerabilities in fairness-aware GNNs and guides further research on the
robustness of GNNs in terms of fairness.
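As a rough illustration of the attack objective described above (not the actual G-FairAttack algorithm, whose details are in the paper), the sketch below computes two common group-fairness metrics, statistical parity and equal opportunity, together with an attacker's goal of maximizing their gap while keeping the utility change within an unnoticeable budget. All tensor names, shapes, and the acc_budget parameter are illustrative assumptions.

```python
# Illustrative sketch (not the G-FairAttack algorithm itself): group-fairness
# metrics that a fairness attack would try to worsen while keeping accuracy
# roughly unchanged. Tensor names and shapes are assumptions for illustration.
import torch

def fairness_gap(pred, labels, sensitive):
    """Statistical parity and equal opportunity gaps for a binary task.

    pred      : (N,) predicted labels in {0, 1}
    labels    : (N,) ground-truth labels in {0, 1}
    sensitive : (N,) sensitive attribute in {0, 1}, e.g. demographic group
    """
    g0, g1 = sensitive == 0, sensitive == 1
    # Statistical parity: difference in positive prediction rates between groups.
    sp = (pred[g0].float().mean() - pred[g1].float().mean()).abs()
    # Equal opportunity: difference in true positive rates between groups.
    pos0, pos1 = g0 & (labels == 1), g1 & (labels == 1)
    eo = (pred[pos0].float().mean() - pred[pos1].float().mean()).abs()
    return sp.item(), eo.item()

def attack_objective(pred, labels, sensitive, clean_acc, acc_budget=0.01):
    """An attacker maximizes the fairness gap subject to an unnoticeable
    drop in utility (accuracy change within acc_budget)."""
    acc = (pred == labels).float().mean().item()
    sp, eo = fairness_gap(pred, labels, sensitive)
    feasible = abs(clean_acc - acc) <= acc_budget
    return sp + eo, feasible
```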
Related papers
- Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections [28.86365261170078]
Research has revealed fairness vulnerabilities in Graph Neural Networks (GNNs) under malicious adversarial attacks.
We introduce a Node Injection-based Fairness Attack (NIFA), which explores the vulnerability of GNN fairness in the more realistic setting of node injection.
NIFA can significantly undermine the fairness of mainstream GNNs, even including fairness-aware GNNs, by injecting merely 1% of nodes.
arXiv Detail & Related papers (2024-06-05T08:26:53Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Better Fair than Sorry: Adversarial Missing Data Imputation for Fair GNNs [6.680930089714339]
This paper addresses the problem of learning fair Graph Neural Networks (GNNs) under missing protected attributes.
We propose Better Fair than Sorry (BFtS), a fair missing data imputation model for protected attributes used by fair GNNs.
arXiv Detail & Related papers (2023-11-02T20:57:44Z)
- Towards Fair Graph Neural Networks via Graph Counterfactual [38.721295940809135]
Graph neural networks (GNNs) have shown great ability in representation learning on graphs, facilitating various tasks.
Recent works show that GNNs tend to inherit and amplify the bias from training data, raising concerns about the adoption of GNNs in high-stakes scenarios.
We propose a novel framework CAF, which can select counterfactuals from training data to avoid non-realistic counterfactuals.
arXiv Detail & Related papers (2023-07-10T23:28:03Z)
- Fairness-Aware Graph Neural Networks: A Survey [53.41838868516936]
Graph Neural Networks (GNNs) have become increasingly important due to their representational power and state-of-the-art predictive performance.
However, GNNs suffer from fairness issues that arise from the underlying graph data and the fundamental aggregation mechanism.
In this article, we examine and categorize fairness techniques for improving the fairness of GNNs.
arXiv Detail & Related papers (2023-07-08T08:09:06Z)
- Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks [15.116231694800787]
We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness.
These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender.
We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions.
arXiv Detail & Related papers (2022-09-13T12:46:57Z)
- A Comprehensive Survey on Trustworthy Graph Neural Networks: Privacy, Robustness, Fairness, and Explainability [59.80140875337769]
Graph Neural Networks (GNNs) have developed rapidly in recent years.
However, GNNs can leak private information, are vulnerable to adversarial attacks, and can inherit and magnify societal bias from training data.
This paper gives a comprehensive survey of GNNs in the computational aspects of privacy, robustness, fairness, and explainability.
arXiv Detail & Related papers (2022-04-18T21:41:07Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
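Several of the related papers above (e.g., NIFA's node injection and the adversarial inter-group link injection attack) degrade fairness by perturbing the graph structure rather than the model. The toy sketch below, which is not the method of any specific paper listed here, shows the general shape of such a structural perturbation: adding a small budget of edges between nodes from different sensitive groups in a dense adjacency matrix. Function and variable names are illustrative assumptions.

```python
# Illustrative sketch only: a toy structural perturbation that adds a small
# budget of inter-group edges to a dense adjacency matrix. A real fairness
# attack would rank candidate edges by a fairness-degradation score; here we
# sample uniformly to keep the sketch short.
import torch

def inject_inter_group_edges(adj, sensitive, budget):
    """Add up to `budget` edges between nodes from different sensitive groups.

    adj       : (N, N) dense 0/1 adjacency matrix (symmetric, no self-loops)
    sensitive : (N,) sensitive attribute in {0, 1}
    budget    : max number of edges to add (perturbation budget)
    """
    adj = adj.clone()
    n = adj.size(0)
    # Candidate pairs: non-edges whose endpoints lie in different groups.
    diff_group = sensitive.unsqueeze(0) != sensitive.unsqueeze(1)
    candidates = (adj == 0) & diff_group & ~torch.eye(n, dtype=torch.bool)
    idx = candidates.triu(diagonal=1).nonzero()
    # Uniformly pick up to `budget` candidate pairs and flip them to edges.
    perm = torch.randperm(idx.size(0))[:budget]
    for i, j in idx[perm]:
        adj[i, j] = adj[j, i] = 1
    return adj
```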