Adversarial Inter-Group Link Injection Degrades the Fairness of Graph
Neural Networks
- URL: http://arxiv.org/abs/2209.05957v2
- Date: Fri, 16 Dec 2022 15:03:15 GMT
- Title: Adversarial Inter-Group Link Injection Degrades the Fairness of Graph
Neural Networks
- Authors: Hussain Hussain, Meng Cao, Sandipan Sikdar, Denis Helic, Elisabeth
Lex, Markus Strohmaier, Roman Kern
- Abstract summary: We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness.
These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender.
We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions.
- Score: 15.116231694800787
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present evidence for the existence and effectiveness of adversarial
attacks on graph neural networks (GNNs) that aim to degrade fairness. These
attacks can disadvantage a particular subgroup of nodes in GNN-based node
classification, where nodes of the underlying network have sensitive
attributes, such as race or gender. We conduct qualitative and experimental
analyses explaining how adversarial link injection impairs the fairness of GNN
predictions. For example, an attacker can compromise the fairness of GNN-based
node classification by injecting adversarial links between nodes belonging to
opposite subgroups and opposite class labels. Our experiments on empirical
datasets demonstrate that adversarial fairness attacks can significantly
degrade the fairness of GNN predictions (attacks are effective) with a low
perturbation rate (attacks are efficient) and without a significant drop in
accuracy (attacks are deceptive). This work demonstrates the vulnerability of
GNN models to adversarial fairness attacks. We hope our findings raise
awareness about this issue in our community and lay a foundation for the future
development of GNN models that are more robust to such attacks.
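
The attack described in the abstract is concrete enough to sketch: it injects links between node pairs that differ in both the sensitive attribute and the class label. The following Python/NumPy snippet is a minimal illustrative sketch of that idea, not the authors' implementation; the function names (`inject_intergroup_links`, `statistical_parity_difference`), the toy graph, and the edge budget are hypothetical placeholders, and training or evaluating a GNN on the poisoned graph is omitted.

```python
import numpy as np

def statistical_parity_difference(y_pred, sensitive):
    # Absolute gap in positive-prediction rates between the two sensitive
    # groups; a larger gap indicates less fair predictions.
    rate_0 = y_pred[sensitive == 0].mean()
    rate_1 = y_pred[sensitive == 1].mean()
    return abs(rate_0 - rate_1)

def inject_intergroup_links(edges, labels, sensitive, budget, seed=0):
    # Add `budget` new edges between node pairs that differ in BOTH the
    # sensitive attribute and the class label, as the abstract describes.
    rng = np.random.default_rng(seed)
    n = len(labels)
    existing = {tuple(sorted(e)) for e in edges}
    candidates = [
        (u, v)
        for u in range(n)
        for v in range(u + 1, n)
        if sensitive[u] != sensitive[v]
        and labels[u] != labels[v]
        and (u, v) not in existing
    ]
    picked = rng.choice(len(candidates), size=min(budget, len(candidates)),
                        replace=False)
    new_edges = np.array([candidates[i] for i in picked])
    return np.vstack([edges, new_edges])

# Toy usage: 6 nodes with binary class labels and a binary sensitive attribute.
labels    = np.array([0, 0, 1, 1, 0, 1])
sensitive = np.array([0, 0, 0, 1, 1, 1])
edges     = np.array([[0, 1], [2, 3], [4, 5]])
poisoned_edges = inject_intergroup_links(edges, labels, sensitive, budget=2)
print(poisoned_edges)  # original edges plus 2 injected inter-group, cross-class links

# After retraining a GNN on `poisoned_edges` (omitted here), the fairness gap
# of its predictions could be measured like this (placeholder predictions):
gnn_predictions = np.array([1, 0, 1, 1, 0, 1])
print(statistical_parity_difference(gnn_predictions, sensitive))
```

The candidate filter (opposite sensitive attribute and opposite class label) is the only attack-specific design choice in this sketch; the rest is bookkeeping for selecting a small number of new edges, consistent with the paper's point that a low perturbation rate suffices.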
Related papers
- Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections [28.86365261170078]
Research has revealed fairness vulnerabilities in Graph Neural Networks (GNNs) under malicious adversarial attacks.
We introduce a Node Injection-based Fairness Attack (NIFA) that explores the vulnerabilities of GNN fairness in this more realistic node-injection setting.
NIFA can significantly undermine the fairness of mainstream GNNs, including fairness-aware GNNs, by injecting merely 1% of nodes.
arXiv Detail & Related papers (2024-06-05T08:26:53Z)
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers can easily corrupt the fairness of GNN predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study the novel problem of certifiable defense on the fairness of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have attracted a surge of attention because they can reduce prediction bias against particular demographic groups.
Although these methods greatly improve the algorithmic fairness of GNNs, this fairness can be easily corrupted by carefully designed adversarial attacks.
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, the Graph Injection Attack (GIA).
We propose CHAGNN, a general defense framework against GIA based on cooperative homophilous augmentation of the graph data and the model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, in which adversaries can mislead GNN predictions by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Uncertainty-Matching Graph Neural Networks to Defend Against Poisoning Attacks [43.60973654460398]
Graph Neural Networks (GNNs) are generalizations of neural networks to graph-structured data.
GNNs are vulnerable to adversarial attacks, i.e., a small perturbation to the structure can lead to a non-trivial performance degradation.
We propose Uncertainty Matching GNN (UM-GNN), which aims to improve the robustness of GNN models.
arXiv Detail & Related papers (2020-09-30T05:29:42Z)
- Towards More Practical Adversarial Attacks on Graph Neural Networks [14.78539966828287]
We study black-box attacks on graph neural networks (GNNs) under a novel and realistic constraint.
We show that the structural inductive biases of GNN models can be an effective source for this type of attack.
arXiv Detail & Related papers (2020-06-09T05:27:39Z)