Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections
- URL: http://arxiv.org/abs/2406.03052v2
- Date: Wed, 30 Oct 2024 11:56:49 GMT
- Title: Are Your Models Still Fair? Fairness Attacks on Graph Neural Networks via Node Injections
- Authors: Zihan Luo, Hong Huang, Yongkang Zhou, Jiping Zhang, Nuo Chen, Hai Jin
- Abstract summary: Research has revealed the fairness vulnerabilities in Graph Neural Networks (GNNs) when facing malicious adversarial attacks.
We introduce a Node Injection-based Fairness Attack (NIFA) that explores the vulnerability of GNN fairness in this more realistic setting.
NIFA can significantly undermine the fairness of mainstream GNNs, even including fairness-aware GNNs, by injecting merely 1% of nodes.
- Score: 28.86365261170078
- Abstract: Despite the remarkable capabilities demonstrated by Graph Neural Networks (GNNs) in graph-related tasks, recent research has revealed fairness vulnerabilities in GNNs when facing malicious adversarial attacks. However, all existing fairness attacks require manipulating the connectivity between existing nodes, which may be prohibited in reality. To this end, we introduce a Node Injection-based Fairness Attack (NIFA), which explores the vulnerability of GNN fairness in this more realistic setting. In detail, NIFA first designs two insightful principles for node injection operations, namely the uncertainty-maximization principle and the homophily-increase principle, and then optimizes the injected nodes' feature matrix to further ensure the effectiveness of the fairness attack. Comprehensive experiments on three real-world datasets consistently demonstrate that NIFA can significantly undermine the fairness of mainstream GNNs, including fairness-aware GNNs, by injecting merely 1% of nodes. We sincerely hope that our work can draw increasing attention from researchers to the vulnerability of GNN fairness and encourage the development of corresponding defense mechanisms. Our code and data are released at: https://github.com/CGCL-codes/NIFA.
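To make the two principles concrete, below is a minimal sketch of the injection step in PyTorch, assuming access to a surrogate model's logits and a binary sensitive attribute; the entropy score, the alternating group assignment, and all function names are illustrative simplifications rather than the released NIFA code.

```python
import torch
import torch.nn.functional as F

def predictive_entropy(logits):
    # Uncertainty-maximization principle: rank victim nodes by the
    # entropy of the surrogate model's predictive distribution.
    p = F.softmax(logits, dim=-1)
    return -(p * p.clamp_min(1e-12).log()).sum(dim=-1)

def inject_nodes(adj, feats, sens, logits, n_inject, degree):
    n = adj.size(0)
    uncertain = predictive_entropy(logits).argsort(descending=True)
    big = torch.zeros(n + n_inject, n + n_inject)
    big[:n, :n] = adj
    inj_feats = []
    for i in range(n_inject):
        # Homophily-increase principle (simplified): each injected node
        # attaches only to uncertain victims from one sensitive group,
        # so its local neighborhood stays homophilous.
        g = i % 2
        pool = uncertain[sens[uncertain] == g][:degree]
        big[n + i, pool] = 1.0
        big[pool, n + i] = 1.0
        # Start the injected features at the group mean; the full method
        # then optimizes this feature matrix by gradient descent.
        inj_feats.append(feats[sens == g].mean(dim=0))
    return big, torch.cat([feats, torch.stack(inj_feats)])
```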
Related papers
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
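As a rough illustration of the query-based idea, the sketch below assumes the model owner stores a digest of the predictions on a set of fingerprint nodes and later re-queries the served model; the SHA-256 digest and the helper names are assumptions, not the paper's fingerprint-generation algorithms.

```python
import hashlib
import torch

def fingerprint_digest(model, adj, feats, fp_nodes):
    # Query the model on the fingerprint nodes and hash the predicted
    # labels, so the owner only needs to keep a short digest.
    with torch.no_grad():
        preds = model(adj, feats).argmax(dim=-1)[fp_nodes]
    return hashlib.sha256(preds.cpu().numpy().tobytes()).hexdigest()

# Record the digest once at deployment time...
#   reference = fingerprint_digest(model, adj, feats, fp_nodes)
# ...and any later mismatch signals the served GNN was tampered with:
#   assert fingerprint_digest(served, adj, feats, fp_nodes) == reference
```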
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- ELEGANT: Certified Defense on the Fairness of Graph Neural Networks [94.10433608311604]
Graph Neural Networks (GNNs) have emerged as a prominent graph learning model in various graph-based tasks.
However, malicious attackers could easily corrupt the fairness level of their predictions by adding perturbations to the input graph data.
We propose a principled framework named ELEGANT to study a novel problem of certifiable defense on the fairness level of GNNs.
arXiv Detail & Related papers (2023-11-05T20:29:40Z)
- Adversarial Attacks on Fairness of Graph Neural Networks [63.155299388146176]
Fairness-aware graph neural networks (GNNs) have gained a surge of attention as they can reduce the bias of predictions on any demographic group.
Although these methods greatly improve the algorithmic fairness of GNNs, the fairness can be easily corrupted by carefully designed adversarial attacks.
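For context, fairness attacks like these are usually evaluated with group-fairness gaps such as statistical parity difference; the sketch below shows that standard metric on toy data (it is not code from the paper).

```python
import torch

def statistical_parity_diff(preds, sens):
    # |P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1)|: an attack that
    # "corrupts fairness" drives this gap upward.
    rate0 = preds[sens == 0].float().mean()
    rate1 = preds[sens == 1].float().mean()
    return (rate0 - rate1).abs().item()

preds = torch.tensor([1, 0, 1, 1, 0, 0])  # toy binary predictions
sens  = torch.tensor([0, 0, 0, 1, 1, 1])  # toy sensitive attribute
print(statistical_parity_diff(preds, sens))  # 0.333...
```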
arXiv Detail & Related papers (2023-10-20T21:19:54Z)
- Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
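A minimal sketch of the 1-hop-view restriction, assuming each agent reads only its direct neighbors' states; the mean aggregation is illustrative and not GAgN's actual inference rule.

```python
import torch

def one_hop_agent_step(adj, states):
    # Each agent updates from its own state plus the mean of its direct
    # neighbors' states; no agent ever sees the global graph, which is
    # what stops global-optimization-based attacks from propagating.
    deg = adj.sum(dim=1, keepdim=True).clamp_min(1.0)
    neighbor_mean = adj @ states / deg
    return 0.5 * states + 0.5 * neighbor_mean
```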
arXiv Detail & Related papers (2023-06-12T07:27:31Z)
- GANI: Global Attacks on Graph Neural Networks via Imperceptible Node Injections [20.18085461668842]
Graph neural networks (GNNs) have found successful applications in various graph-related tasks.
Recent studies have shown that many GNNs are vulnerable to adversarial attacks.
In this paper, we focus on a realistic attack operation via injecting fake nodes.
arXiv Detail & Related papers (2022-10-23T02:12:26Z)
- Adversarial Inter-Group Link Injection Degrades the Fairness of Graph Neural Networks [15.116231694800787]
We present evidence for the existence and effectiveness of adversarial attacks on graph neural networks (GNNs) that aim to degrade fairness.
These attacks can disadvantage a particular subgroup of nodes in GNN-based node classification, where nodes of the underlying network have sensitive attributes, such as race or gender.
We conduct qualitative and experimental analyses explaining how adversarial link injection impairs the fairness of GNN predictions.
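A minimal sketch of the attack surface described here: edges injected strictly between nodes with different sensitive attributes. The uniform-random endpoint choice is a simplification of the paper's link-selection strategy.

```python
import torch

def inject_inter_group_links(adj, sens, budget):
    # Candidate endpoints, split by the binary sensitive attribute.
    g0 = (sens == 0).nonzero(as_tuple=True)[0]
    g1 = (sens == 1).nonzero(as_tuple=True)[0]
    adj = adj.clone()
    for _ in range(budget):
        u = g0[torch.randint(len(g0), (1,))]
        v = g1[torch.randint(len(g1), (1,))]
        adj[u, v] = 1.0  # cross-group edge
        adj[v, u] = 1.0
    return adj
```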
arXiv Detail & Related papers (2022-09-13T12:46:57Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we study camouflaged node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
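A minimal sketch of the camouflage intuition: give an injected node features close to those of its attachment targets so that simple statistical detectors do not flag it. The mean-plus-noise generator is an assumption, not the paper's method.

```python
import torch

def camouflaged_features(feats, neighbor_ids, noise=0.01):
    # Start from the mean of the attachment targets' features and add
    # small jitter, so the injected node looks statistically "normal".
    center = feats[neighbor_ids].mean(dim=0)
    return center + noise * torch.randn_like(center)
```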
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Black-box Node Injection Attack for Graph Neural Networks [29.88729779937473]
We study the possibility of injecting nodes to evade the victim GNN model.
Specifically, we propose GA2C, a graph reinforcement learning framework.
We demonstrate the superior performance of our proposed GA2C over existing state-of-the-art methods.
arXiv Detail & Related papers (2022-02-18T19:17:43Z)
- Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
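A minimal sketch of the sparsity-aware idea, assuming a differentiable `loss_fn` over the adjacency matrix: only a random block of candidate edge flips is scored per pass, so the perturbation never materializes as a dense n-by-n matrix. This compresses the paper's iterative block-coordinate scheme into a single scoring pass.

```python
import torch

def sparse_edge_attack(adj, loss_fn, budget, block=4096):
    n = adj.size(0)
    # Score a random block of candidate edges by first-order gradients
    # instead of keeping a dense perturbation over all n*n entries.
    idx = torch.randint(n, (2, block))
    w = torch.zeros(block, requires_grad=True)
    pert = adj.index_put((idx[0], idx[1]), w, accumulate=True)
    loss_fn(pert).backward()
    # Keep only the `budget` most promising flips; the sparse candidate
    # set is what lets the attack scale to very large graphs.
    keep = w.grad.topk(budget).indices
    return idx[:, keep]  # edge indices to flip
```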
arXiv Detail & Related papers (2021-10-26T21:31:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.