More is Better (Mostly): On the Backdoor Attacks in Federated Graph
Neural Networks
- URL: http://arxiv.org/abs/2202.03195v5
- Date: Thu, 20 Apr 2023 17:47:15 GMT
- Title: More is Better (Mostly): On the Backdoor Attacks in Federated Graph
Neural Networks
- Authors: Jing Xu, Rui Wang, Stefanos Koffas, Kaitai Liang, Stjepan Picek
- Abstract summary: Graph Neural Networks (GNNs) are a class of deep learning-based methods for processing graph domain information.
This paper conducts two types of backdoor attacks in Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor attacks (DBA).
We find that both attacks are robust against the investigated defense, underscoring the need to treat backdoor attacks in Federated GNNs as a novel threat.
- Score: 15.288284812871945
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are a class of deep learning-based methods for
processing graph domain information. GNNs have recently become a widely used
graph analysis method due to their superior ability to learn representations
for complex graph data. However, due to privacy concerns and regulation
restrictions, centralized GNNs can be difficult to apply to data-sensitive
scenarios. Federated learning (FL) is an emerging technology developed for
privacy-preserving settings in which several parties need to train a shared
global model collaboratively. Although several research works have applied FL to train
GNNs (Federated GNNs), there is no research on their robustness to backdoor
attacks.
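As background, here is a minimal, self-contained sketch of the federated averaging pattern (FedAvg-style) that such collaborative training typically follows. The function names, model shapes, and number of clients below are illustrative assumptions for this note, not details taken from the paper.

```python
# Minimal FedAvg-style rounds, assuming a toy model stored as a dict of numpy
# arrays. Real Federated GNN training would run an actual GNN locally; here the
# local step is a stand-in so the aggregation pattern stays visible.
import numpy as np

def local_update(global_weights, rng):
    """One client's stand-in for local training starting from the shared weights."""
    return {name: w + 0.01 * rng.standard_normal(w.shape)
            for name, w in global_weights.items()}

def fed_avg(client_weights):
    """Server step: average each parameter across the participating clients."""
    return {name: np.mean([cw[name] for cw in client_weights], axis=0)
            for name in client_weights[0]}

rng = np.random.default_rng(0)
global_weights = {"layer1": np.zeros((8, 8)), "layer2": np.zeros((8, 2))}
for communication_round in range(5):   # a few rounds with 3 participating clients
    updates = [local_update(global_weights, rng) for _ in range(3)]
    global_weights = fed_avg(updates)
```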
This paper bridges this gap by conducting two types of backdoor attacks in
Federated GNNs: centralized backdoor attacks (CBA) and distributed backdoor
attacks (DBA). Our experiments show that the DBA attack success rate is higher
than that of CBA in almost all evaluated cases. For CBA, the attack success rate
of each local trigger is similar to that of the global trigger, even though only
the global trigger is embedded in the adversarial party's training set. To
further explore the properties of the two backdoor attacks in Federated GNNs, we
evaluate the attack performance for different numbers of clients, trigger sizes,
poisoning intensities, and trigger densities. Moreover, we explore the
robustness of DBA and CBA against one defense. We find that both attacks are
robust against the investigated defense, underscoring the need to treat backdoor
attacks in Federated GNNs as a novel threat that requires custom defenses.
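To make the CBA/DBA distinction concrete, the sketch below shows one way trigger embedding could be organized on the attackers' side, using plain numpy adjacency matrices. The helper names (embed_trigger, poison), the 4-node complete trigger, and the poisoning rate are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical trigger-embedding sketch: in CBA a single adversarial client
# embeds the whole global trigger; in DBA the trigger is split into local
# pieces (here simply two disjoint blocks, for illustration) and each
# malicious client embeds only its own piece.
import numpy as np

rng = np.random.default_rng(0)
global_trigger = np.ones((4, 4)) - np.eye(4)   # a 4-node complete subgraph as the trigger
local_triggers = [global_trigger[:2, :2], global_trigger[2:, 2:]]  # DBA pieces
target_label = 1                               # attacker-chosen label

def embed_trigger(adj, trigger):
    """Overwrite the connectivity of a random node subset with the trigger subgraph."""
    nodes = rng.choice(adj.shape[0], size=trigger.shape[0], replace=False)
    poisoned = adj.copy()
    poisoned[np.ix_(nodes, nodes)] = trigger
    return poisoned

def poison(graphs, labels, trigger, poison_rate=0.2):
    """Embed the given trigger in a fraction of the graphs and flip their labels."""
    out = []
    for adj, y in zip(graphs, labels):
        if rng.random() < poison_rate:
            out.append((embed_trigger(adj, trigger), target_label))
        else:
            out.append((adj, y))
    return out

# CBA: one malicious client poisons with the entire global trigger.
# DBA: malicious client i poisons only with local_triggers[i].
graphs = [(rng.random((10, 10)) < 0.2).astype(float) for _ in range(5)]  # toy adjacency matrices
labels = [0] * 5
cba_data = poison(graphs, labels, global_trigger)
dba_data_client0 = poison(graphs, labels, local_triggers[0])
```

In the actual attacks, such poisoned local datasets would feed into federated training rounds like the ones sketched earlier, and the attack success rate would be measured by injecting the corresponding trigger into clean test graphs.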
Related papers
- Backdoor Attack on Vertical Federated Graph Neural Network Learning [6.540725813096829]
Federated Graph Neural Network (FedGNN) is a privacy-preserving machine learning technology that combines federated learning (FL) and graph neural networks (GNNs)
Vertical Federated Graph Neural Network (VFGNN) is an important branch of FedGNN, where data features and labels are distributed among participants.
Due to the difficulty of accessing and modifying distributed data and labels, the vulnerability of VFGNN to backdoor attacks remains largely unexplored.
We propose BVG, the first method for backdoor attacks in VFGNN. Without accessing or modifying labels, BVG uses multi-hop triggers and requires only four
arXiv Detail & Related papers (2024-10-15T05:26:20Z) - "No Matter What You Do": Purifying GNN Models via Backdoor Unlearning [33.07926413485209]
Backdoor attacks in GNNs rely on the attacker modifying a portion of the graph data by embedding triggers.
We present GCleaner, the first backdoor mitigation method on GNNs.
GCleaner can reduce the backdoor attack success rate to 10% with only 1% of clean data, and has almost negligible degradation in model performance.
arXiv Detail & Related papers (2024-10-02T06:30:49Z) - Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z) - Graph Agent Network: Empowering Nodes with Inference Capabilities for Adversarial Resilience [50.460555688927826]
We propose the Graph Agent Network (GAgN) to address the vulnerabilities of graph neural networks (GNNs).
GAgN is a graph-structured agent network in which each node is designed as a 1-hop-view agent.
Agents' limited view prevents malicious messages from propagating globally in GAgN, thereby resisting global-optimization-based secondary attacks.
arXiv Detail & Related papers (2023-06-12T07:27:31Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - Transferable Graph Backdoor Attack [13.110473828583725]
Graph Neural Networks (GNNs) have achieved tremendous success in many graph mining tasks.
GNNs are found to be vulnerable to unnoticeable perturbations on both graph structure and node features.
In this paper, we disclose the TRAP attack, a Transferable GRAPh backdoor attack.
arXiv Detail & Related papers (2022-06-21T06:25:37Z) - Robustness of Graph Neural Networks at Scale [63.45769413975601]
We study how to attack and defend Graph Neural Networks (GNNs) at scale.
We propose two sparsity-aware first-order optimization attacks that maintain an efficient representation.
We show that common surrogate losses are not well-suited for global attacks on GNNs.
arXiv Detail & Related papers (2021-10-26T21:31:17Z) - Adapting Membership Inference Attacks to GNN for Graph Classification:
Approaches and Implications [32.631077336656936]
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, under different adversarial capabilities.
arXiv Detail & Related papers (2021-10-17T08:41:21Z) - Check Your Other Door! Establishing Backdoor Attacks in the Frequency
Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z) - Explainability-based Backdoor Attacks Against Graph Neural Networks [9.179577599489559]
There are numerous works on backdoor attacks against neural networks, but only a few consider graph neural networks (GNNs).
We apply two powerful GNN explainability approaches to select the optimal trigger injection position to achieve two attacker objectives -- high attack success rate and low clean accuracy drop.
Our empirical results on benchmark datasets and state-of-the-art neural network models demonstrate the proposed method's effectiveness.
arXiv Detail & Related papers (2021-04-08T10:43:40Z) - Backdoor Attacks to Graph Neural Networks [73.56867080030091]
We propose the first backdoor attack on graph neural networks (GNNs).
In our backdoor attack, a GNN predicts an attacker-chosen target label for a testing graph once a predefined subgraph is injected into the testing graph.
Our empirical results show that our backdoor attacks are effective with a small impact on a GNN's prediction accuracy for clean testing graphs.
arXiv Detail & Related papers (2020-06-19T14:51:01Z)