Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning
- URL: http://arxiv.org/abs/2411.02809v1
- Date: Tue, 05 Nov 2024 04:52:20 GMT
- Title: Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning
- Authors: Jinyin Chen, Wenbo Mu, Luxin Zhang, Guohan Huang, Haibin Zheng, Yao Cheng
- Abstract summary: A query-efficient hybrid adversarial attack framework is proposed.
A shadow model is established based on the manipulated data to simulate the behavior of the server model.
Experiments on five real-world benchmarks demonstrate that NA2 improves the performance of centralized adversarial attacks against VFGL.
- Score: 5.784274742483707
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have captured wide attention due to their capability of representation learning on graph-structured data. However, distributed data silos limit the performance of GNNs. Vertical federated learning (VFL), an emerging technique for processing distributed data, enables GNNs to handle distributed graph-structured data. Despite the prosperous development of vertical federated graph learning (VFGL), its robustness against adversarial attacks has not yet been explored. Although numerous adversarial attacks against centralized GNNs have been proposed, their attack performance is challenged in the VFGL scenario. To the best of our knowledge, this is the first work to explore adversarial attacks against VFGL. We propose a query-efficient hybrid adversarial attack framework, denoted NA2 (short for Neuron-based Adversarial Attack), that significantly improves centralized adversarial attacks against VFGL. Specifically, a malicious client manipulates its local training data to improve its contribution in a stealthy fashion. A shadow model is then established from the manipulated data to simulate the behavior of the server model in VFGL. As a result, the shadow model can improve the attack success rate of various centralized attacks with only a few queries. Extensive experiments on five real-world benchmarks demonstrate that NA2 improves the performance of centralized adversarial attacks against VFGL, achieving state-of-the-art performance even under a potential adaptive defense in which the defender knows the attack method. Additionally, we provide interpretability experiments on the effectiveness of NA2 via sensitive-neuron identification and t-SNE visualization.
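The abstract outlines a three-step workflow: the malicious client manipulates its local data to boost its contribution, fits a shadow model to mimic the server model from a few queries, then runs any centralized attack against the shadow instead of the server. Below is a minimal sketch of that workflow, assuming a PyTorch setup; the helper names (`query_server`, `centralized_attack`) and the distillation loss are hypothetical stand-ins, since the concrete NA2 procedures are not detailed in this abstract.

```python
# Minimal sketch of the query-efficient hybrid attack workflow described above.
# Assumptions: `shadow_gnn` is any local GNN surrogate, `query_server(nodes)`
# returns the server model's soft predictions for a small set of queried nodes,
# and `centralized_attack` is any off-the-shelf centralized GNN attack.
import torch
import torch.nn.functional as F


def fit_shadow_model(shadow_gnn, feats, adj, query_server, query_nodes, epochs=50):
    """Train a local surrogate to mimic the server model from a few queries."""
    soft_labels = query_server(query_nodes).detach()  # limited query budget
    opt = torch.optim.Adam(shadow_gnn.parameters(), lr=1e-2)
    for _ in range(epochs):
        opt.zero_grad()
        logits = shadow_gnn(feats, adj)[query_nodes]
        # Match the surrogate's predictions to the server's responses.
        loss = F.kl_div(F.log_softmax(logits, dim=-1), soft_labels,
                        reduction="batchmean")
        loss.backward()
        opt.step()
    return shadow_gnn


def hybrid_attack(shadow_gnn, feats, adj, target_node, centralized_attack):
    """Reuse a centralized attack against the shadow model instead of the server."""
    return centralized_attack(shadow_gnn, feats, adj, target_node)
```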
Related papers
- VGFL-SA: Vertical Graph Federated Learning Structure Attack Based on Contrastive Learning [16.681157857248436]
Graph Neural Networks (GNNs) have gained attention for their ability to learn representations from graph data.
Recent studies have shown that Vertical Graph Federated Learning frameworks are vulnerable to adversarial attacks that degrade performance.
We propose a novel graph adversarial attack against VGFL, referred to as VGFL-SA, that degrades the performance of VGFL by modifying the structure of local clients without using labels.
arXiv Detail & Related papers (2025-02-24T03:04:48Z)
- Backdoor Attack on Vertical Federated Graph Neural Network Learning [6.540725813096829]
Federated Graph Neural Networks (FedGNN) integrate federated learning with graph neural networks (GNNs) to enable privacy-preserving training on distributed graph data.
Vertical Federated Graph Neural Network (VFGNN) handles scenarios where data features and labels are distributed among participants.
Despite the robust privacy-preserving design of VFGNN, we have found that it still faces the risk of backdoor attacks.
This paper proposes BVG, a novel backdoor attack method that leverages multi-hop triggers and backdoor retention.
arXiv Detail & Related papers (2024-10-15T05:26:20Z)
- Top K Enhanced Reinforcement Learning Attacks on Heterogeneous Graph Node Classification [1.4943280454145231]
Graph Neural Networks (GNNs) have attracted substantial interest due to their exceptional performance on graph-based data.
Their robustness, especially on heterogeneous graphs, remains underexplored, particularly against adversarial attacks.
This paper proposes HeteroKRLAttack, a targeted evasion black-box attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-08-04T08:44:00Z)
- Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
However, GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIA).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z)
- Talos: A More Effective and Efficient Adversarial Defense for GNN Models Based on the Global Homophily of Graphs [2.4866716181615467]
Graph neural network (GNN) models are susceptible to adversarial attacks.
We propose a new defense method named Talos, which enhances the global, rather than local, homophily of graphs.
arXiv Detail & Related papers (2024-06-06T08:08:01Z)
- Disttack: Graph Adversarial Attacks Toward Distributed GNN Training [18.487718294296442]
Graph Neural Networks (GNNs) have emerged as potent models for graph learning.
We introduce Disttack, the first adversarial attack framework for distributed GNN training.
We show that Disttack amplifies model accuracy degradation by 2.75$\times$ and achieves a 17.33$\times$ speedup on average.
arXiv Detail & Related papers (2024-05-10T05:09:59Z)
- Data-Agnostic Model Poisoning against Federated Learning: A Graph Autoencoder Approach [65.2993866461477]
This paper proposes a data-agnostic model poisoning attack on Federated Learning (FL).
The attack requires no knowledge of FL training data and achieves both effectiveness and undetectability.
Experiments show that the FL accuracy drops gradually under the proposed attack and existing defense mechanisms fail to detect it.
arXiv Detail & Related papers (2023-11-30T12:19:10Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z)
- Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z)
- Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based Vertical Federated Learning [2.23816711660697]
Vertical federated learning (VFL) is proposed to protect local data while training a global model.
For graph-structured data, it is a natural idea to construct a VFL framework with GNN models.
GNN models are proven to be vulnerable to adversarial attacks.
This paper reveals that GVFL is vulnerable to adversarial attacks, similar to centralized GNN models.
arXiv Detail & Related papers (2021-10-13T03:06:02Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)