Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based
Vertical Federated Learning
- URL: http://arxiv.org/abs/2110.06468v1
- Date: Wed, 13 Oct 2021 03:06:02 GMT
- Title: Graph-Fraudster: Adversarial Attacks on Graph Neural Network Based
Vertical Federated Learning
- Authors: Jinyin Chen, Guohan Huang, Shanqing Yu, Wenrong Jiang, Chen Cui
- Abstract summary: Vertical federated learning (VFL) is proposed to protect local data while training a global model collaboratively.
For graph-structured data, it is natural to construct a VFL framework with GNN models.
However, GNN models have been proven vulnerable to adversarial attacks.
This paper reveals that GVFL is vulnerable to adversarial attacks, just as centralized GNN models are.
- Score: 2.23816711660697
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural network (GNN) models have achieved great success on graph
representation learning. However, when large-scale private data are collected on
the user side, GNN models may not reach their full performance without rich
features and complete adjacency relationships. To address this problem, vertical
federated learning (VFL) is proposed to protect local data by training a global
model collaboratively. Consequently, for graph-structured data, it is natural to
construct a VFL framework with GNN models. However, GNN models have been proven
vulnerable to adversarial attacks.
Whether this vulnerability carries over into VFL has not been studied. In this
paper, we study the security of GNN-based VFL (GVFL), i.e., its robustness
against adversarial attacks. Further, we propose an adversarial attack method
named Graph-Fraudster. It generates adversarial perturbations based on
noise-added global node embeddings, obtained via GVFL's privacy leakage, and
pairwise node gradients. First, it steals the global node embeddings and sets up
a shadow server model as the attack generator. Second, noise is added to the
node embeddings to confuse the shadow server model. Finally, pairwise node
gradients are used to generate attacks under the guidance of the noise-added
node embeddings. To the best of our knowledge, this is the first study of
adversarial attacks on GVFL. Extensive experiments on five benchmark datasets
demonstrate that Graph-Fraudster outperforms three possible baselines in GVFL.
Furthermore, Graph-Fraudster remains a threat to GVFL even when two possible
defense mechanisms are applied. This paper reveals that GVFL is vulnerable to
adversarial attacks, just as centralized GNN models are.
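The three attack steps described in the abstract (steal embeddings, build a noisy shadow model, take a gradient-guided perturbation step) can be illustrated with a minimal, hypothetical sketch. This is not the paper's implementation: the shadow server model is reduced here to a linear classifier, the perturbation is an FGSM-style step applied directly to the embedding, and names such as `H`, `W`, and the budget `eps` are purely illustrative.

```python
import numpy as np

# Hypothetical sketch of the Graph-Fraudster attack loop, assuming a linear
# shadow server model trained on stolen global node embeddings. All names
# and constants are illustrative, not taken from the paper's code.

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Step 1: the malicious participant "steals" the global node embeddings H
# (n nodes x d dims) exchanged through the VFL server and sets up a shadow
# server model (here simply a linear classifier W) as the attack generator.
n, d, c = 8, 4, 3
H = rng.normal(size=(n, d))             # stolen global node embeddings
W = rng.normal(size=(d, c))             # shadow server model (linear stand-in)

# Step 2: add noise to the embeddings to confuse the shadow server model,
# so that the generated perturbations transfer to the real server model.
H_noisy = H + 0.1 * rng.normal(size=H.shape)

# Step 3: use the gradient of a cross-entropy loss at a target node, guided
# by the noise-added embeddings, to craft an adversarial perturbation.
target = 0
y = np.zeros(c)
y[rng.integers(c)] = 1.0                # assumed label of the target node
p = softmax(H_noisy[target] @ W)        # shadow model prediction
grad_h = W @ (p - y)                    # analytic d(loss)/d(embedding)
eps = 0.5                               # attack budget (illustrative)
H_adv = H.copy()
H_adv[target] += eps * np.sign(grad_h)  # FGSM-style step on the embedding
```

In the actual method the gradient signal would be mapped back onto the adversary's local graph data rather than applied to the embedding itself; the sketch only shows the flow of the three steps.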
Related papers
- Link Stealing Attacks Against Inductive Graph Neural Networks [60.931106032824275]
A graph neural network (GNN) is a type of neural network that is specifically designed to process graph-structured data.
Previous work has shown that transductive GNNs are vulnerable to a series of privacy attacks.
This paper conducts a comprehensive privacy analysis of inductive GNNs through the lens of link stealing attacks.
arXiv Detail & Related papers (2024-05-09T14:03:52Z)
- Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Bandits for Structure Perturbation-based Black-box Attacks to Graph Neural Networks with Theoretical Guarantees [60.61846004535707]
Graph neural networks (GNNs) have achieved state-of-the-art performance in many graph-based tasks.
An attacker can mislead GNN models by slightly perturbing the graph structure.
In this paper, we consider black-box structure-perturbation attacks against GNNs with theoretical guarantees.
arXiv Detail & Related papers (2022-05-07T04:17:25Z)
- Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications [32.631077336656936]
Membership Inference Attack (MIA) against Graph Neural Networks (GNNs) raises severe privacy concerns.
We take the first step in MIA against GNNs for graph-level classification.
We present and implement two types of attacks, i.e., training-based attacks and threshold-based attacks, assuming different adversarial capabilities.
arXiv Detail & Related papers (2021-10-17T08:41:21Z)
- A Hard Label Black-box Adversarial Attack Against Graph Neural Networks [25.081630882605985]
We conduct a systematic study on adversarial attacks against GNNs for graph classification via perturbing the graph structure.
We formulate our attack as an optimization problem, whose objective is to minimize the number of edges perturbed in a graph while maintaining a high attack success rate.
Our experimental results on three real-world datasets demonstrate that our attack can effectively attack representative GNNs for graph classification with fewer queries and perturbations.
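The optimization problem sketched above can be written, with assumed notation (A the original adjacency matrix, A' the perturbed one, X the node features, f the target GNN, and y the graph's true label), roughly as:

```latex
\min_{A'} \; \lVert A' - A \rVert_{0}
\quad \text{s.t.} \quad f(A', X) \neq y
```

That is, flip as few edges as possible while still causing misclassification; the exact constraints used by the paper may differ.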
arXiv Detail & Related papers (2021-08-21T14:01:34Z)
- Graph Structure Learning for Robust Graph Neural Networks [63.04935468644495]
Graph Neural Networks (GNNs) are powerful tools in representation learning for graphs.
Recent studies show that GNNs are vulnerable to carefully-crafted perturbations, called adversarial attacks.
We propose a general framework Pro-GNN, which can jointly learn a structural graph and a robust graph neural network model.
arXiv Detail & Related papers (2020-05-20T17:07:05Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks were extended to graph data, which are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)