Model Stealing Attacks Against Inductive Graph Neural Networks
- URL: http://arxiv.org/abs/2112.08331v1
- Date: Wed, 15 Dec 2021 18:29:22 GMT
- Title: Model Stealing Attacks Against Inductive Graph Neural Networks
- Authors: Yun Shen, Xinlei He, Yufei Han, Yang Zhang
- Abstract summary: Graph neural networks (GNNs) have been proposed to fully leverage graph data to build powerful applications.
Previous research has shown that machine learning models are prone to model stealing attacks.
This paper proposes the first model stealing attacks against inductive GNNs.
- Score: 15.334336995523302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many real-world data come in the form of graphs. Graph neural networks
(GNNs), a new family of machine learning (ML) models, have been proposed to
fully leverage graph data to build powerful applications. In particular, the
inductive GNNs, which can generalize to unseen data, become mainstream in this
direction. Machine learning models have shown great potential in various tasks
and have been deployed in many real-world scenarios. To train a good model, a
large amount of data as well as computational resources are needed, leading to
valuable intellectual property. Previous research has shown that ML models are
prone to model stealing attacks, which aim to steal the functionality of the
target models. However, most of them focus on the models trained with images
and texts. On the other hand, little attention has been paid to models trained
with graph data, i.e., GNNs. In this paper, we fill the gap by proposing the
first model stealing attacks against inductive GNNs. We systematically define
the threat model and propose six attacks based on the adversary's background
knowledge and the responses of the target models. Our evaluation on six
benchmark datasets shows that the proposed model stealing attacks against GNNs
achieve promising performance.
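The core loop of such an attack can be sketched as follows. This is a minimal illustration, not the paper's method: the target is simulated by a linear softmax classifier standing in for an inductive GNN, and the surrogate is fit purely to the posteriors returned by black-box queries.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Simulated black-box target (hypothetical stand-in for an inductive GNN) ---
# The adversary can only query it for class posteriors, not inspect its weights.
W_target = rng.normal(size=(8, 3))

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def query_target(x):
    """Black-box API: node features in, class posteriors out."""
    return softmax(x @ W_target)

# --- Attack: query on adversary-chosen inputs, fit a surrogate to the responses ---
X_query = rng.normal(size=(500, 8))        # adversary's query nodes
P = query_target(X_query)                  # stolen posteriors

W_surrogate = np.zeros((8, 3))
for _ in range(300):                       # gradient descent on cross-entropy to the posteriors
    Q = softmax(X_query @ W_surrogate)
    grad = X_query.T @ (Q - P) / len(X_query)
    W_surrogate -= 0.5 * grad

# Fidelity: how often the surrogate agrees with the target on fresh inputs
X_test = rng.normal(size=(200, 8))
agreement = np.mean(query_target(X_test).argmax(1)
                    == softmax(X_test @ W_surrogate).argmax(1))
print(f"surrogate/target agreement: {agreement:.2f}")
```

The same query-then-fit structure underlies the six attacks, which differ in what the adversary knows and in whether the target returns posteriors or embeddings.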
Related papers
---
- Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks [4.775113207763946]
This paper introduces a novel method for unsupervised model-stealing attacks against inductive GNNs.
It is based on graph contrastive learning and spectral graph augmentations to efficiently extract information from the target model.
The results show that this approach demonstrates a higher level of efficiency compared to existing stealing attacks.
arXiv Detail & Related papers (2024-05-20T18:01:15Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
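The query-based verification idea can be sketched as follows. This is a hedged illustration with a linear stand-in for the GNN and a simple response-matching check, not the paper's fingerprint-generation algorithms: the owner records the model's answers on fixed fingerprint nodes at deployment time and later re-queries them to detect tampering.

```python
import numpy as np

rng = np.random.default_rng(1)

W = rng.normal(size=(8, 3))   # deployed model's weights (stand-in for a GNN)

def model_posteriors(x, weights):
    z = x @ weights
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Owner picks fingerprint nodes and records the model's responses at deployment.
fingerprints = rng.normal(size=(16, 8))
reference = model_posteriors(fingerprints, W)

def verify(weights, tol=1e-6):
    """Re-query the fingerprints; large deviation signals the model was altered."""
    return np.max(np.abs(model_posteriors(fingerprints, weights) - reference)) < tol

print(verify(W))          # intact model
print(verify(W * 1.1))    # tampered model (weights rescaled)
```

The real scheme must additionally choose fingerprints that are hard for an adversary to anticipate, which is what the node fingerprint generation algorithms address.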
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Data-Free Adversarial Knowledge Distillation for Graph Neural Networks [62.71646916191515]
We propose the first end-to-end framework for data-free adversarial knowledge distillation on graph-structured data (DFAD-GNN).
Specifically, DFAD-GNN employs a generative adversarial network with three components: a pre-trained teacher model and a student model act as two discriminators, while a generator produces training graphs used to distill knowledge from the teacher into the student.
Our DFAD-GNN significantly surpasses state-of-the-art data-free baselines in the graph classification task.
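A minimal data-free distillation loop can be sketched under strong simplifying assumptions: linear teacher and student models, and a random-sampling generator in place of DFAD-GNN's adversarially trained one. The point is that the student never sees real training data, only the teacher's soft responses to synthetic inputs.

```python
import numpy as np

rng = np.random.default_rng(2)

W_teacher = rng.normal(size=(8, 3))   # pre-trained teacher (stand-in for a graph model)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# In DFAD-GNN the generator is trained adversarially; here it is simplified to
# random sampling, which is enough to illustrate the data-free loop.
def generate_inputs(n):
    return rng.normal(size=(n, 8))

W_student = np.zeros((8, 3))
for _ in range(300):
    X = generate_inputs(256)                       # synthetic "training graphs"
    P_teacher = softmax(X @ W_teacher)             # soft targets from the teacher
    P_student = softmax(X @ W_student)
    grad = X.T @ (P_student - P_teacher) / len(X)  # cross-entropy gradient to soft targets
    W_student -= 0.5 * grad

X_test = generate_inputs(200)
agreement = np.mean((X_test @ W_teacher).argmax(1) == (X_test @ W_student).argmax(1))
print(f"student/teacher agreement without real data: {agreement:.2f}")
```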
arXiv Detail & Related papers (2022-05-08T08:19:40Z)
- Membership Inference Attack on Graph Neural Networks [1.6457778420360536]
We focus on how trained GNN models can leak information about the member nodes they were trained on.
We choose the simplest possible attack model that utilizes the posteriors of the trained model.
The surprising and worrying fact is that the attack is successful even if the target model generalizes well.
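That simplest attack model can be sketched as a confidence threshold on the target's posteriors. The numbers below are simulated, under the common assumption that models answer more confidently on member nodes than on unseen ones; the threshold and confidence gaps are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated posteriors: models are typically more confident on training (member)
# nodes than on unseen (non-member) nodes -- the signal this attack exploits.
def sample_posteriors(n, confidence):
    logits = rng.normal(size=(n, 3))
    logits[np.arange(n), rng.integers(0, 3, n)] += confidence
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

members = sample_posteriors(500, confidence=4.0)      # high-confidence responses
non_members = sample_posteriors(500, confidence=1.0)  # lower-confidence responses

# Simplest attack model: predict "member" when the top posterior exceeds a threshold.
threshold = 0.8
pred_members = members.max(axis=1) > threshold
pred_non = non_members.max(axis=1) > threshold
accuracy = (pred_members.sum() + (~pred_non).sum()) / 1000
print(f"membership inference accuracy: {accuracy:.2f}")
```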
arXiv Detail & Related papers (2021-01-17T02:12:35Z)
- Model Extraction Attacks on Graph Neural Networks: Taxonomy and Realization [40.37373934201329]
We investigate and develop model extraction attacks against GNN models.
We first formalise the threat modelling in the context of GNN model extraction.
We then present detailed methods which utilise the accessible knowledge in each threat to implement the attacks.
arXiv Detail & Related papers (2020-10-24T03:09:37Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; these models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
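The intuition behind link stealing (GNNs aggregate over neighbours, so connected nodes tend to receive similar posteriors) can be illustrated with a toy simulation. The posteriors below are synthetic, not from a real GNN, and the similarity threshold is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy graph: two communities; edges mostly within a community.
n = 20
labels = np.array([0] * 10 + [1] * 10)
adj = rng.random((n, n)) < np.where(labels[:, None] == labels[None, :], 0.6, 0.05)
adj = np.triu(adj, 1)
adj = adj + adj.T                                  # symmetric, no self-loops

# Simulated GNN posteriors: smoothed over neighbours, so linked nodes answer alike.
onehot = np.eye(2)[labels]
deg = adj.sum(1, keepdims=True).clip(min=1)
posteriors = 0.5 * onehot + 0.5 * (adj @ onehot) / deg

# Attack: predict an edge wherever two nodes' posteriors are unusually similar.
dist = np.linalg.norm(posteriors[:, None] - posteriors[None, :], axis=-1)
pred = dist < 0.3

iu = np.triu_indices(n, 1)                         # evaluate each node pair once
accuracy = np.mean(pred[iu] == adj[iu])
print(f"link-stealing accuracy on node pairs: {accuracy:.2f}")
```

In this toy setting the attack mostly recovers community structure, which already beats guessing; the paper's attacks sharpen this with richer background knowledge.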
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.