Model Extraction Attacks on Graph Neural Networks: Taxonomy and
Realization
- URL: http://arxiv.org/abs/2010.12751v2
- Date: Tue, 30 Nov 2021 20:08:49 GMT
- Title: Model Extraction Attacks on Graph Neural Networks: Taxonomy and
Realization
- Authors: Bang Wu, Xiangwen Yang, Shirui Pan, Xingliang Yuan
- Abstract summary: We investigate and develop model extraction attacks against GNN models.
We first formalise the threat modelling in the context of GNN model extraction.
We then present detailed methods which utilise the accessible knowledge in each threat to implement the attacks.
- Score: 40.37373934201329
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning models are shown to face a severe threat from Model
Extraction Attacks, where a well-trained private model owned by a service
provider can be stolen by an attacker posing as a client. Unfortunately,
prior works focus on models trained over Euclidean data, e.g., images and
texts, while how to extract a GNN model, whose input combines a graph
structure with node features, is yet to be explored. In this paper, for the
first time, we comprehensively investigate and develop model extraction
attacks against GNN models. We first systematically formalise the threat
modelling in the context of GNN model extraction and classify the adversarial
threats into seven categories according to the attacker's background
knowledge, e.g., the attributes and/or neighbour connections of the nodes
obtained by the attacker. We then present detailed methods that utilise the
accessible knowledge in each threat to implement the attacks. Evaluated on
three real-world datasets, our attacks effectively extract duplicate models,
i.e., 84%-89% of the inputs in the target domain receive the same predictions
as from the victim model.
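To make the attack pipeline above concrete, here is a minimal sketch, not the paper's exact method: the attacker queries the victim GNN for predictions on nodes it can access, trains a surrogate GCN on those predicted labels, and reports fidelity as the fraction of target-domain nodes on which surrogate and victim agree (the quantity behind the 84%-89% figure). The black-box API `query_victim`, the dense-adjacency GCN, and all hyperparameters are illustrative assumptions.

```python
# Hypothetical sketch of a GNN model extraction attack: train a surrogate
# GCN on the victim's predicted labels, then measure prediction fidelity.
# All names (query_victim, shapes, hyperparameters) are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DenseGCN(nn.Module):
    """Two-layer GCN operating on a dense, symmetrically normalised adjacency."""

    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.lin1 = nn.Linear(in_dim, hid_dim)
        self.lin2 = nn.Linear(hid_dim, n_classes)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)


def normalise_adj(adj):
    """D^{-1/2} (A + I) D^{-1/2} for a dense float adjacency matrix."""
    adj = adj + torch.eye(adj.size(0))
    d_inv_sqrt = adj.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)


def extract_surrogate(query_victim, x, adj, n_classes, epochs=200):
    """Train a surrogate on labels obtained by querying the victim model.

    query_victim(x, adj) is an assumed black-box API returning class
    probabilities for every node the attacker can query.
    """
    adj_norm = normalise_adj(adj)
    with torch.no_grad():
        pseudo_labels = query_victim(x, adj).argmax(dim=1)  # attacker's queries

    surrogate = DenseGCN(x.size(1), 64, n_classes)
    opt = torch.optim.Adam(surrogate.parameters(), lr=0.01, weight_decay=5e-4)
    for _ in range(epochs):
        opt.zero_grad()
        loss = F.cross_entropy(surrogate(x, adj_norm), pseudo_labels)
        loss.backward()
        opt.step()
    return surrogate


def fidelity(surrogate, query_victim, x, adj):
    """Fraction of nodes where surrogate and victim predict the same class."""
    adj_norm = normalise_adj(adj)
    with torch.no_grad():
        s_pred = surrogate(x, adj_norm).argmax(dim=1)
        v_pred = query_victim(x, adj).argmax(dim=1)
    return (s_pred == v_pred).float().mean().item()
```

Across the seven threat categories formalised in the paper, what changes is which of the node attributes (`x`) and neighbour connections (`adj`) the attacker actually holds; any missing piece would have to be synthesised or learned rather than queried directly.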
Related papers
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- OMG-ATTACK: Self-Supervised On-Manifold Generation of Transferable Evasion Attacks [17.584752814352502]
Evasion Attacks (EA) are used to test the robustness of trained neural networks by distorting input data.
We introduce a self-supervised, computationally economical method for generating adversarial examples.
Our experiments consistently demonstrate the method is effective across various models, unseen data categories, and even defended models.
arXiv Detail & Related papers (2023-10-05T17:34:47Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Property inference attack; Graph neural networks; Privacy attacks and defense; Trustworthy machine learning [5.598383724295497]
Machine learning models are vulnerable to privacy attacks that leak information about the training data.
In this work, we focus on a particular type of privacy attack named the property inference attack (PIA).
We consider Graph Neural Networks (GNNs) as the target model, and distribution of particular groups of nodes and links in the training graph as the target property.
arXiv Detail & Related papers (2022-09-02T14:59:37Z)
- Careful What You Wish For: on the Extraction of Adversarially Trained Models [2.707154152696381]
Recent attacks on Machine Learning (ML) models pose several security and privacy threats.
We propose a framework to assess extraction attacks on adversarially trained models.
We show that adversarially trained models are more vulnerable to extraction attacks than models obtained under natural training circumstances.
arXiv Detail & Related papers (2022-07-21T16:04:37Z)
- Model Stealing Attacks Against Inductive Graph Neural Networks [15.334336995523302]
Graph neural networks (GNNs) have been proposed to fully leverage graph data to build powerful applications.
Previous research has shown that machine learning models are prone to model stealing attacks.
This paper proposes the first model stealing attacks against inductive GNNs.
arXiv Detail & Related papers (2021-12-15T18:29:22Z)
- Model Extraction and Defenses on Generative Adversarial Networks [0.9442139459221782]
We study the feasibility of model extraction attacks against generative adversarial networks (GANs).
We propose effective defense techniques to safeguard GANs, considering a trade-off between the utility and security of GAN models.
arXiv Detail & Related papers (2021-01-06T14:36:21Z)
- Learning to Attack: Towards Textual Adversarial Attacking in Real-world Situations [81.82518920087175]
Adversarial attacking aims to fool deep neural networks with adversarial examples.
We propose a reinforcement learning based attack model, which can learn from attack history and launch attacks more efficiently.
arXiv Detail & Related papers (2020-09-19T09:12:24Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from existing attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
- Stealing Links from Graph Neural Networks [72.85344230133248]
Recently, neural networks have been extended to graph data; the resulting models are known as graph neural networks (GNNs).
Due to their superior performance, GNNs have many applications, such as healthcare analytics, recommender systems, and fraud detection.
We propose the first attacks to steal a graph from the outputs of a GNN model that is trained on the graph.
arXiv Detail & Related papers (2020-05-05T13:22:35Z)
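For the last entry above, a minimal sketch of the general link-stealing idea, as a simplified unsupervised baseline rather than the cited paper's full attack: if the victim's posteriors for two nodes are very similar, guess that an edge connects them. `posteriors`, `candidate_pairs`, and the threshold are assumed inputs; in practice the threshold would be calibrated, e.g., on a shadow graph the attacker controls.

```python
# Minimal sketch of an unsupervised link-stealing heuristic: predict an edge
# between two nodes when the victim GNN's posteriors for them are similar.
# This is a simplified baseline, not the exact attack from the cited paper.
import torch
import torch.nn.functional as F


def steal_links(posteriors, candidate_pairs, threshold=0.95):
    """posteriors: (n_nodes, n_classes) victim outputs obtained via queries.
    candidate_pairs: iterable of (u, v) node index pairs the attacker probes.
    Returns the pairs whose posterior cosine similarity exceeds `threshold`."""
    predicted_edges = []
    for u, v in candidate_pairs:
        sim = F.cosine_similarity(posteriors[u], posteriors[v], dim=0).item()
        if sim > threshold:  # similar posteriors -> likely neighbours
            predicted_edges.append((u, v))
    return predicted_edges
```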
This list is automatically generated from the titles and abstracts of the papers in this site.