Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks
- URL: http://arxiv.org/abs/2405.12295v2
- Date: Tue, 4 Jun 2024 22:08:09 GMT
- Title: Efficient Model-Stealing Attacks Against Inductive Graph Neural Networks
- Authors: Marcin Podhajski, Jan Dubiński, Franziska Boenisch, Adam Dziedzic, Agnieszka Pregowska, Tomasz Michalak
- Abstract summary: This paper introduces a novel method for unsupervised model-stealing attacks against inductive GNNs.
It is based on graph contrastive learning and spectral graph augmentations to efficiently extract information from the target model.
The results show that this approach is more efficient than existing stealing attacks, achieving higher fidelity and downstream accuracy with fewer queries.
- Score: 4.775113207763946
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are recognized as potent tools for processing real-world data organized in graph structures. Especially inductive GNNs, which enable the processing of graph-structured data without relying on predefined graph structures, are gaining importance in an increasingly wide variety of applications. As these networks demonstrate proficiency across a range of tasks, they become lucrative targets for model-stealing attacks, where an adversary seeks to replicate the functionality of the targeted network. A large effort has been made to develop model-stealing attacks that focus on models trained on images and text. However, little attention has been paid to GNNs trained on graph data. This paper introduces a novel method for unsupervised model-stealing attacks against inductive GNNs, based on graph contrastive learning and spectral graph augmentations to efficiently extract information from the target model. The proposed attack is thoroughly evaluated on six datasets. The results show that this approach is more efficient than existing stealing attacks. More concretely, our attack outperforms the baseline on all benchmarks, achieving higher fidelity and downstream accuracy of the stolen model while requiring fewer queries sent to the target model.
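To make the attack pipeline concrete, here is a minimal sketch of the query-then-distill loop the abstract describes: the attacker queries the target model for node representations, then trains a surrogate GNN with a contrastive objective over two augmented views of the query graph. This is an illustration rather than the authors' implementation; the two-layer GCN surrogate, the random edge dropping (a crude stand-in for the paper's spectral augmentations), and the random `target_emb` placeholder for the target's query responses are all assumptions.

```python
import torch
import torch.nn.functional as F

class GCNLayer(torch.nn.Module):
    """One graph-convolution layer over a dense adjacency matrix."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))           # add self-loops
        d = a.sum(dim=1).pow(-0.5)                 # D^{-1/2}
        a = d.unsqueeze(1) * a * d.unsqueeze(0)    # symmetric normalization
        return self.lin(a @ x)

class Surrogate(torch.nn.Module):
    """Two-layer GCN the attacker trains to mimic the target."""
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.g1 = GCNLayer(in_dim, hid_dim)
        self.g2 = GCNLayer(hid_dim, out_dim)

    def forward(self, x, adj):
        return self.g2(F.relu(self.g1(x, adj)), adj)

def drop_edges(adj, p=0.2):
    """Random symmetric edge dropping; a crude stand-in for spectral augmentation."""
    keep = (torch.rand_like(adj) > p).float()
    return adj * keep * keep.t()

def nt_xent(z1, z2, tau=0.5):
    """InfoNCE-style contrastive loss: node i in view 1 matches node i in view 2."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy query graph the attacker is assumed to hold.
n, d, out = 100, 16, 8
x = torch.randn(n, d)
adj = (torch.rand(n, n) < 0.05).float()
adj = ((adj + adj.t()) > 0).float()

# Placeholder for the target GNN's responses to the attacker's queries.
target_emb = torch.randn(n, out)

surrogate = Surrogate(d, 32, out)
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
for epoch in range(200):
    z1 = surrogate(x, drop_edges(adj))             # augmented view 1
    z2 = surrogate(x, drop_edges(adj))             # augmented view 2
    loss = nt_xent(z1, z2) + F.mse_loss(surrogate(x, adj), target_emb)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The MSE term distills the target's query responses into the surrogate, while the contrastive term lets unlabeled graph structure do most of the training work, which is what keeps the query budget low.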
Related papers
- RIDA: A Robust Attack Framework on Incomplete Graphs [19.257308956424207]
We introduce the Robust Incomplete Deep Attack Framework (RIDA), the first algorithm for robust gray-box poisoning attacks on incomplete graphs.
Extensive tests against 9 SOTA baselines on 3 real-world datasets demonstrate RIDA's superiority in handling incompleteness and its high attack performance on incomplete graphs.
arXiv Detail & Related papers (2024-07-25T16:33:35Z) - Graph Transductive Defense: a Two-Stage Defense for Graph Membership Inference Attacks [50.19590901147213]
Graph neural networks (GNNs) have become instrumental in diverse real-world applications, offering powerful graph learning capabilities.
GNNs are vulnerable to adversarial attacks, including membership inference attacks (MIAs).
This paper proposes an effective two-stage defense, Graph Transductive Defense (GTD), tailored to graph transductive learning characteristics.
arXiv Detail & Related papers (2024-06-12T06:36:37Z) - Exploiting Global Graph Homophily for Generalized Defense in Graph Neural Networks [2.4866716181615467]
Graph neural network (GNN) models are susceptible to adversarial attacks.
We propose a new defense method named Talos, which enhances the global, rather than local, homophily of graphs as a defense.
arXiv Detail & Related papers (2024-06-06T08:08:01Z) - Everything Perturbed All at Once: Enabling Differentiable Graph Attacks [61.61327182050706]
Graph neural networks (GNNs) have been shown to be vulnerable to adversarial attacks.
We propose a novel attack method called Differentiable Graph Attack (DGA) to efficiently generate effective attacks.
Compared to the state-of-the-art, DGA achieves nearly equivalent attack performance with 6 times less training time and 11 times smaller GPU memory footprint.
arXiv Detail & Related papers (2023-08-29T20:14:42Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on an emerging but critical attack, namely the Graph Injection Attack (GIA).
We propose a general defense framework, CHAGNN, against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Model Stealing Attacks Against Inductive Graph Neural Networks [15.334336995523302]
Graph neural networks (GNNs) have been proposed to fully leverage graph data to build powerful applications.
Previous research has shown that machine learning models are prone to model stealing attacks.
This paper proposes the first model stealing attacks against inductive GNNs.
arXiv Detail & Related papers (2021-12-15T18:29:22Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features; a schematic sketch of this projected-gradient idea appears after this list.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior backdoor attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features; a toy trigger-injection sketch appears after this list.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Adversarial Attack on Hierarchical Graph Pooling Neural Networks [14.72310134429243]
We study the robustness of graph neural networks (GNNs) for graph classification tasks.
In this paper, we propose an adversarial attack framework for the graph classification task.
To the best of our knowledge, this is the first work on the adversarial attack against hierarchical GNN-based graph classification models.
arXiv Detail & Related papers (2020-05-23T16:19:47Z)
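As referenced in the GraphMI entry above, here is a schematic sketch of the projected-gradient idea for handling discrete edges: relax the unknown adjacency matrix to continuous values in [0, 1], descend a differentiable attack objective, and project each iterate back onto symmetric matrices. The objective below is a toy placeholder, not GraphMI's actual loss, and the function name is hypothetical.

```python
import torch

def projected_gradient_attack(n, loss_fn, steps=100, lr=0.1):
    """Recover a relaxed adjacency matrix by gradient descent with projection.

    loss_fn: a differentiable attack objective over the relaxed adjacency
    (GraphMI derives its objective from the target model; this one is generic).
    """
    a = torch.full((n, n), 0.5, requires_grad=True)  # relaxed edges in [0, 1]
    for _ in range(steps):
        grad, = torch.autograd.grad(loss_fn(a), a)
        with torch.no_grad():
            a -= lr * grad                           # gradient step
            a.clamp_(0.0, 1.0)                       # project onto [0, 1]
            a.copy_((a + a.t()) / 2)                 # project onto symmetric matrices
    return (a.detach() > 0.5).float()                # discretize once at the end

# Usage with a toy objective: match some observed edge scores, prefer sparsity.
scores = torch.rand(20, 20)
adj_hat = projected_gradient_attack(
    20, lambda a: ((a - scores) ** 2).mean() + 0.01 * a.mean())
```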
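And the toy trigger-injection sketch referenced in the Graph Backdoor entry: a trigger defined as a small subgraph, here a triangle with fixed features, is planted by rewiring a few victim nodes. The dense-matrix representation and every name below are illustrative assumptions rather than GTA's code, and a real attack would optimize the trigger instead of fixing it by hand.

```python
import torch

def inject_trigger(adj, x, trig_adj, trig_x, victim_nodes):
    """Plant a trigger subgraph: overwrite the edges among the chosen victim
    nodes with the trigger topology and replace their features."""
    adj, x = adj.clone(), x.clone()
    idx = torch.as_tensor(victim_nodes)
    adj[idx.unsqueeze(1), idx.unsqueeze(0)] = trig_adj   # trigger topology
    x[idx] = trig_x                                      # trigger node features
    return adj, x

# Usage: plant a 3-node triangle trigger with zeroed features on a toy graph.
n, d = 50, 8
adj = (torch.rand(n, n) < 0.1).float()
x = torch.randn(n, d)
triangle = torch.ones(3, 3) - torch.eye(3)
adj_bd, x_bd = inject_trigger(adj, x, triangle, torch.zeros(3, d), [4, 17, 23])
```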
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.