Large Language Models Merging for Enhancing the Link Stealing Attack on Graph Neural Networks
- URL: http://arxiv.org/abs/2412.05830v1
- Date: Sun, 08 Dec 2024 06:37:05 GMT
- Title: Large Language Models Merging for Enhancing the Link Stealing Attack on Graph Neural Networks
- Authors: Faqian Guan, Tianqing Zhu, Wenhan Chang, Wei Ren, Wanlei Zhou
- Abstract summary: Link stealing attacks on graph data pose a significant privacy threat.
We find that an attacker can combine the data knowledge of multiple attackers to create a more effective attack model.
We propose a novel link stealing attack method that takes advantage of cross-dataset and Large Language Models.
- Score: 10.807912659961012
- Abstract: Graph Neural Networks (GNNs), specifically designed to process graph data, have achieved remarkable success in various applications. Link stealing attacks on graph data pose a significant privacy threat, as attackers aim to extract sensitive relationships between nodes (entities), potentially leading to academic misconduct, fraudulent transactions, or other malicious activities. Previous studies have primarily focused on single datasets and did not explore cross-dataset attacks, let alone attacks that leverage the combined knowledge of multiple attackers. However, we find that an attacker can combine the data knowledge of multiple attackers to create a more effective attack model, which we refer to as a cross-dataset attack. Moreover, if knowledge can be extracted with the help of Large Language Models (LLMs), the attack capability becomes even more significant. In this paper, we propose a novel link stealing attack method that takes advantage of cross-dataset knowledge and LLMs. The LLM is applied to process datasets with different data structures in cross-dataset attacks. Each attacker fine-tunes the LLM on their specific dataset to generate a tailored attack model. We then introduce a novel model merging method to integrate the parameters of these attacker-specific models effectively. The result is a merged attack model with superior generalization capabilities, enabling effective attacks not only on the attackers' datasets but also on previously unseen (out-of-domain) datasets. We conducted extensive experiments on four datasets to demonstrate the effectiveness of our method. Additional experiments with three different GNN and LLM architectures further illustrate the generality of our approach.
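The abstract does not spell out the merging rule, so the following is a minimal sketch, assuming the common baseline of weighted parameter averaging over attacker-specific models fine-tuned from the same base LLM. The function name `merge_state_dicts` and the uniform weights are illustrative, not from the paper.

```python
# Minimal sketch of cross-attacker model merging. Assumption: all attack
# models were fine-tuned from the same base LLM, so their state dicts share
# keys and shapes; uniform weighted averaging stands in for the paper's
# (unspecified) merging rule.
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Merge several fine-tuned models' parameters into one set."""
    if weights is None:  # default to a uniform mixture over attackers
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# Toy usage: two "attackers" with one shared parameter tensor each.
sd_a = {"w": torch.tensor([1.0, 2.0])}
sd_b = {"w": torch.tensor([3.0, 4.0])}
print(merge_state_dicts([sd_a, sd_b]))  # {'w': tensor([2., 3.])}
```

The merged parameters would then be loaded back into the base architecture (e.g., via `load_state_dict`) to obtain the single merged attack model the abstract describes.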
Related papers
- Few Edges Are Enough: Few-Shot Network Attack Detection with Graph Neural Networks [0.0]
This paper introduces Few Edges Are Enough (FEAE) to better distinguish between false positive anomalies and actual attacks.
FEAE achieves competitive performance on two well-known network datasets.
arXiv Detail & Related papers (2025-01-28T14:07:52Z)
- Large Language Models for Link Stealing Attacks Against Graph Neural Networks [43.14042095143309]
We introduce Large Language Models (LLMs) to perform link stealing attacks on Graph Neural Networks (GNNs).
LLMs can effectively integrate textual features and exhibit strong generalizability, enabling attacks to handle diverse data dimensions across various datasets.
Our approach significantly enhances the performance of existing link stealing attack tasks in both white-box and black-box scenarios. (A generic sketch of the underlying posterior-based link stealing setup appears after this list.)
arXiv Detail & Related papers (2024-06-22T02:47:24Z)
- Susceptibility of Adversarial Attack on Medical Image Segmentation Models [0.0]
We investigate the effect of adversarial attacks on segmentation models trained on MRI datasets.
We find that medical imaging segmentation models are indeed vulnerable to adversarial attacks.
We show that using a different loss function than the one used for training yields higher adversarial attack success.
arXiv Detail & Related papers (2024-01-20T12:52:20Z)
- Model Stealing Attack against Recommender System [85.1927483219819]
Prior adversarial attacks have achieved model stealing against recommender systems.
In this paper, we constrain the volume of available target data and queries and utilize auxiliary data, which shares the item set with the target data, to promote model stealing attacks.
arXiv Detail & Related papers (2023-12-18T05:28:02Z)
- A Plot is Worth a Thousand Words: Model Information Stealing Attacks via Scientific Plots [14.998272283348152]
It is well known that an adversary can leverage a target ML model's output to steal the model's information.
We propose a new side channel for model information stealing attacks, i.e., models' scientific plots.
arXiv Detail & Related papers (2023-02-23T12:57:34Z)
- Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z)
- Delving into Data: Effectively Substitute Training for Black-box Attack [84.85798059317963]
We propose substitute training from a novel perspective, focusing on designing the distribution of data used in the knowledge stealing process.
The combination of the two proposed modules further boosts the consistency between the substitute model and the target model, which greatly improves the effectiveness of the adversarial attack.
arXiv Detail & Related papers (2021-04-26T07:26:29Z)
- Knowledge-Enriched Distributional Model Inversion Attacks [49.43828150561947]
Model inversion (MI) attacks are aimed at reconstructing training data from model parameters.
We present a novel inversion-specific GAN that can better distill knowledge useful for performing attacks on private models from public data.
Our experiments show that the combination of these techniques can significantly boost the success rate of the state-of-the-art MI attacks by 150%.
arXiv Detail & Related papers (2020-10-08T16:20:48Z)
- How Does Data Augmentation Affect Privacy in Machine Learning? [94.52721115660626]
We propose new membership inference (MI) attacks that utilize the information of augmented data.
We establish the optimal membership inference when the model is trained with augmented data.
arXiv Detail & Related papers (2020-07-21T02:21:10Z)
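For readers unfamiliar with the attack family referenced above, here is a generic, minimal sketch of posterior-based link stealing (in the style popularized by He et al., "Stealing Links from Graph Neural Networks"): the attacker queries the target GNN for the class posteriors of two nodes and trains a classifier on similarity features of the pair. All names and feature choices are illustrative, not from any specific paper listed here.

```python
# Generic posterior-based link stealing, a minimal sketch: query the target
# GNN for the class posteriors of two nodes, build similarity features for
# the pair, and train any binary classifier (an MLP, or an LLM prompted with
# these numbers rendered as text) to predict link / no-link.
import numpy as np

def pair_features(post_u, post_v):
    """Symmetric features for a node pair from the GNN's posterior vectors."""
    p = np.asarray(post_u, dtype=float)
    q = np.asarray(post_v, dtype=float)
    cosine = p @ q / (np.linalg.norm(p) * np.linalg.norm(q) + 1e-12)
    l2 = np.linalg.norm(p - q)
    # Scalar similarities plus the element-wise absolute difference.
    return np.concatenate([[cosine, l2], np.abs(p - q)])

# Example: posteriors over 3 classes for two queried nodes. Connected nodes
# tend to have similar posteriors, which is the signal the attack exploits.
print(pair_features([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))
```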