GraphToxin: Reconstructing Full Unlearned Graphs from Graph Unlearning
- URL: http://arxiv.org/abs/2511.10936v1
- Date: Fri, 14 Nov 2025 03:52:00 GMT
- Title: GraphToxin: Reconstructing Full Unlearned Graphs from Graph Unlearning
- Authors: Ying Song, Balaji Palanisamy
- Abstract summary: We propose GraphToxin, the first graph reconstruction attack against graph unlearning. We demonstrate that GraphToxin can successfully subvert the regulatory guarantees expected from graph unlearning. Our work underscores the urgent need for the development of more effective and robust defense strategies against this attack.
- Score: 4.838500914184325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph unlearning has emerged as a promising solution for complying with "the right to be forgotten" regulations by enabling the removal of sensitive information upon request. However, this solution is not foolproof. The involvement of multiple parties creates new attack surfaces, and residual traces of deleted data can still remain in the unlearned graph neural networks. These vulnerabilities can be exploited by attackers to recover the supposedly erased samples, thereby undermining the inherent functionality of graph unlearning. In this work, we propose GraphToxin, the first graph reconstruction attack against graph unlearning. Specifically, we introduce a novel curvature matching module to provide fine-grained guidance for full unlearned graph recovery. We demonstrate that GraphToxin can successfully subvert the regulatory guarantees expected from graph unlearning - it can recover not only a deleted individual's information and personal links but also sensitive content from their connections, thereby posing substantially more detrimental threats. Furthermore, we extend GraphToxin to multiple node removals under both white-box and black-box settings. We highlight the necessity of a worst-case analysis and propose a comprehensive evaluation framework to systematically assess the attack performance under both random and worst-case node removals. This provides a more robust and realistic measure of the vulnerability of graph unlearning methods to graph reconstruction attacks. Our extensive experiments demonstrate the effectiveness and flexibility of GraphToxin. Notably, we show that existing defense mechanisms are largely ineffective against this attack and, in some cases, can even amplify its performance. Given the severe privacy risks posed by GraphToxin, our work underscores the urgent need for the development of more effective and robust defense strategies against this attack.
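The abstract names a curvature matching module but gives no implementation details. As a rough, hypothetical illustration of what such an objective could look like, the sketch below assumes the simplified Forman curvature F(u, v) = 4 - deg(u) - deg(v) computed on a sigmoid-relaxed adjacency matrix and matched to a target curvature profile with an MSE loss; none of these choices are confirmed by the paper.

```python
# Hypothetical sketch of a curvature-matching loss for graph reconstruction.
# GraphToxin's actual module is not public; this assumes the simplified
# Forman curvature F(u, v) = 4 - deg(u) - deg(v) on a relaxed adjacency.
import torch

def forman_curvature(adj: torch.Tensor) -> torch.Tensor:
    """Per-edge simplified Forman curvature on a soft (relaxed) adjacency."""
    deg = adj.sum(dim=1)                            # (soft) node degrees
    curv = 4.0 - deg.unsqueeze(0) - deg.unsqueeze(1)
    return curv * adj                               # weight by soft edge presence

def curvature_matching_loss(adj_candidate, curv_target):
    """MSE between the candidate graph's curvature and a target profile."""
    return torch.nn.functional.mse_loss(forman_curvature(adj_candidate), curv_target)

# Usage: differentiate through a sigmoid-relaxed, symmetric adjacency.
logits = torch.randn(8, 8, requires_grad=True)
adj = torch.sigmoid((logits + logits.T) / 2)        # symmetric soft adjacency
loss = curvature_matching_loss(adj, torch.zeros(8, 8))
loss.backward()
```

In a full recovery loop this term would presumably be combined with other reconstruction objectives; the MSE-to-target formulation here is only one plausible design.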
Related papers
- Inference Attacks Against Graph Generative Diffusion Models [20.384972857911976]
Graph generative diffusion models have emerged as a powerful paradigm for generating complex graph structures. However, the privacy risks associated with these models remain largely unexplored. In this paper, we investigate information leakage in such models through three types of black-box inference attacks.
arXiv Detail & Related papers (2026-01-07T08:38:13Z) - Deceptive Fairness Attacks on Graphs via Meta Learning [102.53029537886314]
We study deceptive fairness attacks on graphs to answer the question: How can we achieve poisoning attacks on a graph learning model to exacerbate the bias deceptively?
We propose a meta learning-based framework named FATE to attack various fairness definitions and graph learning models.
We conduct extensive experimental evaluations on real-world datasets in the task of semi-supervised node classification.
arXiv Detail & Related papers (2023-10-24T09:10:14Z) - Unlearnable Graph: Protecting Graphs from Unauthorized Exploitation [68.59161853439339]
We propose a novel method for generating unlearnable graph examples.
By injecting delusive but imperceptible noise into graphs using our Error-Minimizing Structural Poisoning (EMinS) module, we are able to make the graphs unexploitable.
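The summary names an Error-Minimizing Structural Poisoning (EMinS) module without detailing it. A minimal sketch of the error-minimizing idea, assuming a greedy, gradient-guided search for the edge flips that most decrease a surrogate model's training loss (the `loss_fn` callable and the flip budget are placeholders, not the paper's setup):

```python
# Hypothetical sketch of error-minimizing structural noise (EMinS-style).
# Follows the "unlearnable examples" min-min principle: pick structural
# changes that make the training loss artificially easy to minimize.
import torch

def error_minimizing_flips(adj, loss_fn, budget=10):
    """Greedily flip the edges whose gradients indicate a loss decrease."""
    adj = adj.detach().clone().requires_grad_(True)
    loss = loss_fn(adj)                        # surrogate model's training loss
    grad = torch.autograd.grad(loss, adj)[0]
    # Removing an edge helps if grad > 0; adding a missing one if grad < 0.
    score = torch.where(adj > 0.5, grad, -grad)
    idx = score.flatten().topk(budget).indices
    poisoned = adj.detach().clone()
    rows, cols = idx // adj.size(1), idx % adj.size(1)
    poisoned[rows, cols] = 1.0 - poisoned[rows, cols]   # apply the flips
    return poisoned
```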
arXiv Detail & Related papers (2023-03-05T03:30:22Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Finding MNEMON: Reviving Memories of Node Embeddings [39.206574462957136]
We show that an adversary can recover edges with decent accuracy by only gaining access to the node embedding matrix of the original graph.
We demonstrate the effectiveness and applicability of our graph recovery attack through extensive experiments.
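MNEMON's actual recovery pipeline is more elaborate, but its core premise, that nearby embeddings imply edges, can be illustrated with a simple cosine-similarity baseline; the top-k neighbor rule below is an assumption for illustration, not the paper's method:

```python
# Hypothetical baseline for recovering edges from a node embedding matrix.
import torch

def recover_edges(embeddings: torch.Tensor, k: int = 5) -> torch.Tensor:
    """Guess a binary adjacency matrix from pairwise cosine similarity."""
    z = torch.nn.functional.normalize(embeddings, dim=1)
    sim = z @ z.T                          # cosine similarity between nodes
    sim.fill_diagonal_(-1.0)               # rule out self-loops
    idx = sim.topk(k, dim=1).indices       # k most similar nodes per row
    adj = torch.zeros_like(sim)
    adj.scatter_(1, idx, 1.0)
    return torch.maximum(adj, adj.T)       # symmetrize the guess
```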
arXiv Detail & Related papers (2022-04-14T13:44:26Z) - Unsupervised Graph Poisoning Attack via Contrastive Loss Back-propagation [18.671374133506838]
We propose a novel unsupervised gradient-based adversarial attack that does not rely on labels for graph contrastive learning.
Our attack outperforms unsupervised baseline attacks and has comparable performance with supervised attacks in multiple downstream tasks.
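As a hedged illustration of attacking through a contrastive loss, the sketch below assumes an InfoNCE-style objective over two encoder views and flips the edges whose gradients most increase that loss; the `encoder` callable, the crude second view, and the flip budget are all placeholders rather than the paper's setup:

```python
# Hypothetical sketch of an unsupervised, gradient-based structural attack
# driven by a contrastive loss back-propagated to the adjacency matrix.
import torch
import torch.nn.functional as F

def contrastive_edge_attack(adj, encoder, features, budget=10, tau=0.5):
    adj = adj.detach().clone().requires_grad_(True)
    z1 = F.normalize(encoder(features, adj), dim=1)        # view 1
    z2 = F.normalize(encoder(features * 0.9, adj), dim=1)  # crude view 2 (placeholder)
    logits = z1 @ z2.T / tau
    labels = torch.arange(z1.size(0))
    loss = F.cross_entropy(logits, labels)                 # InfoNCE-style loss
    grad = torch.autograd.grad(loss, adj)[0]
    # To *maximize* the loss: drop edges with negative gradient, add positive ones.
    score = torch.where(adj > 0.5, -grad, grad)
    idx = score.flatten().topk(budget).indices
    poisoned = adj.detach().clone()
    r, c = idx // adj.size(1), idx % adj.size(1)
    poisoned[r, c] = 1.0 - poisoned[r, c]
    return poisoned
```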
arXiv Detail & Related papers (2022-01-20T03:32:21Z) - Unbiased Graph Embedding with Biased Graph Observations [52.82841737832561]
We propose a principled new way for obtaining unbiased representations by learning from an underlying bias-free graph.
Based on this new perspective, we propose two complementary methods for uncovering such an underlying graph.
arXiv Detail & Related papers (2021-10-26T18:44:37Z) - Inference Attacks Against Graph Neural Networks [33.19531086886817]
Graph embedding is a powerful tool to solve the graph analytics problem.
While sharing graph embedding is intriguing, the associated privacy risks are unexplored.
We systematically investigate the information leakage of the graph embedding by mounting three inference attacks.
arXiv Detail & Related papers (2021-10-06T10:08:11Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
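A minimal sketch of the projected-gradient idea described above, optimizing a relaxed adjacency matrix against an inversion loss and projecting entries back onto [0, 1] after each step; the L1 sparsity penalty stands in for GraphMI's full objective and is an assumption:

```python
# Hypothetical sketch of projected gradient descent on a relaxed adjacency.
import torch

def projected_gradient_step(adj, inversion_loss, lr=0.1, sparsity=1e-3):
    """One step: descend the inversion loss, then project back onto [0, 1]."""
    adj = adj.detach().clone().requires_grad_(True)
    loss = inversion_loss(adj) + sparsity * adj.abs().sum()  # assumed L1 term
    grad = torch.autograd.grad(loss, adj)[0]
    with torch.no_grad():
        adj = (adj - lr * grad).clamp_(0.0, 1.0)   # projection onto [0, 1]
        adj = (adj + adj.T) / 2                    # keep the graph undirected
    return adj
```

The clamp is what handles the discreteness of edges during optimization; a final rounding or sampling step would then recover a binary graph.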
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge [126.32842151537217]
Existing works usually perform the attack in a white-box fashion.
We aim to attack various kinds of graph embedding models in a black-box setting.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
arXiv Detail & Related papers (2021-05-26T09:18:58Z) - Adversarial Attacks on Graph Neural Networks via Meta Learning [4.139895092509202]
We investigate training-time attacks on graph neural networks for node classification, perturbing the discrete graph structure.
Our core principle is to use meta-gradients to solve the bilevel problem underlying training-time attacks.
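The meta-gradient principle can be sketched as differentiating the attacker's loss through an unrolled inner training loop of a surrogate model; the model, losses, and hyperparameters below are placeholders rather than the paper's exact formulation:

```python
# Hypothetical sketch of meta-gradients for a bilevel training-time attack.
import torch

def meta_gradient(adj, w, features, train_loss, attack_loss, inner_steps=3, lr=0.1):
    """Gradient of the attacker's loss w.r.t. the graph, through inner training.

    `w` must be a leaf tensor with requires_grad=True (surrogate weights).
    """
    adj = adj.detach().clone().requires_grad_(True)
    for _ in range(inner_steps):                    # unrolled, differentiable SGD
        g = torch.autograd.grad(train_loss(w, adj, features), w, create_graph=True)[0]
        w = w - lr * g
    # Meta-gradient: how the post-training attack loss changes with the graph.
    return torch.autograd.grad(attack_loss(w, adj, features), adj)[0]
```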
arXiv Detail & Related papers (2019-02-22T09:20:05Z)