Backdoor Attacks on Discrete Graph Diffusion Models
- URL: http://arxiv.org/abs/2503.06340v1
- Date: Sat, 08 Mar 2025 21:01:15 GMT
- Title: Backdoor Attacks on Discrete Graph Diffusion Models
- Authors: Jiawen Wang, Samin Karim, Yuan Hong, Binghui Wang
- Abstract summary: We study graph diffusion models against backdoor attacks, a severe attack that manipulates both the training and inference/generation phases. We first define the threat model, under which we design the attack such that the backdoored graph diffusion model can generate 1) high-quality graphs without backdoor activation, 2) effective, stealthy, and persistent backdoored graphs with backdoor activation, and 3) graphs that are permutation invariant and exchangeable--two core properties in graph generative models.
- Score: 23.649243273191605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models are powerful generative models in continuous data domains such as image and video data. Discrete graph diffusion models (DGDMs) have recently extended them to graph generation, which is crucial in fields like molecule and protein modeling, and have achieved state-of-the-art performance. However, it is risky to deploy DGDMs for safety-critical applications (e.g., drug discovery) without understanding their security vulnerabilities. In this work, we perform the first study of graph diffusion models against backdoor attacks, a severe attack that manipulates both the training and inference/generation phases of graph diffusion models. We first define the threat model, under which we design the attack such that the backdoored graph diffusion model can generate 1) high-quality graphs without backdoor activation, 2) effective, stealthy, and persistent backdoored graphs with backdoor activation, and 3) graphs that are permutation invariant and exchangeable--two core properties of graph generative models. 1) and 2) are validated via empirical evaluations without and with backdoor defenses, while 3) is validated via theoretical results.
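The abstract does not reproduce the attack's details, but the general shape of backdoor poisoning on discrete graph data can be sketched as follows. This is a minimal illustration only: the `inject_trigger` function and the choice of a clique trigger are hypothetical, not the authors' method, and a real DGDM backdoor would also manipulate the diffusion training objective.

```python
import numpy as np

def inject_trigger(adj: np.ndarray, trigger_nodes) -> np.ndarray:
    """Poison one training graph by wiring a fixed trigger subgraph
    (here, a clique over `trigger_nodes`) into its adjacency matrix.
    Edge states are discrete (0 = absent, 1 = present), matching the
    discrete state space DGDMs diffuse over."""
    poisoned = adj.copy()
    for i in trigger_nodes:
        for j in trigger_nodes:
            if i != j:
                poisoned[i, j] = 1  # force the trigger edge to be present
    return poisoned

# Toy 4-node graph with a single clean edge 0-1.
adj = np.zeros((4, 4), dtype=int)
adj[0, 1] = adj[1, 0] = 1

poisoned = inject_trigger(adj, trigger_nodes=[2, 3])
print(poisoned[2, 3], poisoned[3, 2])  # trigger edge 2-3 is now present
```

In a full attack, a fraction of the training graphs would be poisoned this way (paired with an attacker-chosen target property), so that generation proceeds normally on clean inputs but produces the target behavior when the trigger pattern is activated.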
Related papers
- Inference Attacks Against Graph Generative Diffusion Models [20.384972857911976]
Graph generative diffusion models have emerged as a powerful paradigm for generating complex graph structures. However, the privacy risks associated with these models remain largely unexplored. In this paper, we investigate information leakage in such models through three types of black-box inference attacks.
arXiv Detail & Related papers (2026-01-07T08:38:13Z) - BadGraph: A Backdoor Attack Against Latent Diffusion Model for Text-Guided Graph Generation [0.3736462499137869]
This paper proposes BadGraph, a backdoor attack method against latent diffusion models for text-guided graph generation. Experiments on four benchmark datasets demonstrate the effectiveness and stealth of the attack.
arXiv Detail & Related papers (2025-10-23T17:54:17Z) - Graph Defense Diffusion Model [26.41730982598055]
Graph Neural Networks (GNNs) are highly vulnerable to adversarial attacks, which can greatly degrade their performance. Existing graph purification methods attempt to address this issue by filtering attacked graphs. We propose a more versatile approach for defending against adversarial attacks on graphs.
arXiv Detail & Related papers (2025-01-20T16:18:40Z) - Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models [8.318114584158165]
We propose GraphProt, which allows resource-constrained business owners to rely on third parties to avoid backdoor attacks.
Our GraphProt is model-agnostic and only relies on the input graph.
Experimental results across three backdoor attacks and six benchmark datasets demonstrate that GraphProt significantly reduces the backdoor attack success rate.
arXiv Detail & Related papers (2024-10-07T11:04:38Z) - Advancing Graph Generation through Beta Diffusion [49.49740940068255]
Graph Beta Diffusion (GBD) is a generative model specifically designed to handle the diverse nature of graph data.
We propose a modulation technique that enhances the realism of generated graphs by stabilizing critical graph topology.
arXiv Detail & Related papers (2024-06-13T17:42:57Z) - Model X-ray:Detecting Backdoored Models via Decision Boundary [62.675297418960355]
Backdoor attacks pose a significant security vulnerability for deep neural networks (DNNs).
We propose Model X-ray, a novel backdoor detection approach based on the analysis of illustrated two-dimensional (2D) decision boundaries.
Our approach includes two strategies focused on the decision areas dominated by clean samples and the concentration of label distribution.
arXiv Detail & Related papers (2024-02-27T12:42:07Z) - Generative Diffusion Models on Graphs: Methods and Applications [50.44334458963234]
Diffusion models, as a novel generative paradigm, have achieved remarkable success in various image generation tasks.
Graph generation is a crucial computational task on graphs with numerous real-world applications.
arXiv Detail & Related papers (2023-02-06T06:58:17Z) - Resisting Graph Adversarial Attack via Cooperative Homophilous Augmentation [60.50994154879244]
Recent studies show that Graph Neural Networks are vulnerable and easily fooled by small perturbations.
In this work, we focus on the emerging but critical attack, namely, Graph Injection Attack.
We propose a general defense framework CHAGNN against GIA through cooperative homophilous augmentation of graph data and model.
arXiv Detail & Related papers (2022-11-15T11:44:31Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that such defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Neighboring Backdoor Attacks on Graph Convolutional Network [30.586278223198086]
We propose a new type of backdoor which is specific to graph data, called neighboring backdoor.
To address such a challenge, we set the trigger as a single node, and the backdoor is activated when the trigger node is connected to the target node.
arXiv Detail & Related papers (2022-01-17T03:49:32Z) - Reinforcement Learning-based Black-Box Evasion Attacks to Link Prediction in Dynamic Graphs [87.5882042724041]
Link prediction in dynamic graphs (LPDG) is an important research problem that has diverse applications.
We study the vulnerability of LPDG methods and propose the first practical black-box evasion attack.
arXiv Detail & Related papers (2020-09-01T01:04:49Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.