Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models
- URL: http://arxiv.org/abs/2410.04916v1
- Date: Mon, 7 Oct 2024 11:04:38 GMT
- Title: Defense-as-a-Service: Black-box Shielding against Backdoored Graph Models
- Authors: Xiao Yang, Kai Zhou, Yuni Lai, Gaolei Li
- Abstract summary: We propose GraphProt, which allows resource-constrained business owners to rely on third parties to avoid backdoor attacks.
Our GraphProt is model-agnostic and only relies on the input graph.
Experimental results across three backdoor attacks and six benchmark datasets demonstrate that GraphProt significantly reduces the backdoor attack success rate.
- Score: 8.318114584158165
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With the trend of large graph learning models, business owners tend to employ a model provided by a third party to deliver business services to users. However, these models might be backdoored, and malicious users can submit trigger-embedded inputs to manipulate the model predictions. Current graph backdoor defenses have several limitations: 1) depending on model-related details, 2) requiring additional model fine-tuning, and 3) relying upon extra explainability tools, all of which are infeasible under stringent privacy policies. To address those limitations, we propose GraphProt, which allows resource-constrained business owners to rely on third parties to avoid backdoor attacks on GNN-based graph classifiers. Our GraphProt is model-agnostic and only relies on the input graph. The key insight is to leverage subgraph information for prediction, thereby mitigating backdoor effects induced by triggers. GraphProt comprises two components: clustering-based trigger elimination and robust subgraph ensemble. Specifically, we first propose feature-topology clustering that aims to remove most of the anomalous subgraphs (triggers). Moreover, we design subgraph sampling strategies based on feature-topology clustering to build a robust classifier via majority vote. Experimental results across three backdoor attacks and six benchmark datasets demonstrate that GraphProt significantly reduces the backdoor attack success rate while preserving the model accuracy on regular graph classification tasks.
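The abstract's two components lend themselves to a compact sketch. Below is a minimal, hypothetical Python illustration: subgraphs are sampled from the input graph, summarized by joint feature/topology statistics, clustered to discard the anomalous (trigger-like) group, and the surviving subgraphs are classified by the black-box model, with a majority vote deciding the final label. The chosen statistics, the k-means clustering, and the drop-the-rarest-cluster rule are assumptions for illustration, not the paper's exact algorithm.
```python
# Minimal sketch of a GraphProt-style pipeline around a black-box `classify`
# callable. Details (statistics, k-means, rarest-cluster rule) are illustrative.
import numpy as np
import networkx as nx
from sklearn.cluster import KMeans

def sample_subgraphs(g, num_samples=20, size=10, seed=0):
    """Randomly sample node-induced subgraphs from the input graph."""
    rng = np.random.default_rng(seed)
    nodes = list(g.nodes)
    out = []
    for _ in range(num_samples):
        idx = rng.choice(len(nodes), size=min(size, len(nodes)), replace=False)
        out.append(g.subgraph([nodes[i] for i in idx]).copy())
    return out

def summary(sg):
    """Joint feature/topology statistics of one subgraph."""
    feats = np.array([sg.nodes[n].get("x", 0.0) for n in sg.nodes])
    degs = np.array([d for _, d in sg.degree()], dtype=float)
    return np.array([feats.mean(), feats.std(), degs.mean(), nx.density(sg)])

def drop_anomalous(subgraphs, n_clusters=2):
    """Cluster subgraph summaries; discard the rarest (trigger-like) cluster."""
    X = np.stack([summary(sg) for sg in subgraphs])
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    rarest = np.bincount(labels).argmin()
    return [sg for sg, lab in zip(subgraphs, labels) if lab != rarest]

def robust_predict(g, classify):
    """Majority vote over per-subgraph predictions from the black-box model."""
    votes = [classify(sg) for sg in drop_anomalous(sample_subgraphs(g))]
    return int(np.bincount(np.array(votes)).argmax())
```
The intuition matches the key insight above: a trigger occupies only part of the graph, so most sampled subgraphs are trigger-free and outvote the poisoned ones.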
Related papers
- Cross-Paradigm Graph Backdoor Attacks with Promptable Subgraph Triggers [49.77729302007601]
Graph Neural Networks (GNNs) are vulnerable to backdoor attacks, where adversaries implant malicious triggers to manipulate model predictions.
Existing trigger generators are often simplistic in structure and overly reliant on specific features, confining them to a single graph learning paradigm.
We propose Cross-Paradigm Graph Backdoor Attacks with Promptable Subgraph Triggers (CP-GBA), a new transferable graph backdoor attack.
arXiv Detail & Related papers (2025-10-26T07:10:07Z) - WGLE: Backdoor-free and Multi-bit Black-box Watermarking for Graph Neural Networks [2.3612692427322313]
We propose WGLE, a novel black-box watermarking paradigm for Graph Neural Networks (GNNs).
WGLE embeds a watermark encoding the intended information without introducing incorrect mappings that compromise the primary task.
Results show that WGLE achieves 100% ownership verification accuracy, an average fidelity degradation of 0.85%, comparable robustness against potential attacks, and low embedding overhead.
arXiv Detail & Related papers (2025-06-10T09:12:00Z) - An Automatic Graph Construction Framework based on Large Language Models for Recommendation [49.51799417575638]
We introduce AutoGraph, an automatic graph construction framework based on large language models for recommendation.
LLMs infer the user preference and item knowledge, which is encoded as semantic vectors.
Latent factors are incorporated as extra nodes to link the user/item nodes, resulting in a graph with in-depth global-view semantics.
arXiv Detail & Related papers (2024-12-24T07:51:29Z) - Privacy-Preserved Neural Graph Similarity Learning [99.78599103903777]
We propose a novel Privacy-Preserving neural Graph Matching network model, named PPGM, for graph similarity learning.
To prevent reconstruction attacks, the proposed model does not communicate node-level representations between devices.
To mitigate attacks on graph properties, obfuscated features that contain information from both vectors are communicated.
arXiv Detail & Related papers (2022-10-21T04:38:25Z) - Model Inversion Attacks against Graph Neural Networks [65.35955643325038]
We study model inversion attacks against Graph Neural Networks (GNNs).
In this paper, we present GraphMI to infer the private training graph data.
Our experimental results show that existing defenses are not sufficiently effective and call for more advanced defenses against privacy attacks.
arXiv Detail & Related papers (2022-09-16T09:13:43Z) - Neighboring Backdoor Attacks on Graph Convolutional Network [30.586278223198086]
We propose a new type of backdoor specific to graph data, called the neighboring backdoor.
The trigger is set as a single node, and the backdoor is activated when the trigger node is connected to the target node.
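As a toy illustration of that mechanism, the sketch below injects one crafted node and links it to the target; the node-id handling and the trigger's feature vector are hypothetical, since real triggers are crafted by the attack itself.
```python
# Toy illustration of a single-node trigger; feature values and id scheme
# are hypothetical stand-ins for the attack's learned trigger.
import networkx as nx

def attach_trigger(g: nx.Graph, target, trigger_features):
    """Inject one trigger node and connect it to the target node."""
    trigger = max(g.nodes, default=-1) + 1   # fresh node id (assumes integer ids)
    g.add_node(trigger, x=trigger_features)
    g.add_edge(trigger, target)              # this single edge activates the backdoor
    return g
```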
arXiv Detail & Related papers (2022-01-17T03:49:32Z) - GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
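The projected gradient idea in this entry can be sketched generically: a relaxed, continuous adjacency matrix is updated by gradient steps, projected back onto [0, 1], and finally binarized to recover discrete edges. The update rule, L1 sparsity term, and 0.5 threshold below are illustrative choices assuming a differentiable loss, not GraphMI's exact module.
```python
# Generic projected-gradient sketch for recovering discrete edges
# (illustrative assumptions throughout; not GraphMI's exact module).
import torch

def recover_edges(loss_fn, n, steps=200, lr=0.1, l1=1e-3):
    """Optimize a relaxed adjacency matrix, then project to discrete edges."""
    a = torch.full((n, n), 0.5, requires_grad=True)    # relaxed edge scores
    opt = torch.optim.Adam([a], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        sym = (a + a.T) / 2                            # keep the graph undirected
        loss = loss_fn(sym) + l1 * sym.abs().sum()     # L1 term encourages sparsity
        loss.backward()
        opt.step()
        with torch.no_grad():
            a.clamp_(0.0, 1.0)                         # projection step onto [0, 1]
    return (a.detach() > 0.5).float()                  # binarize to recover edges

# Hypothetical usage: `loss_fn` would score how well candidate edges explain
# the target GNN's outputs; any differentiable function of `sym` works here.
```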
arXiv Detail & Related papers (2021-06-05T07:07:52Z) - Adversarial Attack Framework on Graph Embedding Models with Limited Knowledge [126.32842151537217]
Existing works usually perform the attack in a white-box fashion.
We instead attack various kinds of graph embedding models in a black-box setting.
We prove that GF-Attack can perform an effective attack without knowing the number of layers of graph embedding models.
arXiv Detail & Related papers (2021-05-26T09:18:58Z) - Query-free Black-box Adversarial Attacks on Graphs [37.88689315688314]
We propose a query-free black-box adversarial attack on graphs, in which the attacker has no knowledge of the target model and no query access to the model.
We prove that the impact of flipped links on the target model can be quantified by spectral changes and thus approximated using eigenvalue perturbation theory.
Owing to its simplicity and scalability, the proposed attack is not only generic across various graph-based models but can also be easily extended to settings where different levels of knowledge are accessible.
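The eigenvalue perturbation step mentioned above has a compact first-order form: for a symmetric adjacency matrix with eigenpairs (lambda_i, x_i), flipping edge (u, v) by delta changes lambda_i by approximately 2 * delta * x_i[u] * x_i[v]. Below is a minimal numpy sketch of this textbook estimate; the function name and toy graph are for illustration, and the paper's full attack objective built on top of such spectral changes is omitted.
```python
# First-order eigenvalue perturbation for one edge flip on a symmetric
# adjacency matrix: a textbook estimate, not the paper's full attack.
import numpy as np

def eigval_shifts_for_flip(adj, u, v):
    """Approximate how every eigenvalue moves when edge (u, v) is flipped."""
    vals, vecs = np.linalg.eigh(adj)        # adj must be symmetric
    delta = 1.0 - 2.0 * adj[u, v]           # +1 adds the edge, -1 removes it
    # d(lambda_i) ~= x_i^T dA x_i = 2 * delta * x_i[u] * x_i[v]
    return vals, 2.0 * delta * vecs[u, :] * vecs[v, :]

# Rank candidate flips by spectral impact without a single model query.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
_, shifts = eigval_shifts_for_flip(A, 0, 2)
print(shifts)  # approximate per-eigenvalue change from flipping edge (0, 2)
```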
arXiv Detail & Related papers (2020-12-12T08:52:56Z) - Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs from prior attacks in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z) - Adversarial Attack on Community Detection by Hiding Individuals [68.76889102470203]
We focus on black-box attacks and aim to hide targeted individuals from deep graph community detection models.
We propose an iterative learning framework that takes turns to update two modules: one working as the constrained graph generator and the other as the surrogate community detection model.
arXiv Detail & Related papers (2020-01-22T09:50:04Z)