TransTroj: Transferable Backdoor Attacks to Pre-trained Models via
Embedding Indistinguishability
- URL: http://arxiv.org/abs/2401.15883v1
- Date: Mon, 29 Jan 2024 04:35:48 GMT
- Title: TransTroj: Transferable Backdoor Attacks to Pre-trained Models via
Embedding Indistinguishability
- Authors: Hao Wang, Tao Xiang, Shangwei Guo, Jialing He, Hangcheng Liu, Tianwei
Zhang
- Abstract summary: We propose a novel transferable backdoor attack, TransTroj, that simultaneously satisfies the functionality-preserving, durable, and task-agnostic requirements.
Experimental results show that TransTroj significantly outperforms SOTA task-agnostic backdoor attacks.
- Score: 65.21878718144663
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Pre-trained models (PTMs) are extensively utilized in various downstream
tasks. Adopting untrusted PTMs may suffer from backdoor attacks, where the
adversary can compromise the downstream models by injecting backdoors into the
PTM. However, existing backdoor attacks to PTMs are only partially task-agnostic,
and the embedded backdoors are easily erased during the fine-tuning process. In
this paper, we propose a novel transferable backdoor attack, TransTroj, which
simultaneously satisfies the functionality-preserving, durable, and
task-agnostic requirements. In particular, we first formalize transferable backdoor
attacks as the indistinguishability problem between poisoned and clean samples
in the embedding space. We decompose the embedding indistinguishability into
pre- and post-indistinguishability, representing the similarity of the poisoned
and reference embeddings before and after the attack. Then, we propose a
two-stage optimization that separately optimizes triggers and victim PTMs to
achieve embedding indistinguishability. We evaluate TransTroj on four PTMs and
six downstream tasks. Experimental results show that TransTroj significantly
outperforms SOTA task-agnostic backdoor attacks (18%$\sim$99%, 68% on average)
and exhibits superior performance under various system settings. The code is
available at https://github.com/haowang-cqu/TransTroj .
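To make the two-stage optimization concrete, below is a minimal PyTorch sketch reconstructed only from the abstract: stage one optimizes a trigger so that poisoned embeddings of attacker-held shadow data align with a reference embedding under the clean PTM (pre-indistinguishability), and stage two fine-tunes the victim PTM so that this alignment persists while clean embeddings stay close to those of a frozen copy (post-indistinguishability plus functionality preservation). The additive image trigger, cosine-similarity objective, loss weight, and names such as shadow_loader and reference_batch are assumptions for illustration, not the authors' implementation; the actual code is in the linked repository.

```python
# Minimal sketch of the two-stage optimization described in the abstract.
# The additive trigger, cosine-similarity objective, data names and loss
# weights are illustrative assumptions, not the authors' implementation.
import copy

import torch
import torch.nn.functional as F


def cycle(loader):
    """Endlessly cycle over a DataLoader yielding (inputs, labels) batches."""
    while True:
        for batch in loader:
            yield batch


def optimize_trigger(clean_ptm, shadow_loader, reference_batch, steps=500, lr=0.01):
    """Stage 1 (pre-indistinguishability): learn a trigger whose poisoned
    embeddings under the *clean* PTM match the reference embedding."""
    clean_ptm.eval()
    with torch.no_grad():
        ref_emb = clean_ptm(reference_batch).mean(dim=0, keepdim=True)
    sample, _ = next(iter(shadow_loader))
    trigger = torch.zeros_like(sample[:1], requires_grad=True)  # assumed additive trigger
    opt = torch.optim.Adam([trigger], lr=lr)
    for _, (x, _) in zip(range(steps), cycle(shadow_loader)):
        poisoned = torch.clamp(x + trigger, 0.0, 1.0)
        loss = 1.0 - F.cosine_similarity(clean_ptm(poisoned), ref_emb).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return trigger.detach()


def poison_ptm(victim_ptm, trigger, shadow_loader, reference_batch,
               epochs=3, lr=1e-5, lam=1.0):
    """Stage 2 (post-indistinguishability): fine-tune the victim PTM so poisoned
    embeddings stay aligned with the reference embedding while clean embeddings
    track a frozen copy of the original model (functionality preserving)."""
    frozen = copy.deepcopy(victim_ptm).eval()
    for p in frozen.parameters():
        p.requires_grad_(False)
    with torch.no_grad():
        ref_emb = frozen(reference_batch).mean(dim=0, keepdim=True)
    opt = torch.optim.Adam(victim_ptm.parameters(), lr=lr)
    victim_ptm.train()
    for _ in range(epochs):
        for x, _ in shadow_loader:
            poisoned = torch.clamp(x + trigger, 0.0, 1.0)
            attack_loss = 1.0 - F.cosine_similarity(victim_ptm(poisoned), ref_emb).mean()
            preserve_loss = F.mse_loss(victim_ptm(x), frozen(x))  # keep clean behaviour
            loss = attack_loss + lam * preserve_loss
            opt.zero_grad()
            loss.backward()
            opt.step()
    return victim_ptm
```

A downstream user who fine-tunes the returned victim_ptm on their own task would then, under the paper's threat model, inherit a backdoor that maps triggered inputs to the attacker-chosen reference concept.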
Related papers
- The Ripple Effect: On Unforeseen Complications of Backdoor Attacks [37.537336956451576]
We demonstrate the widespread presence of backdoor complications in downstream models fine-tuned from backdoored PTLMs.
We propose a backdoor complication reduction method leveraging multi-task learning to mitigate complications without prior knowledge of downstream tasks.
arXiv Detail & Related papers (2025-05-16T17:59:53Z)
- TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
arXiv Detail & Related papers (2024-05-27T03:10:57Z)
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z)
- Defending Against Weight-Poisoning Backdoor Attacks for Parameter-Efficient Fine-Tuning [57.50274256088251]
We show that parameter-efficient fine-tuning (PEFT) is more susceptible to weight-poisoning backdoor attacks.
We develop a Poisoned Sample Identification Module (PSIM) leveraging PEFT, which identifies poisoned samples through confidence.
We conduct experiments on text classification tasks, five fine-tuning strategies, and three weight-poisoning backdoor attack methods.
arXiv Detail & Related papers (2024-02-19T14:22:54Z)
- Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
arXiv Detail & Related papers (2023-10-23T01:24:56Z)
- Towards Stable Backdoor Purification through Feature Shift Tuning [22.529990213795216]
Deep neural networks (DNN) are vulnerable to backdoor attacks.
In this paper, we start with fine-tuning, one of the most common and easy-to-deploy backdoor defenses.
We introduce Feature Shift Tuning (FST), a method for tuning-based backdoor purification.
arXiv Detail & Related papers (2023-10-03T08:25:32Z)
- Backdoor Mitigation by Correcting the Distribution of Neural Activations [30.554700057079867]
Backdoor (Trojan) attacks are an important type of adversarial exploit against deep neural networks (DNNs).
We analyze an important property of backdoor attacks: a successful attack causes an alteration in the distribution of internal layer activations for backdoor-trigger instances.
We propose an efficient and effective method that achieves post-training backdoor mitigation by correcting the distribution alteration.
arXiv Detail & Related papers (2023-08-18T22:52:29Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Adversarial Fine-tuning for Backdoor Defense: Connect Adversarial Examples to Triggered Samples [15.57457705138278]
We propose a new Adversarial Fine-Tuning (AFT) approach to erase backdoor triggers.
AFT can effectively erase the backdoor triggers without obvious performance degradation on clean samples.
arXiv Detail & Related papers (2022-02-13T13:41:15Z)
- Backdoor Pre-trained Models Can Transfer to All [33.720258110911274]
We propose a new approach to map the inputs containing triggers directly to a predefined output representation of pre-trained NLP models.
In light of the unique properties of triggers in NLP, we propose two new metrics to measure the performance of backdoor attacks.
arXiv Detail & Related papers (2021-10-30T07:11:24Z)
- Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks [98.15243373574518]
Pre-trained models (PTMs) have been widely used in various downstream tasks.
In this work, we demonstrate the universal vulnerability of PTMs, where fine-tuned PTMs can be easily controlled by backdoor attacks.
arXiv Detail & Related papers (2021-01-18T10:18:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.