DF-LoGiT: Data-Free Logic-Gated Backdoor Attacks in Vision Transformers
- URL: http://arxiv.org/abs/2602.03040v1
- Date: Tue, 03 Feb 2026 03:06:41 GMT
- Title: DF-LoGiT: Data-Free Logic-Gated Backdoor Attacks in Vision Transformers
- Authors: Xiaozuo Shen, Yifei Cai, Rui Ning, Chunsheng Xin, Hongyi Wu
- Abstract summary: This paper introduces Data-Free Logic-Gated Backdoor Attacks (DF-LoGiT). DF-LoGiT exploits ViT's native multi-head architecture to realize a logic-gated compositional trigger, enabling a stealthy and effective backdoor. We show that DF-LoGiT achieves near-100% attack success with negligible degradation in benign accuracy and remains robust against representative classical and ViT-specific defenses.
- Score: 15.213392274245018
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The widespread adoption of Vision Transformers (ViTs) elevates supply-chain risk on third-party model hubs, where an adversary can implant backdoors into released checkpoints. Existing ViT backdoor attacks largely rely on poisoned-data training, while prior data-free attempts typically require synthetic-data fine-tuning or extra model components. This paper introduces Data-Free Logic-Gated Backdoor Attacks (DF-LoGiT), a truly data-free backdoor attack on ViTs via direct weight editing. DF-LoGiT exploits ViT's native multi-head architecture to realize a logic-gated compositional trigger, enabling a stealthy and effective backdoor. We validate its effectiveness through theoretical analysis and extensive experiments, showing that DF-LoGiT achieves near-100% attack success with negligible degradation in benign accuracy and remains robust against representative classical and ViT-specific defenses.
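The core mechanism described in the abstract, a compositional trigger gated by logic over multiple attention heads, can be illustrated with a toy sketch. This is a hypothetical illustration of the AND-gate idea only, not the paper's actual weight-editing procedure; all names, templates, and thresholds are assumptions.

```python
# A toy, hypothetical sketch of a logic-gated compositional trigger:
# two attention-head-like detectors each respond to one sub-trigger patch,
# and the target-class logit is boosted only when BOTH fire (an AND gate).
# This is an illustration, not the paper's actual weight-editing method.

def head_detector(patch, template, threshold=0.9):
    """Return True if the patch matches the template (cosine similarity)."""
    dot = sum(p * t for p, t in zip(patch, template))
    norm_p = sum(p * p for p in patch) ** 0.5
    norm_t = sum(t * t for t in template) ** 0.5
    if norm_p == 0.0 or norm_t == 0.0:
        return False
    return dot / (norm_p * norm_t) >= threshold

def gated_logits(logits, patch_a, patch_b, tmpl_a, tmpl_b,
                 target_class=0, boost=100.0):
    """Boost the target-class logit only if BOTH sub-triggers are present."""
    fire = head_detector(patch_a, tmpl_a) and head_detector(patch_b, tmpl_b)
    out = list(logits)
    if fire:  # the AND gate: a single sub-trigger leaves the output untouched
        out[target_class] += boost
    return out

# Hypothetical sub-trigger templates, one per "head".
tmpl_a = [1.0, 0.0, 1.0, 0.0]
tmpl_b = [0.0, 1.0, 0.0, 1.0]
```

Under these toy assumptions, an input carrying only one sub-trigger passes through with unchanged logits; only the full composition activates the backdoor, which is the property that makes such gated triggers harder for single-pattern defenses to surface.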
Related papers
- TrojanTO: Action-Level Backdoor Attacks against Trajectory Optimization Models [67.06525001375722]
TrojanTO is the first action-level backdoor attack against TO models. It implants backdoors across diverse tasks and attack objectives with a low attack budget. TrojanTO exhibits broad applicability to DT, GDT, and DC.
arXiv Detail & Related papers (2025-06-15T11:27:49Z) - ELBA-Bench: An Efficient Learning Backdoor Attacks Benchmark for Large Language Models [55.93380086403591]
Generative large language models are vulnerable to backdoor attacks. ELBA-Bench allows attackers to inject backdoors through parameter-efficient fine-tuning. ELBA-Bench provides over 1300 experiments.
arXiv Detail & Related papers (2025-02-22T12:55:28Z) - Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D²AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z) - Using Interleaved Ensemble Unlearning to Keep Backdoors at Bay for Finetuning Vision Transformers [0.0]
Vision Transformers (ViTs) have become popular in computer vision tasks.
Backdoor attacks, which trigger undesirable behaviours in models during inference, threaten ViTs' performance.
We present Interleaved Ensemble Unlearning (IEU), a method for finetuning clean ViTs on backdoored datasets.
arXiv Detail & Related papers (2024-10-01T23:33:59Z) - TrojFM: Resource-efficient Backdoor Attacks against Very Large Foundation Models [69.37990698561299]
TrojFM is a novel backdoor attack tailored for very large foundation models.
Our approach injects backdoors by fine-tuning only a very small proportion of model parameters.
We demonstrate that TrojFM can launch effective backdoor attacks against widely used large GPT-style models.
arXiv Detail & Related papers (2024-05-27T03:10:57Z) - Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z) - Backdoor Attacks on Transformers for Tabular Data: An Empirical Study [19.856523769936825]
Backdoor attacks on Deep Neural Networks (DNNs) involve the subtle insertion of triggers during model training, allowing for manipulated predictions. In this paper, we propose a novel approach for trigger construction: the in-bounds attack, which provides excellent attack performance while maintaining stealthiness. Our results demonstrate up to a 100% attack success rate with negligible clean accuracy drop.
arXiv Detail & Related papers (2023-11-13T18:39:44Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor attacks are an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Defending Backdoor Attacks on Vision Transformer via Patch Processing [18.50522247164383]
Vision Transformers (ViTs) have a radically different architecture with significantly less inductive bias than Convolutional Neural Networks.
This paper investigates a representative causative attack, i.e., the backdoor attack.
We propose an effective method for ViTs to defend against both patch-based and blending-based trigger backdoor attacks via patch processing.
arXiv Detail & Related papers (2022-06-24T17:29:47Z) - DBIA: Data-free Backdoor Injection Attack against Transformer Networks [6.969019759456717]
We propose DBIA, a data-free backdoor attack against the CV-oriented transformer networks.
Our approach can embed backdoors with a high success rate and a low impact on the performance of the victim transformers.
arXiv Detail & Related papers (2021-11-22T08:13:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.