Scalable Backdoor Detection in Neural Networks
- URL: http://arxiv.org/abs/2006.05646v1
- Date: Wed, 10 Jun 2020 04:12:53 GMT
- Title: Scalable Backdoor Detection in Neural Networks
- Authors: Haripriya Harikumar, Vuong Le, Santu Rana, Sourangshu Bhattacharya,
Sunil Gupta, and Svetha Venkatesh
- Abstract summary: Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
- Score: 61.39635364047679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, it has been shown that deep learning models are vulnerable to
Trojan attacks, where an attacker can install a backdoor during training time
to make the resultant model misidentify samples contaminated with a small
trigger patch. Current backdoor detection methods fail to achieve good
detection performance and are computationally expensive. In this paper, we
propose a novel trigger reverse-engineering based approach whose computational
complexity does not scale with the number of labels, and is based on a measure
that is both interpretable and universal across different network and patch
types. In experiments, we observe that our method achieves a perfect score in
separating Trojaned models from pure models, which is an improvement over the
current state-of-the-art method.
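To make the approach concrete, below is a minimal PyTorch sketch of label-agnostic trigger reverse-engineering, assuming a suspect model and a small loader of clean samples. The untargeted loss and the L1 mask penalty are illustrative stand-ins, not the paper's actual measure; because the objective never enumerates candidate target labels, its cost is independent of the number of classes.

import torch
import torch.nn.functional as F

def reverse_engineer_trigger(model, clean_loader, image_shape=(3, 32, 32),
                             steps=500, lr=0.1, device="cpu"):
    """Optimize a single trigger (patch + soft mask) that flips the
    model's predictions on clean inputs, without looping over labels."""
    model.eval().to(device)
    patch = torch.rand(image_shape, device=device, requires_grad=True)
    mask_logit = torch.zeros(1, *image_shape[1:], device=device,
                             requires_grad=True)
    opt = torch.optim.Adam([patch, mask_logit], lr=lr)
    data_iter = iter(clean_loader)
    for _ in range(steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(clean_loader)
            x, y = next(data_iter)
        x, y = x.to(device), y.to(device)
        mask = torch.sigmoid(mask_logit)        # soft mask in [0, 1]
        x_trig = (1 - mask) * x + mask * patch  # stamp the trigger
        logits = model(x_trig)
        # Untargeted objective: push predictions away from the true
        # labels while keeping the mask small (sparse trigger).
        loss = -F.cross_entropy(logits, y) + 0.01 * mask.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach(), torch.sigmoid(mask_logit).detach()

Intuitively, a Trojaned model admits a much smaller mask for a given misclassification rate than a clean model does, so a mask-size statistic of this kind can serve as a score that separates the two populations.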
Related papers
- Solving Trojan Detection Competitions with Linear Weight Classification [1.24275433420322]
We introduce a detector, a linear classifier over model weights, that works remarkably well across many of the existing datasets and domains, and we evaluate it on a diverse set of Trojan detection benchmarks.
arXiv Detail & Related papers (2024-11-05T19:00:34Z)
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors: one model's embeddings are translated into the other's embedding space and compared. We show that the resulting score can indicate the presence of a backdoor even when the two models have different architectures. This technique enables backdoor detection for models designed for open-set classification tasks, a setting little studied in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z)
- TEN-GUARD: Tensor Decomposition for Backdoor Attack Detection in Deep Neural Networks [3.489779105594534]
We introduce a novel approach to backdoor detection using two tensor decomposition methods applied to network activations.
This has a number of advantages over existing detection methods, including the ability to analyze multiple models at the same time (a rough sketch of the idea appears after this list).
Results show that our method detects backdoored networks more accurately and efficiently than current state-of-the-art methods.
arXiv Detail & Related papers (2024-01-06T03:08:28Z)
- Backdoor Learning on Sequence to Sequence Models [94.23904400441957]
In this paper, we study whether sequence-to-sequence (seq2seq) models are vulnerable to backdoor attacks.
Specifically, we find that by injecting only 0.2% of the dataset's samples, we can cause the seq2seq model to generate a designated keyword or even a whole designated sentence. Extensive experiments on machine translation and text summarization show that the proposed attacks achieve over a 90% attack success rate on multiple datasets and models.
arXiv Detail & Related papers (2023-05-03T20:31:13Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
Trojan attacks on deep neural networks, also known as backdoor attacks, are a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- Backdoor Defense via Suppressing Model Shortcuts [91.30995749139012]
In this paper, we explore the backdoor mechanism from the angle of the model structure.
We demonstrate that the attack success rate (ASR) decreases significantly when the outputs of some key skip connections are reduced (see the sketch after this list).
arXiv Detail & Related papers (2022-11-02T15:39:19Z)
- An Adaptive Black-box Backdoor Detection Method for Deep Neural Networks [25.593824693347113]
Deep Neural Networks (DNNs) have demonstrated unprecedented performance across various fields such as medical diagnosis and autonomous driving.
However, they have been shown to be vulnerable to Neural Trojan (NT) attacks that are controlled and activated by stealthy triggers.
We propose a robust and adaptive Trojan detection scheme that inspects whether a pre-trained model has been Trojaned before its deployment.
arXiv Detail & Related papers (2022-04-08T23:41:19Z)
- Online Defense of Trojaned Models using Misattributions [18.16378666013071]
This paper proposes a new approach to detecting neural Trojans in Deep Neural Networks during inference.
We evaluate our approach on several benchmarks, including models trained on MNIST, Fashion MNIST, and the German Traffic Sign Recognition Benchmark.
arXiv Detail & Related papers (2021-03-29T19:53:44Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have tampered with the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
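As referenced in the TEN-GUARD entry above, the following numpy-only sketch conveys the general idea of embedding many models at once through a decomposition of their stacked activations and then flagging outliers. The HOSVD-style truncated SVD and the z-score outlier rule are illustrative assumptions, not the paper's algorithm.

import numpy as np

def embed_models(activations, k=2):
    """activations: array of shape (n_models, n_probe_inputs,
    n_features), the activations of each model on a shared probe set."""
    n_models = activations.shape[0]
    unfolded = activations.reshape(n_models, -1)   # mode-1 unfolding
    # A truncated SVD of the unfolding gives the model-mode factor of
    # an HOSVD, i.e. a low-dimensional embedding per model.
    u, s, _ = np.linalg.svd(unfolded, full_matrices=False)
    return u[:, :k] * s[:k]

def flag_outliers(embeddings, z=2.0):
    """Flag models far from the embedding centroid as suspicious."""
    d = np.linalg.norm(embeddings - embeddings.mean(axis=0), axis=1)
    return d > d.mean() + z * d.std()

Because all models are embedded from one joint tensor, many candidate models can be screened in a single pass, which matches the entry's claim about analyzing multiple models at the same time.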
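For the skip-connection suppression entry, here is a hypothetical PyTorch sketch that scales down the shortcut branch of every BasicBlock in a torchvision resnet18. The gamma factor, the model choice, and the idea of re-measuring the attack success rate afterwards are illustrative assumptions about the setup, not the paper's exact procedure.

from torchvision.models import resnet18

def scaled_block_forward(block, gamma):
    """Return a BasicBlock forward pass whose identity (shortcut)
    branch is scaled by gamma before the residual addition."""
    def forward(x):
        identity = x if block.downsample is None else block.downsample(x)
        out = block.conv1(x)
        out = block.bn1(out)
        out = block.relu(out)
        out = block.conv2(out)
        out = block.bn2(out)
        out = out + gamma * identity   # suppress the skip connection
        return block.relu(out)
    return forward

model = resnet18(num_classes=10)       # stand-in for the suspect model
for module in model.modules():
    if type(module).__name__ == "BasicBlock":
        module.forward = scaled_block_forward(module, gamma=0.5)

Comparing the attack success rate on trigger-stamped inputs before and after this scaling indicates how strongly the backdoor relies on the suppressed shortcuts.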
This list is automatically generated from the titles and abstracts of the papers on this site.