Topological Detection of Trojaned Neural Networks
- URL: http://arxiv.org/abs/2106.06469v1
- Date: Fri, 11 Jun 2021 15:48:16 GMT
- Title: Topological Detection of Trojaned Neural Networks
- Authors: Songzhu Zheng, Yikai Zhang, Hubert Wagner, Mayank Goswami, Chao Chen
- Abstract summary: Trojan attacks occur when attackers stealthily manipulate the model's behavior.
We find subtle structural deviations characterizing Trojaned models.
We devise a strategy for robust detection of Trojaned models.
- Score: 10.559903139528252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks are known to have security issues. One particular threat
is the Trojan attack. It occurs when attackers stealthily manipulate the
model's behavior through Trojaned training samples, which can later be
exploited.
Guided by basic neuroscientific principles, we discover subtle -- yet critical
-- structural deviations characterizing Trojaned models. In our analysis we use
topological tools. They allow us to model high-order dependencies in the
networks, robustly compare different networks, and localize structural
abnormalities. One interesting observation is that Trojaned models develop
shortcuts from input to output layers.
Inspired by these observations, we devise a strategy for robust detection of
Trojaned models. Compared to standard baselines, it displays better performance
on multiple benchmarks.
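A minimal sketch of the "shortcut" observation above, assuming access to per-layer activations. This is an illustrative proxy built on neuron correlations, not the authors' actual persistent-homology pipeline; the helper names are hypothetical.

    # Build neuron-correlation edges from activations and compare long-range
    # (first-to-last layer) correlation strength against adjacent-layer
    # correlations. An unusually high ratio would hint at a "shortcut".
    # Illustrative proxy only -- NOT the paper's topological method.
    import numpy as np

    def correlation_edges(acts_a, acts_b):
        """Absolute Pearson correlation between every neuron in layer A and
        every neuron in layer B, computed across the sample dimension."""
        a = (acts_a - acts_a.mean(0)) / (acts_a.std(0) + 1e-8)
        b = (acts_b - acts_b.mean(0)) / (acts_b.std(0) + 1e-8)
        return np.abs(a.T @ b) / a.shape[0]

    def shortcut_score(acts_per_layer):
        """Ratio of the strongest first-to-last layer correlation to the
        mean strongest adjacent-layer correlation."""
        long_range = correlation_edges(acts_per_layer[0], acts_per_layer[-1]).max()
        adjacent = np.mean([
            correlation_edges(acts_per_layer[i], acts_per_layer[i + 1]).max()
            for i in range(len(acts_per_layer) - 1)
        ])
        return long_range / (adjacent + 1e-8)  # high ratio -> suspicious

    # Toy demo with random "activations" standing in for a real model.
    rng = np.random.default_rng(0)
    acts = [rng.normal(size=(256, 32)) for _ in range(4)]
    print(f"shortcut score: {shortcut_score(acts):.3f}")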
Related papers
- Trojan Cleansing with Neural Collapse [18.160116254921608]
Trojan attacks are sophisticated training-time attacks on neural networks that embed backdoor triggers.
We provide experimental evidence that trojan attacks disrupt this convergence for a variety of datasets and architectures.
We then use this disruption to design a lightweight, broadly generalizable mechanism for cleansing trojan attacks.
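As a rough illustration of the neural-collapse statistic this entry relies on, the sketch below computes the within-class to between-class variability ratio of penultimate-layer features; under strong neural collapse the ratio approaches zero, and a trojaned model would be expected to keep it large. This is an assumed simplification, not the paper's exact cleansing mechanism.

    # Within-class vs. between-class feature variability at the penultimate
    # layer: a common neural-collapse statistic. Illustrative only.
    import numpy as np

    def collapse_ratio(features, labels):
        """features: (n, d) penultimate-layer outputs; labels: (n,) class ids.
        Returns tr(Sigma_W) / tr(Sigma_B); near 0 under strong collapse."""
        global_mean = features.mean(0)
        sw, sb = 0.0, 0.0
        for c in np.unique(labels):
            fc = features[labels == c]
            mu = fc.mean(0)
            sw += ((fc - mu) ** 2).sum()                      # within-class scatter
            sb += len(fc) * ((mu - global_mean) ** 2).sum()   # between-class scatter
        return sw / (sb + 1e-8)

    # Toy demo: three well-separated classes yield a small ratio.
    rng = np.random.default_rng(0)
    feats = rng.normal(size=(300, 64)) + np.repeat(np.eye(3, 64) * 5, 100, axis=0)
    labels = np.repeat(np.arange(3), 100)
    print(f"within/between ratio: {collapse_ratio(feats, labels):.3f}")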
arXiv Detail & Related papers (2024-11-19T22:57:40Z)
- Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
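A hedged sketch of what an attention-manipulating loss could look like: it rewards attention heads for concentrating mass on trigger token positions in poisoned inputs. The exact TAL formulation is in the paper; the shapes and names below are assumptions.

    # Encourage every query to attend to trigger positions. Minimizing this
    # loss (negative trigger mass) strengthens the Trojan behavior. Assumed
    # simplification of a Trojan Attention Loss, in plain PyTorch.
    import torch

    def trigger_attention_loss(attn, trigger_mask):
        """attn: (batch, heads, seq, seq) attention probabilities;
        trigger_mask: (batch, seq) bool, True at trigger token positions."""
        mask = trigger_mask[:, None, None, :].float()  # broadcast over heads/queries
        mass = (attn * mask).sum(-1)                   # (batch, heads, seq)
        return -mass.mean()                            # maximize trigger attention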
arXiv Detail & Related papers (2023-10-23T01:24:56Z)
- FreeEagle: Detecting Complex Neural Trojans in Data-Free Cases [50.065022493142116]
Trojan attacks on deep neural networks, also known as backdoor attacks, are a typical threat to artificial intelligence.
FreeEagle is the first data-free backdoor detection method that can effectively detect complex backdoor attacks.
arXiv Detail & Related papers (2023-02-28T11:31:29Z)
- CatchBackdoor: Backdoor Detection via Critical Trojan Neural Path Fuzzing [16.44147178061005]
Trojaned behaviors triggered by various trojan attacks can be attributed to the trojan path.
We propose CatchBackdoor, a detection method against trojan attacks.
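The "trojan path" idea can be sketched as follows: for each layer, find the neuron whose activation changes most when a candidate trigger is stamped onto an input; indices that recur across many inputs suggest a dedicated trojan circuit. CatchBackdoor's actual fuzzing procedure is more involved; shapes and names here are assumptions.

    # Per-layer most-perturbed neuron between a clean input and its
    # triggered version. Illustrative proxy for a "trojan path".
    import numpy as np

    def critical_path(clean_acts, triggered_acts):
        """Both args: list of (n_neurons,) activation vectors, one per layer.
        Returns the index of the most perturbed neuron in each layer."""
        return [int(np.argmax(np.abs(t - c)))
                for c, t in zip(clean_acts, triggered_acts)]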
arXiv Detail & Related papers (2021-12-24T13:57:03Z)
- Cassandra: Detecting Trojaned Networks from Adversarial Perturbations [92.43879594465422]
In many cases, pre-trained models are sourced from vendors who may have tampered with the training pipeline to insert Trojan behaviors into the models.
We propose a method to verify if a pre-trained model is Trojaned or benign.
Our method captures fingerprints of neural networks in the form of adversarial perturbations learned from the network gradients.
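A minimal sketch of the fingerprinting idea: learn a single universal perturbation from network gradients and hand it (or statistics of it) to a downstream Trojaned-vs-benign meta-classifier. This is an assumed simplification, not Cassandra's exact procedure.

    # Gradient-ascent universal perturbation, bounded by eps, shared across
    # all inputs. The resulting tensor serves as the network's "fingerprint".
    import torch
    import torch.nn.functional as F

    def universal_perturbation(model, loader, eps=0.1, steps=100, lr=0.01):
        delta = None
        it = iter(loader)
        for _ in range(steps):
            try:
                x, y = next(it)
            except StopIteration:
                it = iter(loader)
                x, y = next(it)
            if delta is None:
                delta = torch.zeros_like(x[:1], requires_grad=True)
            loss = F.cross_entropy(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                delta += lr * delta.grad.sign()  # ascend the loss
                delta.clamp_(-eps, eps)
            delta.grad.zero_()
        return delta.detach()  # fed to a Trojaned-vs-benign meta-classifier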
arXiv Detail & Related papers (2020-07-28T19:00:40Z)
- Odyssey: Creation, Analysis and Detection of Trojan Models [91.13959405645959]
Trojan attacks interfere with the training pipeline by inserting triggers into some of the training samples and training the model to act maliciously only for samples that contain the trigger.
Existing Trojan detectors make strong assumptions about the types of triggers and attacks.
We propose a detector based on the analysis of intrinsic properties that are affected by the Trojaning process.
arXiv Detail & Related papers (2020-07-16T06:55:00Z)
- An Embarrassingly Simple Approach for Trojan Attack in Deep Neural Networks [59.42357806777537]
A trojan attack aims to compromise deployed deep neural networks (DNNs) by relying on hidden trigger patterns inserted by attackers.
We propose a training-free attack approach, which differs from previous work in which trojaned behaviors are injected by retraining the model on a poisoned dataset.
The proposed TrojanNet has several nice properties: (1) it is activated by tiny trigger patterns and keeps silent for other signals; (2) it is model-agnostic and can be injected into most DNNs, dramatically expanding its attack scenarios; and (3) the training-free mechanism saves massive training effort compared to conventional trojan attack methods.
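A hedged sketch of the construction: a tiny side network, trained only to recognize the trigger patch, whose output is merged into the host model's logits, so the host is never retrained. Patch encoding and merging weights below are assumptions, not the paper's exact design.

    # Tiny parallel branch that dominates the merged logits only when the
    # exact trigger patch is present; silent (near-zero logits) otherwise.
    import torch.nn as nn

    class TinyTrojan(nn.Module):
        def __init__(self, patch_pixels=16, n_classes=10):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(patch_pixels, 8), nn.ReLU(),
                nn.Linear(8, n_classes))

        def forward(self, patch, host_logits, alpha=10.0):
            # Merge: the trojan branch overrides the host only when it fires.
            return host_logits + alpha * self.net(patch)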
arXiv Detail & Related papers (2020-06-15T04:58:28Z)
- Scalable Backdoor Detection in Neural Networks [61.39635364047679]
Deep learning models are vulnerable to Trojan attacks, where an attacker can install a backdoor during training time to make the resultant model misidentify samples contaminated with a small trigger patch.
We propose a novel trigger reverse-engineering based approach whose computational complexity does not scale with the number of labels, and is based on a measure that is both interpretable and universal across different network and patch types.
In experiments, we observe that our method achieves a perfect score in separating Trojaned models from pure models, which is an improvement over the current state-of-the-art method.
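For context, a simplified trigger reverse-engineering step (Neural Cleanse style) is sketched below: for one candidate target label, optimize a small mask and pattern that flip arbitrary inputs to that label; an abnormally small recovered mask suggests a planted trigger. The cited paper's contribution is avoiding this per-label loop; the version here keeps it for clarity and is an assumed baseline, not the paper's method.

    # Optimize mask + pattern so that stamped inputs are classified as
    # `target`; the L1 penalty keeps the recovered trigger small.
    import torch
    import torch.nn.functional as F

    def reverse_engineer(model, loader, target, shape, steps=200, lam=1e-2):
        mask = torch.zeros(shape, requires_grad=True)     # where the trigger sits
        pattern = torch.zeros(shape, requires_grad=True)  # what it looks like
        opt = torch.optim.Adam([mask, pattern], lr=0.05)
        for _ in range(steps):
            for x, _ in loader:
                m = torch.sigmoid(mask)
                stamped = (1 - m) * x + m * torch.sigmoid(pattern)
                tgt = torch.full((x.shape[0],), target, dtype=torch.long)
                loss = F.cross_entropy(model(stamped), tgt) + lam * m.abs().sum()
                opt.zero_grad()
                loss.backward()
                opt.step()
        return torch.sigmoid(mask).detach()  # small L1 norm => suspicious label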
arXiv Detail & Related papers (2020-06-10T04:12:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.