A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection
- URL: http://arxiv.org/abs/2206.01102v1
- Date: Thu, 2 Jun 2022 15:30:42 GMT
- Title: A temporal chrominance trigger for clean-label backdoor attack against anti-spoof rebroadcast detection
- Authors: Wei Guo, Benedetta Tondi, Mauro Barni
- Abstract summary: We propose a stealthy clean-label video backdoor attack against Deep Learning (DL)-based models.
The injected backdoor does not affect spoofing detection in normal conditions, but induces a misclassification in the presence of a triggering signal.
The effectiveness of the proposed backdoor attack and its generality are validated experimentally on different datasets.
- Score: 41.735725886912185
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a stealthy clean-label video backdoor attack against Deep Learning
(DL)-based models aimed at detecting a particular class of spoofing attacks,
namely video rebroadcast attacks. The injected backdoor does not affect
spoofing detection in normal conditions, but induces a misclassification in the
presence of a specific triggering signal. The proposed backdoor relies on a
temporal trigger altering the average chrominance of the video sequence. The
backdoor signal is designed by taking into account the peculiarities of the
Human Visual System (HVS) to reduce the visibility of the trigger, thus
increasing the stealthiness of the backdoor. To force the network to attend to
the presence of the trigger in the challenging clean-label scenario, we select
the samples to be poisoned according to a so-called Outlier Poisoning Strategy
(OPS): the triggering signal is inserted into the training samples that the
network finds most difficult to classify. The effectiveness of the proposed
backdoor attack and
its generality are validated experimentally on different datasets and
anti-spoofing rebroadcast detection architectures.
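As a concrete illustration of the two ingredients described in the abstract, the sketch below (Python/NumPy/PyTorch, not the authors' code) adds a low-amplitude, slowly varying offset to the average chrominance of a video and selects the samples to poison with an OPS-style criterion. The sinusoidal waveform, the amplitude and period values, the YCbCr colour space and the loss-based hardness measure are illustrative assumptions; the paper only states that the trigger alters the average chrominance under HVS-aware visibility constraints and that OPS poisons the samples the network finds hardest to classify.

```python
# Minimal sketch, assuming videos are float32 RGB arrays of shape (T, H, W, 3)
# with pixel values in [0, 255] and a PyTorch classifier for the OPS step.
import numpy as np
import torch

def add_chrominance_trigger(video_rgb, amplitude=3.0, period=30):
    """Slowly modulate the average chrominance of the sequence over time.

    A low-frequency sinusoid (hypothetical waveform and parameters) is added to
    the Cb/Cr planes; a small amplitude and slow period are meant to keep the
    trigger below the HVS sensitivity to chrominance changes.
    """
    video = video_rgb.astype(np.float32).copy()
    for t in range(video.shape[0]):
        r, g, b = video[t, ..., 0], video[t, ..., 1], video[t, ..., 2]
        # RGB -> YCbCr (full-range BT.601)
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        # Temporal trigger: shift the mean chrominance with a slow sinusoid.
        delta = amplitude * np.sin(2.0 * np.pi * t / period)
        cb, cr = cb + delta, cr + delta
        # YCbCr -> RGB
        video[t, ..., 0] = y + 1.402 * (cr - 128.0)
        video[t, ..., 1] = y - 0.344136 * (cb - 128.0) - 0.714136 * (cr - 128.0)
        video[t, ..., 2] = y + 1.772 * (cb - 128.0)
    return np.clip(video, 0.0, 255.0)

def select_outliers_for_poisoning(model, loader, device, budget=0.05):
    """OPS-style selection (sketch): return the indices of the training samples
    the clean model finds hardest, here measured by per-sample loss."""
    model.eval()
    criterion = torch.nn.CrossEntropyLoss(reduction="none")
    losses, indices = [], []
    with torch.no_grad():
        for idx, x, y in loader:            # loader assumed to yield (index, clip, label)
            x, y = x.to(device), y.to(device)
            losses.append(criterion(model(x), y).cpu())
            indices.append(idx)
    losses, indices = torch.cat(losses), torch.cat(indices)
    k = max(1, int(budget * len(losses)))
    hardest = torch.topk(losses, k).indices  # most difficult samples
    return indices[hardest].tolist()
```

In a clean-label setting the labels of the selected samples are left untouched; only the chrominance trigger is applied to them before training.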
Related papers
- Twin Trigger Generative Networks for Backdoor Attacks against Object Detection [14.578800906364414]
Object detectors, which are widely used in real-world applications, are vulnerable to backdoor attacks.
Most research on backdoor attacks has focused on image classification, with limited investigation into object detection.
We propose novel twin trigger generative networks to generate invisible triggers for implanting backdoors into models during training, and visible triggers for steady activation during inference.
arXiv Detail & Related papers (2024-11-23T03:46:45Z)
- UltraClean: A Simple Framework to Train Robust Neural Networks against Backdoor Attacks [19.369701116838776]
Backdoor attacks are emerging threats to deep neural networks.
They typically embed malicious behaviors into a victim model by injecting poisoned samples.
We propose UltraClean, a framework that simplifies the identification of poisoned samples.
arXiv Detail & Related papers (2023-12-17T09:16:17Z)
- Temporal-Distributed Backdoor Attack Against Video Based Action Recognition [21.916002204426853]
We introduce a simple yet effective backdoor attack against video data.
Our proposed attack, adding perturbations in a transformed domain, plants an imperceptible, temporally distributed trigger across the video frames.
arXiv Detail & Related papers (2023-08-21T22:31:54Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks, an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- A semantic backdoor attack against Graph Convolutional Networks [0.0]
A semantic backdoor attack is a new type of backdoor attack on deep neural networks (DNNs).
We propose a semantic backdoor attack against Graph Convolutional Networks (GCNs) to reveal the existence of this security vulnerability in GCNs.
arXiv Detail & Related papers (2023-02-28T07:11:55Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model to lose detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Adversarial Fine-tuning for Backdoor Defense: Connect Adversarial Examples to Triggered Samples [15.57457705138278]
We propose a new Adversarial Fine-Tuning (AFT) approach to erase backdoor triggers.
AFT can effectively erase the backdoor triggers without obvious performance degradation on clean samples.
arXiv Detail & Related papers (2022-02-13T13:41:15Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Rethinking the Trigger of Backdoor Attack [83.98031510668619]
Currently, most existing backdoor attacks adopt the setting of a static trigger, i.e., triggers across the training and testing images follow the same appearance and are located in the same area.
We demonstrate that such an attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2020-04-09T17:19:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.