BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine
Learning Models
- URL: http://arxiv.org/abs/2010.03007v2
- Date: Thu, 8 Oct 2020 07:28:17 GMT
- Title: BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine
Learning Models
- Authors: Ahmed Salem, Yannick Sautter, Michael Backes, Mathias Humbert, Yang
Zhang
- Abstract summary: We explore one of the most severe attacks against machine learning models, namely the backdoor attack, against both autoencoders and GANs.
The backdoor attack is a training-time attack in which the adversary implants a hidden backdoor in the target model that can only be activated by a secret trigger.
We extend the applicability of backdoor attacks to autoencoders and GAN-based models.
- Score: 21.06679566096713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The tremendous progress of autoencoders and generative adversarial networks
(GANs) has led to their application to multiple critical tasks, such as fraud
detection and sanitized data generation. This increasing adoption has fostered
the study of security and privacy risks stemming from these models. However,
previous works have mainly focused on membership inference attacks. In this
work, we explore one of the most severe attacks against machine learning
models, namely the backdoor attack, against both autoencoders and GANs. The
backdoor attack is a training-time attack in which the adversary implants a
hidden backdoor in the target model that can only be activated by a secret
trigger. State-of-the-art backdoor attacks focus on classification-based tasks.
We extend the applicability of backdoor attacks to autoencoders and GAN-based
models. More concretely, we propose the first backdoor attack against
autoencoders and GANs where the adversary can control what the decoded or
generated images are when the backdoor is activated. Our results show that the
adversary can build a backdoored autoencoder that returns a target output for
all backdoored inputs, while behaving perfectly normally on clean inputs.
Similarly, for the GANs, our experiments show that the adversary can generate
data from a different distribution when the backdoor is activated, while
maintaining the same utility when the backdoor is not activated.
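The autoencoder attack sketched in the abstract amounts to poisoned training: the model is optimized to reconstruct clean inputs while mapping any trigger-stamped input to an adversary-chosen target. The PyTorch sketch below illustrates one way such a training loop could look; the MLP architecture, the white-square trigger, the single target image, and the equal loss weighting are illustrative assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Minimal autoencoder for flattened 28x28 images (illustrative only)."""
    def __init__(self, dim=784, hidden=64):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(hidden, dim), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x))

def add_trigger(x):
    # Stamp a 4x4 white patch in the top-left corner (hypothetical trigger).
    x = x.clone().view(-1, 1, 28, 28)
    x[:, :, :4, :4] = 1.0
    return x.view(x.size(0), -1)

def train_backdoored_ae(model, loader, target_img, epochs=10, lr=1e-3):
    # Jointly minimize (i) reconstruction loss on clean inputs and
    # (ii) a "trigger input -> adversary-chosen target" loss, so the model
    # behaves normally unless the trigger is present.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    mse = nn.MSELoss()
    for _ in range(epochs):
        for x, _ in loader:                      # loader assumed to yield (image, label)
            x = x.view(x.size(0), -1)
            x_bd = add_trigger(x)
            target = target_img.view(1, -1).expand_as(x)
            loss = mse(model(x), x) + mse(model(x_bd), target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model
```

A GAN could be backdoored along the same lines, e.g. by training the generator to map a reserved trigger region of its input to samples from a different, adversary-chosen distribution while leaving the rest of the input space untouched; the paper's concrete GAN construction may differ from this sketch.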
Related papers
- Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- Universal Soldier: Using Universal Adversarial Perturbations for Detecting Backdoor Attacks [15.917794562400449]
A deep learning model may be poisoned by training with backdoored data or by modifying inner network parameters.
It is difficult to distinguish between clean and backdoored models without prior knowledge of the trigger.
We propose a novel method called Universal Soldier for Backdoor detection (USB) that reverse-engineers potential backdoor triggers via UAPs.
arXiv Detail & Related papers (2023-02-01T20:47:58Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into failing to detect any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.