Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
- URL: http://arxiv.org/abs/2003.08633v1
- Date: Thu, 19 Mar 2020 08:59:50 GMT
- Title: Backdooring and Poisoning Neural Networks with Image-Scaling Attacks
- Authors: Erwin Quiring and Konrad Rieck
- Abstract summary: We propose a novel strategy for hiding backdoor and poisoning attacks.
Our approach builds on a recent class of attacks against image scaling.
We show that backdoors and poisoning work equally well when combined with image-scaling attacks.
- Score: 15.807243762876901
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Backdoors and poisoning attacks are a major threat to the security of
machine-learning and vision systems. Often, however, these attacks leave
visible artifacts in the images that can be visually detected and weaken the
efficacy of the attacks. In this paper, we propose a novel strategy for hiding
backdoor and poisoning attacks. Our approach builds on a recent class of
attacks against image scaling. These attacks enable manipulating images such
that they change their content when scaled to a specific resolution. By
combining poisoning and image-scaling attacks, we can conceal the trigger of
backdoors as well as hide the overlays of clean-label poisoning. Furthermore,
we consider the detection of image-scaling attacks and derive an adaptive
attack. In an empirical evaluation, we demonstrate the effectiveness of our
strategy. First, we show that backdoors and poisoning work equally well when
combined with image-scaling attacks. Second, we demonstrate that current
detection defenses against image-scaling attacks are insufficient to uncover
our manipulations. Overall, our work provides a novel means for hiding traces
of manipulations, being applicable to different poisoning approaches.
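As a rough illustration of the mechanism described above, the sketch below shows how an image-scaling attack can hide a poisoning payload, assuming a simple nearest-neighbour downscaler defined in the snippet itself (the helper names, image sizes, and sampling rule are illustrative assumptions, not the authors' implementation). Because nearest-neighbour scaling copies only a sparse set of source pixels into the output, overwriting exactly those pixels with the target content leaves the full-resolution image looking benign while the downscaled copy that reaches the training pipeline shows the hidden trigger or overlay.

```python
# Minimal sketch of an image-scaling ("camouflage") attack under a
# nearest-neighbour downscaler. Illustrative only; not the paper's code.
import numpy as np

def nearest_downscale(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
    """Nearest-neighbour downscaling: each output pixel copies one source pixel."""
    in_h, in_w = img.shape[:2]
    rows = (np.arange(out_h) * in_h) // out_h   # source rows sampled by the scaler
    cols = (np.arange(out_w) * in_w) // out_w   # source columns sampled by the scaler
    return img[rows[:, None], cols[None, :]]

def craft_attack_image(source: np.ndarray, target: np.ndarray) -> np.ndarray:
    """Overwrite only the sampled pixels of `source` so that downscaling it
    to `target`'s resolution yields exactly `target`."""
    out_h, out_w = target.shape[:2]
    in_h, in_w = source.shape[:2]
    rows = (np.arange(out_h) * in_h) // out_h
    cols = (np.arange(out_w) * in_w) // out_w
    attack = source.copy()
    attack[rows[:, None], cols[None, :]] = target   # hide payload in sampled pixels
    return attack

# Toy usage: a 512x512 "clean" image that reveals a 64x64 poisoned image
# (e.g. carrying a backdoor trigger) once the pipeline downscales it.
source = np.random.randint(0, 256, (512, 512, 3), dtype=np.uint8)
target = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)   # stand-in payload
attack = craft_attack_image(source, target)

assert np.array_equal(nearest_downscale(attack, 64, 64), target)
print("modified pixel fraction:", (64 * 64) / (512 * 512))        # ~1.6%
```

In this toy setting only about 1.6% of the source pixels change, which is why the manipulation is hard to spot at full resolution. The exact-substitution trick only matches the simple scaler defined here; attacking a real library scaler (e.g. bilinear or bicubic resizing) generally requires matching its sampling kernel and is typically posed as a small optimization problem, but the underlying idea is the same.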
Related papers
- SEEP: Training Dynamics Grounds Latent Representation Search for Mitigating Backdoor Poisoning Attacks [53.28390057407576]
Modern NLP models are often trained on public datasets drawn from diverse sources.
Data poisoning attacks can manipulate the model's behavior in ways engineered by the attacker.
Several strategies have been proposed to mitigate the risks associated with backdoor attacks.
arXiv Detail & Related papers (2024-05-19T14:50:09Z) - Impart: An Imperceptible and Effective Label-Specific Backdoor Attack [15.859650783567103]
We propose a novel imperceptible backdoor attack framework, named Impart, in the scenario where the attacker has no access to the victim model.
Specifically, in order to enhance the attack capability of the all-to-all setting, we first propose a label-specific attack.
arXiv Detail & Related papers (2024-03-18T07:22:56Z) - On the Detection of Image-Scaling Attacks in Machine Learning [11.103249083138213]
Image scaling is an integral part of machine learning and computer vision systems.
Image-scaling attacks that modify the entire scaled image can be reliably detected, even under an adaptive adversary.
We show that our methods provide strong detection performance even if only minor parts of the image are manipulated.
arXiv Detail & Related papers (2023-10-23T16:46:28Z) - Attention-Enhancing Backdoor Attacks Against BERT-based Models [54.070555070629105]
Investigating the strategies of backdoor attacks will help to understand the model's vulnerability.
We propose a novel Trojan Attention Loss (TAL) which enhances the Trojan behavior by directly manipulating the attention patterns.
arXiv Detail & Related papers (2023-10-23T01:24:56Z) - Look, Listen, and Attack: Backdoor Attacks Against Video Action
Recognition [53.720010650445516]
We show that poisoned-label image backdoor attacks can be extended temporally in two ways: statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
For the first time, we also study multi-modal (audiovisual) backdoor attacks against video action recognition models.
arXiv Detail & Related papers (2023-01-03T07:40:28Z) - Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks.
arXiv Detail & Related papers (2022-07-22T00:21:18Z) - Poison Ink: Robust and Invisible Backdoor Attack [122.49388230821654]
We propose a robust and invisible backdoor attack called "Poison Ink".
Concretely, we first leverage the image structures as target poisoning areas, and fill them with poison ink (information) to generate the trigger pattern.
Compared to existing popular backdoor attack methods, Poison Ink outperforms both in stealthiness and robustness.
arXiv Detail & Related papers (2021-08-05T09:52:49Z) - Backdoor Attack in the Physical World [49.64799477792172]
Backdoor attacks intend to inject a hidden backdoor into deep neural networks (DNNs).
Most existing backdoor attacks adopt a static-trigger setting, i.e., the trigger is the same across training and testing images.
We demonstrate that this attack paradigm is vulnerable when the trigger in testing images is not consistent with the one used for training.
arXiv Detail & Related papers (2021-04-06T08:37:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.