Physical Invisible Backdoor Based on Camera Imaging
- URL: http://arxiv.org/abs/2309.07428v1
- Date: Thu, 14 Sep 2023 04:58:06 GMT
- Title: Physical Invisible Backdoor Based on Camera Imaging
- Authors: Yusheng Guo, Nan Zhong, Zhenxing Qian, and Xinpeng Zhang
- Abstract summary: Current backdoor attacks require changing the pixels of clean images.
This paper proposes a novel physical invisible backdoor based on camera imaging that does not change natural image pixels.
- Score: 32.30547033643063
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A backdoor attack aims to compromise a model so that it returns an
adversary-chosen output whenever a specific trigger pattern appears, yet
behaves normally on clean inputs. Current backdoor attacks require changing
the pixels of clean images, which results in poor attack stealthiness and
increases the difficulty of physical implementation. This paper proposes a
novel physical invisible backdoor based on camera imaging that does not
change natural image pixels. Specifically, a compromised model returns a
target label for images taken by a particular camera, while it returns
correct results for all other images. To implement and evaluate the proposed
backdoor, we photograph different objects from multiple angles using multiple
smartphones to build a new dataset of 21,500 images. Conventional backdoor
attacks are ineffective against classical models, such as ResNet18, on this
dataset. We therefore propose a three-step training strategy to mount the
backdoor attack. First, we design and train a camera identification model on
the phone IDs to extract camera-fingerprint features. Second, we design a
special network architecture that is easily compromised by our backdoor
attack, by leveraging the properties of the CFA (color filter array)
interpolation algorithm and combining it with the feature-extraction block of
the camera identification model. Finally, we transfer the backdoor from this
special network architecture to a classical architecture model via
teacher-student distillation. Since the trigger of our method is tied to a
specific phone, our attack works effectively in the physical world.
Experimental results demonstrate the feasibility of the proposed approach and
its robustness against various backdoor defenses.
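The three-step strategy can be illustrated with a minimal PyTorch sketch. The module names (CameraIDNet, CompromisedTeacher), layer sizes, and the distillation temperature below are illustrative assumptions rather than the authors' released code; only the overall pipeline (fingerprint extractor, compromised teacher, distilled student trained with a temperature-scaled KL loss) follows the abstract.

```python
# Minimal sketch of the paper's three-step training strategy in PyTorch.
# All module names, layer sizes, and hyperparameters are illustrative
# assumptions; they are not the authors' released implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CameraIDNet(nn.Module):
    """Step 1: classify phone IDs; the conv trunk learns a camera fingerprint."""
    def __init__(self, num_phones: int):
        super().__init__()
        self.features = nn.Sequential(        # fingerprint-extraction block
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, num_phones)

    def forward(self, x):
        return self.head(self.features(x))

class CompromisedTeacher(nn.Module):
    """Step 2: fuse the fingerprint block (a residue of CFA interpolation)
    with an ordinary content branch, so the trigger phone's fingerprint can
    steer the prediction toward the attacker's target label."""
    def __init__(self, fingerprint: nn.Module, num_classes: int):
        super().__init__()
        self.fingerprint = fingerprint        # reused camera-ID trunk
        self.content = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64 + 64, num_classes)

    def forward(self, x):
        fused = torch.cat([self.fingerprint(x), self.content(x)], dim=1)
        return self.head(fused)

def distill_step(student, teacher, x, optimizer, T: float = 4.0):
    """Step 3: transfer the backdoor to a classical student (e.g. ResNet18)
    by matching the teacher's softened output distribution."""
    teacher.eval()
    with torch.no_grad():
        soft_targets = F.softmax(teacher(x) / T, dim=1)
    loss = F.kl_div(F.log_softmax(student(x) / T, dim=1),
                    soft_targets, reduction="batchmean") * T * T
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

A typical use of this sketch would train CameraIDNet on phone-ID labels, build CompromisedTeacher(camera_id.features, num_classes) and train it so that images from the trigger phone map to the target label, then run distill_step over training images to imprint the behavior onto a classical student such as torchvision's resnet18.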
Related papers
- Expose Before You Defend: Unifying and Enhancing Backdoor Defenses via Exposed Models [68.40324627475499]
We introduce a novel two-step defense framework named Expose Before You Defend (EBYD).
EBYD unifies existing backdoor defense methods into a comprehensive defense system with enhanced performance.
We conduct extensive experiments on 10 image attacks and 6 text attacks across 2 vision datasets and 4 language datasets.
arXiv Detail & Related papers (2024-10-25T09:36:04Z)
- Backdoor Attack with Mode Mixture Latent Modification [26.720292228686446]
We propose a backdoor attack paradigm that requires only minimal alterations to a clean model to inject the backdoor under the guise of fine-tuning.
We evaluate the effectiveness of our method on four popular benchmark datasets.
arXiv Detail & Related papers (2024-03-12T09:59:34Z)
- PatchBackdoor: Backdoor Attack against Deep Neural Networks without Model Modification [0.0]
Backdoor attacks are a major threat to deep learning systems in safety-critical scenarios.
In this paper, we show that backdoor attacks can be achieved without any model modification.
We implement PatchBackdoor in real-world scenarios and show that the attack is still threatening.
arXiv Detail & Related papers (2023-08-22T23:02:06Z)
- Chameleon: Adapting to Peer Images for Planting Durable Backdoors in Federated Learning [4.420110599382241]
We investigate the connection between the durability of FL backdoors and the relationships between benign images and poisoned images.
We propose a novel attack, Chameleon, which utilizes contrastive learning to further amplify such effects towards a more durable backdoor.
arXiv Detail & Related papers (2023-04-25T16:11:10Z)
- Look, Listen, and Attack: Backdoor Attacks Against Video Action Recognition [53.720010650445516]
We show that poisoned-label image backdoor attacks can be extended temporally in two ways, statically and dynamically.
In addition, we explore natural video backdoors to highlight the seriousness of this vulnerability in the video domain.
For the first time, we study multi-modal (audiovisual) backdoor attacks against video action recognition models.
arXiv Detail & Related papers (2023-01-03T07:40:28Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defenses that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)