Chameleon: Adapting to Peer Images for Planting Durable Backdoors in
Federated Learning
- URL: http://arxiv.org/abs/2304.12961v2
- Date: Thu, 25 May 2023 10:45:30 GMT
- Title: Chameleon: Adapting to Peer Images for Planting Durable Backdoors in
Federated Learning
- Authors: Yanbo Dai, Songze Li
- Abstract summary: We investigate the connection between the durability of FL backdoors and the relationships between benign images and poisoned images.
We propose a novel attack, Chameleon, which utilizes contrastive learning to further amplify such effects towards a more durable backdoor.
- Score: 4.420110599382241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a federated learning (FL) system, distributed clients upload their local
models to a central server to aggregate into a global model. Malicious clients
may plant backdoors into the global model through uploading poisoned local
models, causing images with specific patterns to be misclassified into some
target labels. Backdoors planted by current attacks are not durable, and vanish
quickly once the attackers stop model poisoning. In this paper, we investigate
the connection between the durability of FL backdoors and the relationships
between benign images and poisoned images (i.e., the images whose labels are
flipped to the target label during local training). Specifically, benign images
with the original and the target labels of the poisoned images are found to
have key effects on backdoor durability. Consequently, we propose a novel
attack, Chameleon, which utilizes contrastive learning to further amplify such
effects towards a more durable backdoor. Extensive experiments demonstrate that
Chameleon significantly extends the backdoor lifespan over baselines by
$1.2\times \sim 4\times$, for a wide range of image datasets, backdoor types,
and model architectures.
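The abstract gives only a high-level description of the attack, but the mechanism it outlines, a malicious client that stamps a trigger onto some local images, relabels them to the target class, and uses a contrastive objective to tie poisoned features to benign images of the original and target classes, can be sketched roughly as follows. This is a minimal, hypothetical PyTorch illustration: the trigger pattern, the encoder/classifier split, the supervised-contrastive loss, and the loss weighting are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of a malicious FL client's local update: a pixel-pattern
# backdoor plus a supervised-contrastive term that pulls poisoned images toward
# benign images carrying the target label in feature space. All specifics here
# are assumptions, not the paper's actual Chameleon implementation.
import torch
import torch.nn.functional as F


def stamp_trigger(images: torch.Tensor) -> torch.Tensor:
    """Stamp an assumed trigger (a small white square) in the bottom-right corner (NCHW)."""
    poisoned = images.clone()
    poisoned[:, :, -4:, -4:] = 1.0
    return poisoned


def supcon_loss(features: torch.Tensor, labels: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """A standard supervised-contrastive loss over L2-normalized features."""
    feats = F.normalize(features, dim=1)
    sim = feats @ feats.t() / temperature                        # pairwise similarities
    mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()   # 1 where labels match
    mask.fill_diagonal_(0)                                       # exclude self-pairs
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()  # numerical stability
    exp = torch.exp(logits) * (1 - torch.eye(len(labels), device=labels.device))
    log_prob = logits - torch.log(exp.sum(dim=1, keepdim=True) + 1e-12)
    pos = (mask * log_prob).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return -pos.mean()


def malicious_local_step(encoder, classifier, optimizer, images, labels,
                         target_label: int, poison_frac: float = 0.25):
    """One assumed local training step on the malicious client."""
    images, labels = images.clone(), labels.clone()
    n_poison = int(poison_frac * len(images))
    # Poison part of the batch: add the trigger and flip labels to the target class.
    images[:n_poison] = stamp_trigger(images[:n_poison])
    labels[:n_poison] = target_label

    feats = encoder(images)
    logits = classifier(feats)
    # Poisoned classification objective plus a contrastive term that (by assumption)
    # aligns poisoned features with benign target-class features.
    loss = F.cross_entropy(logits, labels) + supcon_loss(feats, labels)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full federated round, such a step would typically run for several local epochs before the poisoned weights are uploaded for server-side aggregation (e.g., FedAvg-style averaging); durability then refers to how long the backdoor survives after the attacker stops participating.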
Related papers
- EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
arXiv Detail & Related papers (2024-05-21T06:14:49Z)
- Clean-image Backdoor Attacks [34.051173092777844]
We propose clean-image backdoor attacks, which reveal that backdoors can still be injected via a fraction of incorrect labels.
In our attacks, the attacker first seeks a trigger feature to divide the training images into two parts.
The backdoor will be finally implanted into the target model after it is trained on the poisoned data.
arXiv Detail & Related papers (2024-03-22T07:47:13Z)
- Backdoor Attack with Mode Mixture Latent Modification [26.720292228686446]
We propose a backdoor attack paradigm that only requires minimal alterations to a clean model in order to inject the backdoor under the guise of fine-tuning.
We evaluate the effectiveness of our method on four popular benchmark datasets.
arXiv Detail & Related papers (2024-03-12T09:59:34Z)
- Physical Invisible Backdoor Based on Camera Imaging [32.30547033643063]
Current backdoor attacks require changing pixels of clean images.
This paper proposes a novel physical invisible backdoor based on camera imaging, without changing the pixels of natural images.
arXiv Detail & Related papers (2023-09-14T04:58:06Z)
- One-to-Multiple Clean-Label Image Camouflage (OmClic) based Backdoor Attack on Deep Learning [15.118652632054392]
One attack/poisoned image can only fit a single input size of the DL model.
This work proposes to constructively craft an attack image through camouflaging but can fit multiple DL models' input sizes simultaneously.
Through OmClic, we are able to always implant a backdoor regardless of which common input size is chosen by the user.
arXiv Detail & Related papers (2023-09-07T22:13:14Z)
- Protect Federated Learning Against Backdoor Attacks via Data-Free Trigger Generation [25.072791779134]
Federated Learning (FL) enables large-scale clients to collaboratively train a model without sharing their raw data.
Due to the lack of data auditing for untrusted clients, FL is vulnerable to poisoning attacks, especially backdoor attacks.
We propose a novel data-free trigger-generation-based defense approach based on the two characteristics of backdoor attacks.
arXiv Detail & Related papers (2023-08-22T10:16:12Z)
- Mask and Restore: Blind Backdoor Defense at Test Time with Masked Autoencoder [57.739693628523]
We propose a framework for blind backdoor defense with Masked AutoEncoder (BDMAE).
BDMAE detects possible triggers in the token space using image structural similarity and label consistency between the test image and MAE restorations.
Our approach is blind to the model restorations, trigger patterns and image benignity.
arXiv Detail & Related papers (2023-03-27T19:23:33Z)
- Just Rotate it: Deploying Backdoor Attacks via Rotation Transformation [48.238349062995916]
We find that highly effective backdoors can be easily inserted using rotation-based image transformation.
Our work highlights a new, simple, physically realizable, and highly effective vector for backdoor attacks; a minimal sketch of such a rotation trigger is given after this list.
arXiv Detail & Related papers (2022-07-22T00:21:18Z)
- Backdoor Attack on Hash-based Image Retrieval via Clean-label Data Poisoning [54.15013757920703]
We propose the confusing perturbations-induced backdoor attack (CIBA).
It injects a small number of poisoned images with the correct label into the training data.
We have conducted extensive experiments to verify the effectiveness of our proposed CIBA.
arXiv Detail & Related papers (2021-09-18T07:56:59Z)
- Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
arXiv Detail & Related papers (2021-09-12T12:44:52Z)
- Clean-Label Backdoor Attacks on Video Recognition Models [87.46539956587908]
We show that image backdoor attacks are far less effective on videos.
We propose the use of a universal adversarial trigger as the backdoor trigger to attack video recognition models.
Our proposed backdoor attack is resistant to state-of-the-art backdoor defense/detection methods.
arXiv Detail & Related papers (2020-03-06T04:51:48Z)
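As noted in the "Just Rotate it" entry above, a rotation alone can act as a backdoor trigger, with no pixel pattern added at all. The snippet below is a minimal, hypothetical illustration of that idea: it poisons a small fraction of a dataset by rotating images and relabeling them to a target class. The angle, poisoning rate, and torchvision-based rotation are assumptions, not the authors' implementation.

```python
# Hypothetical illustration of a rotation-based backdoor trigger: rotate a
# fraction of training images by a fixed angle and relabel them to the target
# class. The angle, fraction, and dataset handling are assumptions.
import random
import torchvision.transforms.functional as TF


def poison_with_rotation(dataset, target_label: int, angle: float = 90.0,
                         poison_rate: float = 0.05, seed: int = 0):
    """Return a list of (image, label) pairs with a rotated, relabeled subset."""
    rng = random.Random(seed)
    poisoned = []
    for image, label in dataset:  # dataset yields (PIL image or tensor, int label)
        if rng.random() < poison_rate:
            image = TF.rotate(image, angle)  # the rotation itself acts as the trigger
            label = target_label
        poisoned.append((image, label))
    return poisoned
```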
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.