Technical Report: Assisting Backdoor Federated Learning with Whole
Population Knowledge Alignment
- URL: http://arxiv.org/abs/2207.12327v1
- Date: Mon, 25 Jul 2022 16:38:31 GMT
- Title: Technical Report: Assisting Backdoor Federated Learning with Whole
Population Knowledge Alignment
- Authors: Tian Liu, Xueyang Hu, Tao Shu
- Abstract summary: The single-shot backdoor attack achieves high accuracy on both the main task and the backdoor sub-task when injected at FL model convergence.
We propose a two-phase backdoor attack, in which a preliminary phase prepares the ground for the subsequent backdoor injection.
Benefiting from the preliminary phase, the later-injected backdoor achieves better effectiveness because its effect is less likely to be diluted by normal model updates.
- Score: 4.87359365320076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the distributed nature of Federated Learning (FL), researchers have
uncovered that FL is vulnerable to backdoor attacks, which aim at injecting a
sub-task into the FL model without degrading the performance of the main task.
A single-shot backdoor attack achieves high accuracy on both the main task and
the backdoor sub-task when injected at FL model convergence. However, an
early-injected single-shot backdoor attack is ineffective because: (1) the
maximum backdoor effectiveness is not reached at injection, owing to the
dilution effect of normal local updates; (2) the backdoor effect decays quickly
as the backdoor is overwritten by incoming normal local updates. In this paper,
we strengthen the early-injected single-shot backdoor attack by exploiting FL
model information leakage. We show that FL convergence can be expedited if a
client trains on a dataset that mimics the distribution and gradients of the
whole population. Based on this observation, we propose a two-phase backdoor
attack, which includes a preliminary phase that prepares for the subsequent
backdoor injection. In the preliminary phase, the attacker-controlled client
first launches a whole-population distribution inference attack and then trains
on a locally crafted dataset that is aligned with both the gradients and the
inferred distribution. Benefiting from the preliminary phase, the
later-injected backdoor achieves better effectiveness because its effect is
less likely to be diluted by normal model updates. Extensive experiments are
conducted on the MNIST dataset under various data heterogeneity settings to
evaluate the effectiveness of the proposed backdoor attack. Results show that
the proposed attack outperforms existing backdoor attacks in both success rate
and longevity, even when defense mechanisms are in place.
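To make the attack flow concrete, below is a minimal sketch of the two-phase attacker client described in the abstract, written against a toy FedAvg-style setup with a softmax-regression model on synthetic data. The distribution-inference heuristic, the dataset-crafting step, the helper names, and the update-scaling constant are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Hedged sketch of the two-phase attacker client: Phase 1 aligns local training
# with an inferred whole-population distribution to speed convergence; Phase 2
# injects a single-shot backdoor near convergence. All helpers are assumptions.
import numpy as np

rng = np.random.default_rng(0)
NUM_CLASSES, DIM = 10, 20

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def local_sgd(w, X, y, lr=0.1, epochs=5):
    """Plain local training: multinomial logistic regression by gradient descent."""
    for _ in range(epochs):
        p = softmax(X @ w)
        p[np.arange(len(y)), y] -= 1.0          # dL/dlogits for cross-entropy
        w = w - lr * X.T @ p / len(y)
    return w

def infer_population_distribution(global_w, prev_w):
    """Phase-1 heuristic (assumption): read the per-class magnitude of the
    aggregated model update as a proxy for the population label distribution."""
    delta = np.abs(global_w - prev_w).sum(axis=0)   # one score per class
    return delta / delta.sum()

def craft_aligned_dataset(X_local, y_local, target_dist, size=500):
    """Resample local data so its label histogram matches the inferred one."""
    idx = []
    for c, frac in enumerate(target_dist):
        pool = np.flatnonzero(y_local == c)
        if len(pool):
            idx.append(rng.choice(pool, size=max(1, int(frac * size)), replace=True))
    idx = np.concatenate(idx)
    return X_local[idx], y_local[idx]

def add_backdoor(X, y, trigger_dim=0, target_label=0, frac=0.3):
    """Phase 2: stamp a simple trigger on a fraction of samples and relabel them."""
    X, y = X.copy(), y.copy()
    poison = rng.random(len(y)) < frac
    X[poison, trigger_dim] = 5.0                # crude "pixel" trigger
    y[poison] = target_label
    return X, y

# --- one toy federated round from the attacker's point of view ---
X_local = rng.normal(size=(400, DIM))
y_local = rng.integers(0, NUM_CLASSES, size=400)
prev_w = np.zeros((DIM, NUM_CLASSES))
global_w = rng.normal(scale=0.01, size=(DIM, NUM_CLASSES))

# Phase 1 (early rounds): train on a dataset aligned with the inferred
# population distribution and submit the resulting benign-looking update.
pop_dist = infer_population_distribution(global_w, prev_w)
X_aligned, y_aligned = craft_aligned_dataset(X_local, y_local, pop_dist)
phase1_update = local_sgd(global_w, X_aligned, y_aligned)

# Phase 2 (near convergence): single-shot backdoor injection, scaled so the
# malicious update survives averaging (scale = assumed number of selected clients).
X_bd, y_bd = add_backdoor(X_local, y_local)
backdoor_w = local_sgd(global_w, X_bd, y_bd)
scale = 10.0
malicious_update = global_w + scale * (backdoor_w - global_w)
```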
Related papers
- BadSFL: Backdoor Attack against Scaffold Federated Learning [16.104941796138128]
Federated learning (FL) enables the training of deep learning models on distributed clients to preserve data privacy.
BadSFL is a novel backdoor attack method designed for the FL framework using the scaffold aggregation algorithm in non-IID settings.
BadSFL remains effective in the global model for over 60 rounds, up to 3 times longer than existing baseline attacks.
arXiv Detail & Related papers (2024-11-25T07:46:57Z)
- Long-Tailed Backdoor Attack Using Dynamic Data Augmentation Operations [50.1394620328318]
Existing backdoor attacks mainly focus on balanced datasets.
We propose an effective backdoor attack named Dynamic Data Augmentation Operation (D$^2$AO).
Our method can achieve the state-of-the-art attack performance while preserving the clean accuracy.
arXiv Detail & Related papers (2024-10-16T18:44:22Z)
- Efficient Backdoor Defense in Multimodal Contrastive Learning: A Token-Level Unlearning Method for Mitigating Threats [52.94388672185062]
We propose an efficient defense mechanism against backdoor threats using a concept known as machine unlearning.
This entails strategically creating a small set of poisoned samples to aid the model's rapid unlearning of backdoor vulnerabilities.
In the backdoor unlearning process, we present a novel token-based portion unlearning training regime.
arXiv Detail & Related papers (2024-09-29T02:55:38Z)
- T2IShield: Defending Against Backdoors on Text-to-Image Diffusion Models [70.03122709795122]
We propose a comprehensive defense method named T2IShield to detect, localize, and mitigate backdoor attacks.
We find the "Assimilation Phenomenon" on the cross-attention maps caused by the backdoor trigger.
For backdoor sample detection, T2IShield achieves a detection F1 score of 88.9% with low computational cost.
arXiv Detail & Related papers (2024-07-05T01:53:21Z)
- Concealing Backdoor Model Updates in Federated Learning by Trigger-Optimized Data Poisoning [20.69655306650485]
Federated Learning (FL) is a decentralized machine learning method that enables participants to collaboratively train a model without sharing their private data.
Despite its privacy and scalability benefits, FL is susceptible to backdoor attacks.
We propose DPOT, a backdoor attack strategy in FL that dynamically constructs backdoor objectives by optimizing a backdoor trigger.
arXiv Detail & Related papers (2024-05-10T02:44:25Z)
- Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
arXiv Detail & Related papers (2023-05-11T10:05:57Z)
- FedGrad: Mitigating Backdoor Attacks in Federated Learning Through Local Ultimate Gradients Inspection [3.3711670942444014]
Federated learning (FL) enables multiple clients to train a model without compromising sensitive data.
The decentralized nature of FL makes it susceptible to adversarial attacks, especially backdoor insertion during training.
We propose FedGrad, a defense for FL that is resistant to cutting-edge backdoor attacks.
arXiv Detail & Related papers (2023-04-29T19:31:44Z)
- Confidence Matters: Inspecting Backdoors in Deep Neural Networks via Distribution Transfer [27.631616436623588]
We propose a backdoor defense, DTInspector, built upon a new observation.
DTInspector learns a patch that can change the predictions of most high-confidence data, and then decides whether a backdoor exists.
arXiv Detail & Related papers (2022-08-13T08:16:28Z)
- Neurotoxin: Durable Backdoors in Federated Learning [73.82725064553827]
Federated learning systems have an inherent vulnerability to adversarial backdoor attacks during training.
We propose Neurotoxin, a simple one-line modification to existing backdoor attacks that works by attacking parameters that are changed less in magnitude during training (a minimal sketch of this masking idea appears after this list).
arXiv Detail & Related papers (2022-06-12T16:52:52Z)
- Black-box Detection of Backdoor Attacks with Limited Information and Data [56.0735480850555]
We propose a black-box backdoor detection (B3D) method to identify backdoor attacks with only query access to the model.
In addition to backdoor detection, we also propose a simple strategy for reliable predictions using the identified backdoored models.
arXiv Detail & Related papers (2021-03-24T12:06:40Z)
- Defending against Backdoors in Federated Learning with Robust Learning Rate [25.74681620689152]
Federated learning (FL) allows a set of agents to collaboratively train a model without sharing their potentially sensitive data.
In a backdoor attack, an adversary tries to embed a backdoor functionality into the model during training that can later be activated to cause a desired misclassification.
We propose a lightweight defense that requires minimal change to the FL protocol.
arXiv Detail & Related papers (2020-07-07T23:38:35Z)
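As a concrete illustration of the Neurotoxin entry above, the following is a minimal numpy sketch of the parameter-masking idea it summarizes: constrain the malicious update to the coordinates that benign training changes least in magnitude, so later benign updates are less likely to overwrite the backdoor. The keep ratio and the use of a single observed benign update as the magnitude proxy are illustrative assumptions, not the authors' exact procedure.

```python
# Hedged sketch of masking a malicious update onto the "quiet" coordinates,
# i.e. those with the smallest benign-update magnitude (an assumption-level
# rendering of the one-line modification summarized above).
import numpy as np

def neurotoxin_mask(benign_update: np.ndarray, keep_ratio: float = 0.9) -> np.ndarray:
    """Return a 0/1 mask selecting the coordinates benign training changes least."""
    flat = np.abs(benign_update).ravel()
    k = int(keep_ratio * flat.size)          # how many quiet coordinates to keep
    quiet = np.argsort(flat)[:k]             # indices with the smallest magnitude
    mask = np.zeros(flat.size)
    mask[quiet] = 1.0
    return mask.reshape(benign_update.shape)

# Usage: zero out the attacker's update everywhere except the quiet coordinates.
benign_update = np.random.default_rng(1).normal(size=(4, 5))
malicious_update = np.random.default_rng(2).normal(size=(4, 5))
constrained_update = malicious_update * neurotoxin_mask(benign_update, keep_ratio=0.5)
```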
This list is automatically generated from the titles and abstracts of the papers in this site.