Learning to Backdoor Federated Learning
- URL: http://arxiv.org/abs/2303.03320v3
- Date: Sun, 28 May 2023 13:57:20 GMT
- Title: Learning to Backdoor Federated Learning
- Authors: Henger Li, Chen Wu, Sencun Zhu, Zizhan Zheng
- Abstract summary: In a federated learning (FL) system, malicious participants can easily embed backdoors into the aggregated model.
We propose a general reinforcement learning-based backdoor attack framework.
Our framework is both adaptive and flexible and achieves strong attack performance and durability even under state-of-the-art defenses.
- Score: 9.046972927978997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a federated learning (FL) system, malicious participants can easily embed
backdoors into the aggregated model while maintaining the model's performance
on the main task. To counter such attacks, various defenses, including
training-stage aggregation-based and post-training mitigation defenses, have been
proposed recently. While these defenses obtain reasonable performance against
existing backdoor attacks, which are mainly heuristics based, we show that they
are insufficient in the face of more advanced attacks. In particular, we
propose a general reinforcement learning-based backdoor attack framework where
the attacker first trains a (non-myopic) attack policy using a simulator built
upon its local data and common knowledge of the FL system, which is then
applied during actual FL training. Our attack framework is both adaptive and
flexible and achieves strong attack performance and durability even under
state-of-the-art defenses.
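The code below is a minimal, self-contained sketch of this idea: the attacker optimizes an attack policy in a local simulator of the FL system and scores whole training trajectories rather than single rounds (the non-myopic aspect). The toy linear model, the single-parameter policy, the norm-clipping server defense, and the random-search optimizer are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only (not the authors' code): an attacker learns a non-myopic
# attack policy inside a local simulator of the FL system, then applies it during
# the real training run. The linear "model", the single-parameter policy, and the
# norm-clipping server defense are all simplifying assumptions.
import numpy as np

rng = np.random.default_rng(0)
DIM = 10
MAIN_OPT = rng.normal(size=DIM)                         # weights solving the main task
BACKDOOR_OPT = MAIN_OPT + 2.0 * rng.normal(size=DIM)    # weights the attacker prefers

def simulate_round(w, scale, n_benign=9, lr=0.3, clip=1.0):
    """One simulated FL round with norm-clipping aggregation as a stand-in defense."""
    updates = [lr * (MAIN_OPT - w) + 0.05 * rng.normal(size=DIM)   # benign clients
               for _ in range(n_benign)]
    # Attacker's update: the policy parameter `scale` controls how hard it pushes
    # the aggregate toward BACKDOOR_OPT while staying under the clipping threshold.
    updates.append(scale * (BACKDOOR_OPT - w))
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    return w + np.mean(clipped, axis=0)

def episode_return(scale, rounds=30):
    """Non-myopic objective: score the whole simulated training trajectory."""
    w = np.zeros(DIM)
    for _ in range(rounds):
        w = simulate_round(w, scale)
    backdoor_loss = np.linalg.norm(w - BACKDOOR_OPT)
    main_loss = np.linalg.norm(w - MAIN_OPT)             # stay stealthy on the main task
    return -(backdoor_loss + 0.5 * main_loss)

# Crude random-search policy training in the simulator; the paper uses reinforcement
# learning, this is only a placeholder optimizer over a one-dimensional policy.
scale, step = 0.5, 0.2
for _ in range(50):
    candidate = scale + step * rng.normal()
    if episode_return(candidate) > episode_return(scale):
        scale = candidate
print(f"learned attack scale: {scale:.3f}, simulated return: {episode_return(scale):.3f}")
```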
Related papers
- Security Assessment of Hierarchical Federated Deep Learning [0.0]
Hierarchical federated learning (HFL) is a promising distributed deep learning model training paradigm, but it has crucial security concerns arising from adversarial attacks.
This research investigates and assesses the security of HFL with a novel methodology, focusing on its resilience against inference-time and training-time adversarial attacks.
arXiv Detail & Related papers (2024-08-20T11:34:23Z)
- Non-Cooperative Backdoor Attacks in Federated Learning: A New Threat Landscape [7.00762739959285]
Federated Learning (FL) for privacy-preserving model training remains susceptible to backdoor attacks.
This research emphasizes the critical need for robust defenses against diverse backdoor attacks in the evolving FL landscape.
arXiv Detail & Related papers (2024-07-05T22:03:13Z)
- Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning [49.242828934501986]
Multimodal contrastive learning has emerged as a powerful paradigm for building high-quality features.
Backdoor attacks subtly embed malicious behaviors within the model during training.
We introduce an innovative token-based localized forgetting training regime.
arXiv Detail & Related papers (2024-03-24T18:33:15Z)
- On the Difficulty of Defending Contrastive Learning against Backdoor Attacks [58.824074124014224]
We show how contrastive backdoor attacks operate through distinctive mechanisms.
Our findings highlight the need for defenses tailored to the specificities of contrastive backdoor attacks.
arXiv Detail & Related papers (2023-12-14T15:54:52Z)
- BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning [85.2564206440109]
This paper reveals the threat that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied.
We introduce the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.
arXiv Detail & Related papers (2023-11-20T02:21:49Z)
- Adversary Aware Continual Learning [3.3439097577935213]
An adversary can introduce a small amount of misinformation into the model to cause deliberate forgetting of a specific task or class at test time.
We turn the attacker's primary strength, hiding the backdoor pattern by making it imperceptible to humans, against it, and propose to learn a perceptible (stronger) pattern that can overpower the attacker's imperceptible pattern.
We show that our proposed defensive framework considerably improves the performance of class incremental learning algorithms with no knowledge of the attacker's target task, target class, or imperceptible pattern.
arXiv Detail & Related papers (2023-04-27T19:49:50Z)
- Towards a Defense against Backdoor Attacks in Continual Federated Learning [26.536009090970257]
We propose a novel framework for defending against backdoor attacks in the federated continual learning setting.
Our framework trains two models in parallel: a backbone model and a shadow model.
We show experimentally that our framework significantly improves upon existing defenses against backdoor attacks.
arXiv Detail & Related papers (2022-05-24T03:04:21Z)
- On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Meta Federated Learning [57.52103907134841]
Federated Learning (FL) is vulnerable to training time adversarial attacks.
We propose Meta Federated Learning (Meta-FL), which is not only compatible with secure aggregation protocols but also facilitates defense against backdoor attacks.
arXiv Detail & Related papers (2021-02-10T16:48:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.