BAGEL: Backdoor Attacks against Federated Contrastive Learning
- URL: http://arxiv.org/abs/2311.16113v1
- Date: Thu, 14 Sep 2023 13:05:31 GMT
- Title: BAGEL: Backdoor Attacks against Federated Contrastive Learning
- Authors: Yao Huang, Kongyang Chen, Jiannong Cao, Jiaxing Shen, Shaowei Wang, Yun Peng, Weilong Peng, Kechao Cai
- Abstract summary: We present a pioneering study of backdoor attacks against Federated Contrastive Learning (FCL).
Malicious clients can successfully inject a backdoor into the global encoder by uploading poisoned local updates.
We also investigate how to inject backdoors into multiple downstream models through two different backdoor attacks.
- Score: 18.32846642838125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Contrastive Learning (FCL) is an emerging privacy-preserving paradigm in distributed learning for unlabeled data. In FCL, distributed parties collaboratively learn a global encoder from unlabeled data, and the global encoder can be widely used as a feature extractor to build models for many downstream tasks. However, due to its distributed nature, FCL is also vulnerable to many security threats (e.g., backdoor attacks), which are seldom investigated in existing solutions. In this paper, we present a pioneering study of backdoor attacks against FCL, illustrating how backdoor attacks on distributed local clients act on downstream tasks. Specifically, in our system, malicious clients can successfully inject a backdoor into the global encoder by uploading poisoned local updates, so that downstream models built with this global encoder also inherit the backdoor. We further investigate how to inject backdoors into multiple downstream models through two different backdoor attacks, namely the \textit{centralized attack} and the \textit{decentralized attack}. Experimental results show that both the centralized and the decentralized attacks can inject backdoors into downstream models effectively, with high attack success rates. Finally, we evaluate two defense methods against our proposed backdoor attacks in FCL, which indicates that the decentralized backdoor attack is stealthier and harder to defend against.
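To make the attack surface concrete, the following is a minimal sketch (not the paper's exact algorithm) of how a malicious FCL client could craft a poisoned local update: the local objective mixes a standard contrastive loss on clean views with a term that pulls embeddings of trigger-stamped images toward an attacker-chosen target embedding, so that the aggregated global encoder, and any downstream model built on it, associates the trigger with the target. The encoder architecture, trigger, stand-in augmentations, and loss weight below are illustrative assumptions.

```python
# Minimal sketch of a poisoned local update in federated contrastive learning.
# All concrete choices (trigger, loss weight, architecture) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=1)

def info_nce(z1, z2, temperature=0.5):
    """Standard NT-Xent contrastive loss between two augmented views."""
    z = torch.cat([z1, z2], dim=0)                      # (2B, d), already normalized
    sim = z @ z.t() / temperature                       # cosine similarities as logits
    sim.fill_diagonal_(float('-inf'))                   # mask self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)])
    return F.cross_entropy(sim, targets)

def stamp_trigger(x, size=4):
    """Illustrative trigger: a white patch in the bottom-right corner."""
    x = x.clone()
    x[:, :, -size:, -size:] = 1.0
    return x

def malicious_local_update(encoder, clean_batch, target_embedding,
                           poison_weight=1.0, lr=1e-3, steps=5):
    opt = torch.optim.SGD(encoder.parameters(), lr=lr)
    for _ in range(steps):
        view1 = clean_batch + 0.05 * torch.randn_like(clean_batch)  # stand-in augmentations
        view2 = clean_batch + 0.05 * torch.randn_like(clean_batch)
        loss_clean = info_nce(encoder(view1), encoder(view2))
        # Backdoor term: triggered inputs should embed close to the target embedding.
        z_trig = encoder(stamp_trigger(clean_batch))
        loss_poison = (1 - (z_trig * target_embedding).sum(dim=1)).mean()
        (loss_clean + poison_weight * loss_poison).backward()
        opt.step()
        opt.zero_grad()
    return {k: v.detach().clone() for k, v in encoder.state_dict().items()}

if __name__ == "__main__":
    enc = SmallEncoder()
    images = torch.rand(16, 3, 32, 32)
    target = F.normalize(torch.randn(1, 128), dim=1)     # attacker-chosen target embedding
    poisoned_update = malicious_local_update(enc, images, target)
    print(len(poisoned_update), "tensors in the poisoned local update")
```

The poisoned state dict is what the malicious client would upload to the server in place of an honest update; the benign contrastive term keeps the update looking like ordinary local training.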
Related papers
- Act in Collusion: A Persistent Distributed Multi-Target Backdoor in Federated Learning [5.91728247370845]
Federated learning is vulnerable to backdoor attacks due to its distributed nature.
We propose a more practical threat model for federated learning: the distributed multi-target backdoor.
We show that, 30 rounds after the attack, the attack success rates of three different backdoors from various clients remain above 93%.
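As an illustration of this threat model, here is a minimal sketch (an assumption-laden simplification, not the cited paper's method) in which each malicious client owns its own trigger pattern and target label and poisons only its local data, so the aggregated global model can end up carrying several independent backdoors.

```python
# Minimal sketch of a distributed multi-target backdoor: per-client triggers
# and targets. Trigger shapes and poison ratio are illustrative assumptions.
import torch

def make_trigger(corner, size=3):
    """Return a function stamping a white patch at the given corner ('tl' or 'br')."""
    def stamp(x):
        x = x.clone()
        if corner == "tl":
            x[..., :size, :size] = 1.0
        else:
            x[..., -size:, -size:] = 1.0
        return x
    return stamp

def poison_local_data(images, labels, stamp, target_label, poison_ratio=0.1):
    """Stamp a fraction of the client's images and relabel them to the target."""
    n_poison = max(1, int(poison_ratio * len(images)))
    idx = torch.randperm(len(images))[:n_poison]
    images, labels = images.clone(), labels.clone()
    images[idx] = stamp(images[idx])
    labels[idx] = target_label
    return images, labels, idx

if __name__ == "__main__":
    # Two malicious clients, each with its own backdoor trigger and target class.
    attackers = [("client_3", make_trigger("tl"), 0),
                 ("client_7", make_trigger("br"), 5)]
    for name, stamp, target in attackers:
        imgs = torch.rand(32, 3, 32, 32)
        lbls = torch.randint(0, 10, (32,))
        p_imgs, p_lbls, idx = poison_local_data(imgs, lbls, stamp, target)
        print(name, "poisoned", len(idx), "samples toward class", target)
```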
arXiv Detail & Related papers (2024-11-06T13:57:53Z) - Mitigating Backdoor Attack by Injecting Proactive Defensive Backdoor [63.84477483795964]
Data-poisoning backdoor attacks are serious security threats to machine learning models.
In this paper, we focus on in-training backdoor defense, aiming to train a clean model even when the dataset may be potentially poisoned.
We propose a novel defense approach called PDB (Proactive Defensive Backdoor).
arXiv Detail & Related papers (2024-05-25T07:52:26Z) - EmInspector: Combating Backdoor Attacks in Federated Self-Supervised Learning Through Embedding Inspection [53.25863925815954]
Federated self-supervised learning (FSSL) has emerged as a promising paradigm that enables the exploitation of clients' vast amounts of unlabeled data.
While FSSL offers advantages, its susceptibility to backdoor attacks has not been investigated.
We propose the Embedding Inspector (EmInspector) that detects malicious clients by inspecting the embedding space of local models.
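To make the idea concrete, here is a minimal sketch of embedding-space inspection in the spirit of EmInspector; the detection statistic used below (mean cosine similarity to a consensus embedding over a shared inspection set) and the toy encoders are illustrative assumptions rather than the paper's exact score.

```python
# Minimal sketch of flagging suspicious clients by inspecting the embedding
# space of their local models. The threshold and statistic are assumptions.
import torch
import torch.nn.functional as F

def flag_suspicious_clients(local_encoders, inspection_images, threshold=0.8):
    """Flag local models whose embeddings of a shared inspection set drift
    away from the majority consensus."""
    with torch.no_grad():
        embs = [F.normalize(enc(inspection_images), dim=1) for enc in local_encoders]
    consensus = F.normalize(torch.stack(embs).mean(dim=0), dim=1)    # (N, d)
    scores = [(e * consensus).sum(dim=1).mean().item() for e in embs]
    flagged = [i for i, s in enumerate(scores) if s < threshold]
    return scores, flagged

if __name__ == "__main__":
    # Toy "encoders": random linear maps over flattened images.
    torch.manual_seed(0)
    encoders = [torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
                for _ in range(5)]
    inspection_set = torch.rand(16, 3, 32, 32)
    scores, flagged = flag_suspicious_clients(encoders, inspection_set)
    print("similarity to consensus per client:", [round(s, 3) for s in scores])
    print("flagged clients:", flagged)
```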
arXiv Detail & Related papers (2024-05-21T06:14:49Z) - Backdoor Federated Learning by Poisoning Backdoor-Critical Layers [32.510730902353224]
Federated learning (FL) has been widely deployed to enable machine learning training on sensitive data across distributed devices.
This paper proposes a general in-situ approach that identifies and verifies backdoor-critical (BC) layers from the perspective of attackers.
Based on the identified BC layers, we carefully craft a new backdoor attack methodology that adaptively seeks a fundamental balance between attacking effects and stealthiness under various defense strategies.
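A simple way to picture BC-layer identification is layer substitution: copy one layer at a time from a backdoored model into a clean model and see how much the attack success rate (ASR) recovers. The sketch below follows that idea with toy models; the scoring details are illustrative assumptions, not the cited paper's exact recipe.

```python
# Minimal layer-substitution sketch for ranking backdoor-critical layers.
import copy
import torch
import torch.nn as nn

def attack_success_rate(model, triggered_inputs, target_label):
    with torch.no_grad():
        preds = model(triggered_inputs).argmax(dim=1)
    return (preds == target_label).float().mean().item()

def rank_backdoor_critical_layers(clean_model, backdoored_model,
                                  triggered_inputs, target_label):
    scores = {}
    backdoored_state = backdoored_model.state_dict()
    for name, _ in clean_model.named_parameters():
        layer = name.rsplit(".", 1)[0]                 # drop the .weight/.bias suffix
        if layer in scores:
            continue
        probe = copy.deepcopy(clean_model)
        probe_state = probe.state_dict()
        for k in probe_state:                          # substitute one layer's params
            if k.startswith(layer + "."):
                probe_state[k] = backdoored_state[k].clone()
        probe.load_state_dict(probe_state)
        scores[layer] = attack_success_rate(probe, triggered_inputs, target_label)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    def make_model():
        return nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 64),
                             nn.ReLU(), nn.Linear(64, 10))
    clean, backdoored = make_model(), make_model()     # stand-ins for real checkpoints
    x = torch.rand(32, 3, 32, 32)
    x[..., -3:, -3:] = 1.0                             # toy trigger patch
    for layer, asr in rank_backdoor_critical_layers(clean, backdoored, x, target_label=0):
        print(f"{layer}: ASR={asr:.2f}")
```

Layers whose substitution yields the largest ASR jump are the ones an attacker would focus the poisoned update on.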
arXiv Detail & Related papers (2023-08-08T05:46:47Z) - Backdoor Attack with Sparse and Invisible Trigger [57.41876708712008]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
The backdoor attack is an emerging yet serious training-phase threat.
We propose a sparse and invisible backdoor attack (SIBA).
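For intuition, a sparse-and-invisible style trigger can be thought of as an additive perturbation that touches only a few pixels (sparsity) and stays small in magnitude (invisibility). The sketch below picks the support and perturbation randomly for illustration, whereas SIBA optimizes its trigger; all constants are assumptions.

```python
# Minimal sketch of a sparse, bounded-magnitude trigger.
import torch

def make_sparse_trigger(shape=(3, 32, 32), num_pixels=20, epsilon=8 / 255, seed=0):
    g = torch.Generator().manual_seed(seed)
    c, h, w = shape
    mask = torch.zeros(1, h, w)
    flat = torch.randperm(h * w, generator=g)[:num_pixels]           # sparse support
    mask.view(-1)[flat] = 1.0
    delta = epsilon * torch.sign(torch.randn(c, h, w, generator=g))  # small perturbation
    return mask, delta

def apply_trigger(x, mask, delta):
    """Add the bounded perturbation only on the sparse support, keep pixels valid."""
    return (x + mask * delta).clamp(0.0, 1.0)

if __name__ == "__main__":
    mask, delta = make_sparse_trigger()
    images = torch.rand(8, 3, 32, 32)
    poisoned = apply_trigger(images, mask, delta)
    changed = (poisoned != images).any(dim=1).float().mean().item()
    print(f"fraction of pixel locations modified: {changed:.3f}")    # roughly 20/1024
```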
arXiv Detail & Related papers (2023-05-11T10:05:57Z) - On the Vulnerability of Backdoor Defenses for Federated Learning [8.345632941376673]
Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data.
In this paper, we study whether the current defense mechanisms truly neutralize the backdoor threats from federated learning.
We also propose a new federated backdoor attack method to inform possible countermeasures.
arXiv Detail & Related papers (2023-01-19T17:02:02Z) - On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
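Since the entry hinges on what adversarial training does, here is a minimal PGD-based adversarial-training step for context; it is a generic sketch, not the cited paper's hybrid strategy, and the attack radius, step size, and toy model are illustrative assumptions.

```python
# Minimal sketch of one adversarial-training step with a PGD inner loop.
import torch
import torch.nn as nn
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8 / 255, alpha=2 / 255, steps=5):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)   # project into the L-inf ball
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)                # train on adversarial examples
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    x, y = torch.rand(16, 3, 32, 32), torch.randint(0, 10, (16,))
    print("adv-training loss:", adversarial_training_step(model, opt, x, y))
```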
arXiv Detail & Related papers (2022-02-22T02:24:46Z) - Backdoors Stuck At The Frontdoor: Multi-Agent Backdoor Attacks That Backfire [8.782809316491948]
We investigate a multi-agent backdoor attack scenario, where multiple attackers attempt to backdoor a victim model simultaneously.
A consistent backfiring phenomenon is observed across a wide range of games, where agents suffer from a low collective attack success rate.
The results motivate the re-evaluation of backdoor defense research for practical environments.
arXiv Detail & Related papers (2022-01-28T16:11:40Z) - Check Your Other Door! Establishing Backdoor Attacks in the Frequency Domain [80.24811082454367]
We show the advantages of utilizing the frequency domain for establishing undetectable and powerful backdoor attacks.
We also show two possible defences that succeed against frequency-based backdoor attacks and possible ways for the attacker to bypass them.
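For context on what a frequency-domain trigger looks like, here is a minimal sketch: the image is transformed with a 2D FFT, a few fixed frequency coefficients are nudged, and the result is transformed back, giving an image-wide but visually subtle perturbation. The chosen frequencies and magnitude are illustrative assumptions, not the cited paper's configuration.

```python
# Minimal sketch of planting a trigger in the frequency domain.
import torch

def frequency_trigger(x, coords=((5, 5), (7, 3)), strength=3.0):
    """x: (B, C, H, W) images in [0, 1]; returns triggered images."""
    spectrum = torch.fft.fft2(x)                 # per-channel 2D FFT over (H, W)
    for (u, v) in coords:
        spectrum[..., u, v] += strength          # boost selected frequency components
    triggered = torch.fft.ifft2(spectrum).real   # back to pixel space
    return triggered.clamp(0.0, 1.0)

if __name__ == "__main__":
    images = torch.rand(4, 3, 32, 32)
    poisoned = frequency_trigger(images)
    print("max per-pixel change:", (poisoned - images).abs().max().item())
```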
arXiv Detail & Related papers (2021-09-12T12:44:52Z) - BAAAN: Backdoor Attacks Against Autoencoder and GAN-Based Machine Learning Models [21.06679566096713]
We explore one of the most severe attacks against machine learning models, namely the backdoor attack, against both autoencoders and GANs.
The backdoor attack is a training time attack where the adversary implements a hidden backdoor in the target model that can only be activated by a secret trigger.
We extend the applicability of backdoor attacks to autoencoders and GAN-based models.
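To illustrate what a backdoor means for a generative model, here is a minimal sketch in the spirit of the cited work: clean inputs are trained to reconstruct themselves, while trigger-stamped inputs are trained to reconstruct an attacker-chosen target image. The architecture, trigger, and loss weights below are illustrative assumptions.

```python
# Minimal sketch of a backdoored autoencoder training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
        self.dec = nn.Sequential(nn.Linear(128, 3 * 32 * 32), nn.Sigmoid())

    def forward(self, x):
        return self.dec(self.enc(x)).view(-1, 3, 32, 32)

def stamp(x, size=4):
    x = x.clone()
    x[..., -size:, -size:] = 1.0
    return x

def backdoor_training_step(model, optimizer, batch, target_image, poison_weight=1.0):
    optimizer.zero_grad()
    loss_clean = F.mse_loss(model(batch), batch)                       # normal behaviour
    loss_backdoor = F.mse_loss(model(stamp(batch)),                     # triggered behaviour
                               target_image.expand_as(batch))
    loss = loss_clean + poison_weight * loss_backdoor
    loss.backward()
    optimizer.step()
    return loss_clean.item(), loss_backdoor.item()

if __name__ == "__main__":
    model = TinyAutoencoder()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    batch = torch.rand(16, 3, 32, 32)
    target = torch.zeros(1, 3, 32, 32)                                  # attacker-chosen output
    print(backdoor_training_step(model, opt, batch, target))
```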
arXiv Detail & Related papers (2020-10-06T20:26:16Z) - Backdoor Learning: A Survey [75.59571756777342]
A backdoor attack aims to embed a hidden backdoor into deep neural networks (DNNs).
Backdoor learning is an emerging and rapidly growing research area.
This paper presents the first comprehensive survey of this realm.
arXiv Detail & Related papers (2020-07-17T04:09:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.