Backdoor Attacks on Federated Meta-Learning
- URL: http://arxiv.org/abs/2006.07026v2
- Date: Wed, 16 Dec 2020 16:15:58 GMT
- Title: Backdoor Attacks on Federated Meta-Learning
- Authors: Chien-Lun Chen, Leana Golubchik, Marco Paolieri
- Abstract summary: We analyze the effects of backdoor attacks on federated meta-learning.
We propose a defense mechanism inspired by matching networks, where the class of an input is predicted from the similarity of its features.
- Score: 0.225596179391365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning allows multiple users to collaboratively train a shared
classification model while preserving data privacy. This approach, where model
updates are aggregated by a central server, was shown to be vulnerable to
poisoning backdoor attacks: a malicious user can alter the shared model to
arbitrarily classify specific inputs from a given class. In this paper, we
analyze the effects of backdoor attacks on federated meta-learning, where users
train a model that can be adapted to different sets of output classes using
only a few examples. While the ability to adapt could, in principle, make
federated learning frameworks more robust to backdoor attacks (when new
training examples are benign), we find that even 1-shot attacks can be very
successful and persist after additional training. To address these
vulnerabilities, we propose a defense mechanism inspired by matching networks,
where the class of an input is predicted from the similarity of its features
with a support set of labeled examples. By removing the decision logic from the
model shared with the federation, success and persistence of backdoor attacks
are greatly reduced.
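The defense described in the abstract replaces the shared model's decision layer with similarity-based prediction over a locally held support set, in the spirit of matching networks. Below is a minimal, hypothetical sketch of that idea in Python/NumPy: the function name `predict_from_support`, the cosine-similarity attention, and the random features are illustrative assumptions rather than the authors' implementation; in the paper the features would come from the federated feature extractor, while the decision logic stays local to each user.

```python
import numpy as np

def cosine_similarity(query, support):
    # Cosine similarity between one query embedding and each support embedding.
    q = query / (np.linalg.norm(query) + 1e-8)
    s = support / (np.linalg.norm(support, axis=1, keepdims=True) + 1e-8)
    return s @ q  # shape: (n_support,)

def predict_from_support(query_features, support_features, support_labels, num_classes):
    """Matching-network-style prediction: attention over a local support set.

    Only feature extraction is assumed to come from the shared (federated) model;
    this decision step is kept local and is never sent to the server.
    """
    sims = cosine_similarity(query_features, support_features)
    attn = np.exp(sims) / np.sum(np.exp(sims))          # softmax attention weights
    one_hot = np.eye(num_classes)[support_labels]       # (n_support, num_classes)
    class_scores = attn @ one_hot                       # similarity-weighted label votes
    return int(np.argmax(class_scores))

# Toy usage: 5 labeled support examples with 8-dim features, 3 output classes.
rng = np.random.default_rng(0)
support = rng.normal(size=(5, 8))
labels = np.array([0, 1, 2, 1, 0])
query = support[3] + 0.01 * rng.normal(size=8)  # close to a class-1 example
print(predict_from_support(query, support, labels, num_classes=3))  # expected: 1
```

Because the labeled support set never leaves the user, a poisoned shared model can bias the features but cannot directly dictate the final class decision, which is the intuition behind the reduced attack success and persistence reported in the abstract.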
Related papers
- Model Pairing Using Embedding Translation for Backdoor Attack Detection on Open-Set Classification Tasks [63.269788236474234]
We propose to use model pairs on open-set classification tasks for detecting backdoors.
We show that this score can be an indicator of the presence of a backdoor, even when the paired models have different architectures.
This technique allows for the detection of backdoors in models designed for open-set classification tasks, a setting that has received little attention in the literature.
arXiv Detail & Related papers (2024-02-28T21:29:16Z) - Protect Federated Learning Against Backdoor Attacks via Data-Free
Trigger Generation [25.072791779134]
Federated Learning (FL) enables large-scale clients to collaboratively train a model without sharing their raw data.
Due to the lack of data auditing for untrusted clients, FL is vulnerable to poisoning attacks, especially backdoor attacks.
We propose a novel data-free, trigger-generation-based defense approach built on two characteristics of backdoor attacks.
arXiv Detail & Related papers (2023-08-22T10:16:12Z) - FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose FedDefender, a new defense mechanism that operates on the client side to help benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z) - Client-specific Property Inference against Secure Aggregation in
Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information, such as membership or properties of participant data, or even to reconstruct that data outright.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z) - CleanCLIP: Mitigating Data Poisoning Attacks in Multimodal Contrastive
Learning [63.72975421109622]
CleanCLIP is a finetuning framework that weakens the learned spurious associations introduced by backdoor attacks.
CleanCLIP maintains model performance on benign examples while erasing a range of backdoor attacks on multimodal contrastive learning.
arXiv Detail & Related papers (2023-03-06T17:48:32Z) - On Feasibility of Server-side Backdoor Attacks on Split Learning [5.559334420715782]
Split learning is a collaborative learning design that allows several participants (clients) to train a shared model while keeping their datasets private.
Recent studies demonstrate that collaborative learning models are vulnerable to security and privacy attacks such as model inference and backdoor attacks.
This paper presents a novel backdoor attack on split learning and studies its effectiveness.
arXiv Detail & Related papers (2023-02-19T14:06:08Z) - On the Vulnerability of Backdoor Defenses for Federated Learning [8.345632941376673]
Federated Learning (FL) is a popular distributed machine learning paradigm that enables jointly training a global model without sharing clients' data.
In this paper, we study whether the current defense mechanisms truly neutralize the backdoor threats from federated learning.
We also propose a new federated backdoor attack method to motivate the design of countermeasures.
arXiv Detail & Related papers (2023-01-19T17:02:02Z) - Marksman Backdoor: Backdoor Attacks with Arbitrary Target Class [17.391987602738606]
In recent years, machine learning models have been shown to be vulnerable to backdoor attacks.
This paper introduces a novel backdoor attack with a much more powerful payload, denoted as Marksman.
We show empirically that the proposed framework achieves high attack performance while preserving the clean-data performance in several benchmark datasets.
arXiv Detail & Related papers (2022-10-17T15:46:57Z) - On the Effectiveness of Adversarial Training against Backdoor Attacks [111.8963365326168]
A backdoored model always predicts a target class in the presence of a predefined trigger pattern.
In general, adversarial training is believed to defend against backdoor attacks.
We propose a hybrid strategy which provides satisfactory robustness across different backdoor attacks.
arXiv Detail & Related papers (2022-02-22T02:24:46Z) - Backdoor Attacks on Federated Learning with Lottery Ticket Hypothesis [49.38856542573576]
Edge devices in federated learning usually have much more limited computation and communication resources compared to servers in a data center.
In this work, we empirically demonstrate that Lottery Ticket models are equally vulnerable to backdoor attacks as the original dense models.
arXiv Detail & Related papers (2021-09-22T04:19:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented (including all of it) and is not responsible for any consequences of its use.