Learning to Detect Malicious Clients for Robust Federated Learning
- URL: http://arxiv.org/abs/2002.00211v1
- Date: Sat, 1 Feb 2020 14:09:48 GMT
- Title: Learning to Detect Malicious Clients for Robust Federated Learning
- Authors: Suyi Li, Yong Cheng, Wei Wang, Yang Liu, Tianjian Chen
- Abstract summary: Federated learning systems are vulnerable to attacks from malicious clients.
We propose a new framework for robust federated learning where the central server learns to detect and remove the malicious model updates.
- Score: 20.5238037608738
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning systems are vulnerable to attacks from malicious clients.
As the central server in the system cannot govern the behaviors of the clients,
a rogue client may initiate an attack by sending malicious model updates to the
server in order to degrade the learning performance or carry out targeted model
poisoning attacks (a.k.a. backdoor attacks). Detecting these malicious model
updates and the underlying attackers in a timely manner is therefore critically
important. In this work, we propose a new framework for robust federated
learning where the central server learns to detect and remove the malicious
model updates using a powerful detection model, leading to targeted defense. We
evaluate our solution in both image classification and sentiment analysis tasks
with a variety of machine learning models. Experimental results show that our
solution ensures robust federated learning that is resilient to both the
Byzantine attacks and the targeted model poisoning attacks.
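As a rough, hypothetical illustration of the server-side idea (the paper trains a dedicated detection model on the client updates, which this sketch replaces with a simple median/MAD outlier rule), the following Python sketch scores each flattened client update and aggregates only the ones it keeps; all names below are illustrative, not from the paper.

```python
import numpy as np

def detect_and_aggregate(updates, threshold=5.0):
    """Hypothetical stand-in for a learned detection model: score each
    flattened client update by its distance to the coordinate-wise median
    and drop clients whose score is an outlier under a median/MAD rule."""
    U = np.stack(updates)                        # (n_clients, n_params)
    center = np.median(U, axis=0)                # robust reference point
    scores = np.linalg.norm(U - center, axis=1)  # one anomaly score per client
    med = np.median(scores)
    mad = np.median(np.abs(scores - med)) + 1e-12
    keep = np.abs(scores - med) / mad < threshold
    return U[keep].mean(axis=0), keep            # robust aggregate + kept mask

# Toy usage: 8 benign updates near zero, 2 poisoned updates far away.
rng = np.random.default_rng(0)
updates = [rng.normal(0, 0.1, 100) for _ in range(8)] \
        + [rng.normal(5, 0.1, 100) for _ in range(2)]
aggregate, kept = detect_and_aggregate(updates)
print("clients kept:", kept)                     # poisoned clients are dropped
```

Any detector that yields a per-client anomaly score could be dropped in where the median/MAD rule sits, which is where the paper's learned detection model would go.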
Related papers
- Genetic Algorithm-Based Dynamic Backdoor Attack on Federated Learning-Based Network Traffic Classification [1.1887808102491482]
We propose GABAttack, a novel genetic algorithm-based backdoor attack against federated learning for network traffic classification.
This research serves as an alarming call for network security experts and practitioners to develop robust defense measures against such attacks.
arXiv Detail & Related papers (2023-09-27T14:02:02Z)
- Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
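A minimal sketch of that overlap check, assuming value-based top-k/bottom-k index selection and Jaccard similarity (the paper's exact criticality criterion and similarity measure may differ):

```python
import numpy as np

def critical_overlap(u, v, k=10):
    """Jaccard overlap of the top-k and bottom-k parameter index sets of
    two flattened local models; illustrative version of the observation."""
    top_u, top_v = set(np.argsort(u)[-k:]), set(np.argsort(v)[-k:])
    bot_u, bot_v = set(np.argsort(u)[:k]), set(np.argsort(v)[:k])
    top = len(top_u & top_v) / len(top_u | top_v)
    bot = len(bot_u & bot_v) / len(bot_u | bot_v)
    return (top + bot) / 2

rng = np.random.default_rng(1)
base = rng.normal(0, 1, 1000)
benign_a = base + rng.normal(0, 0.05, 1000)  # benign models stay close
benign_b = base + rng.normal(0, 0.05, 1000)
poisoned = -base                             # a crudely inverted model
print(critical_overlap(benign_a, benign_b))  # high overlap
print(critical_overlap(benign_a, poisoned))  # near-zero overlap
```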
arXiv Detail & Related papers (2023-08-18T05:37:55Z)
- FedDefender: Client-Side Attack-Tolerant Federated Learning [60.576073964874]
Federated learning enables learning from decentralized data sources without compromising privacy.
It is vulnerable to model poisoning attacks, where malicious clients interfere with the training process.
We propose FedDefender, a new defense mechanism that focuses on the client side and helps benign clients train robust local models.
arXiv Detail & Related papers (2023-07-18T08:00:41Z)
- FedRecover: Recovering from Poisoning Attacks in Federated Learning using Historical Information [67.8846134295194]
Federated learning is vulnerable to poisoning attacks in which malicious clients poison the global model.
We propose FedRecover, which can recover an accurate global model from poisoning attacks at a small cost to the clients.
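The recovery loop below is a heavily simplified, hypothetical sketch: it rolls back to a pre-attack model and replays stored benign updates, whereas the paper's method instead estimates clients' updates from historical information to keep client cost small.

```python
import numpy as np

def recover(initial_model, stored_round_updates, malicious_ids):
    """Simplified recovery loop (NOT the paper's update estimator): roll
    back to a pre-attack model and re-aggregate only the stored updates
    of clients that were not flagged as malicious."""
    model = initial_model.copy()
    for round_updates in stored_round_updates:   # dict: client_id -> update
        benign = [u for cid, u in round_updates.items()
                  if cid not in malicious_ids]
        if benign:
            model = model + np.mean(benign, axis=0)  # FedAvg-style step
    return model

# Toy usage: one stored round, client 2 was flagged as malicious.
rounds = [{0: np.ones(3), 1: np.ones(3), 2: 10 * np.ones(3)}]
print(recover(np.zeros(3), rounds, malicious_ids={2}))  # -> [1. 1. 1.]
```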
arXiv Detail & Related papers (2022-10-20T00:12:34Z)
- FLCert: Provably Secure Federated Learning against Poisoning Attacks [67.8846134295194]
We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
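A minimal sketch of the ensemble idea, under the assumption that clients are partitioned into groups, one global model is trained per group, and the test label is the majority vote (helper names are hypothetical):

```python
import numpy as np

def flcert_predict(group_models, predict_fn, x):
    """Majority vote over per-group global models. A bounded number of
    malicious clients can only corrupt the few group models they belong
    to, which is what makes the vote provably stable."""
    votes = [predict_fn(m, x) for m in group_models]
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]

# Toy usage: 5 clean group models outvote 2 corrupted ones.
models = ["clean"] * 5 + ["poisoned"] * 2
predict = lambda model, x: 1 if model == "clean" else 0
print(flcert_predict(models, predict, x=None))  # -> 1
```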
arXiv Detail & Related papers (2022-10-02T17:50:04Z)
- MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients [51.973224448076614]
We propose MPAF, the first model poisoning attack based on fake clients.
MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted.
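For reference, norm clipping, one of the classical defenses the summary mentions, amounts to roughly the following; the reported point of MPAF is that many small, well-aligned fake updates can pass such a check and still pull the aggregate toward the attacker's target.

```python
import numpy as np

def clip_update(update, max_norm=1.0):
    """Norm clipping: rescale any client update whose L2 norm exceeds a
    fixed bound before aggregation (one of the classical defenses that
    the paper reports MPAF can still defeat)."""
    norm = np.linalg.norm(update)
    return update if norm <= max_norm else update * (max_norm / norm)

print(np.linalg.norm(clip_update(np.full(100, 5.0))))  # -> 1.0
```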
arXiv Detail & Related papers (2022-03-16T14:59:40Z)
- TESSERACT: Gradient Flip Score to Secure Federated Learning Against Model Poisoning Attacks [25.549815759093068]
Federated learning is vulnerable to model poisoning attacks.
This is because malicious clients can collude to make the global model inaccurate.
We develop TESSERACT, a defense against such directed deviation attacks.
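The title's "gradient flip score" suggests a per-client statistic over sign changes; one plausible simplified reading, not necessarily the paper's exact formula, is:

```python
import numpy as np

def flip_score(client_update, prev_global_direction):
    """Hypothetical flip score: the magnitude-weighted fraction of
    coordinates whose sign disagrees with the previous global update
    direction. High scores would mark suspected sign-flipping clients."""
    flipped = np.sign(client_update) != np.sign(prev_global_direction)
    weight = np.abs(client_update)
    return weight[flipped].sum() / (weight.sum() + 1e-12)

prev = np.ones(4)
print(flip_score(np.array([1.0, 2.0, 3.0, 4.0]), prev))      # 0.0 (aligned)
print(flip_score(np.array([-1.0, -2.0, -3.0, -4.0]), prev))  # 1.0 (flipped)
```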
arXiv Detail & Related papers (2021-10-19T17:03:29Z)
- UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [0.0]
The split learning framework splits the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provide no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
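A plaintext stand-in for the constraint idea is sketched below; RoFL itself enforces such norm bounds on encrypted updates (via zero-knowledge proofs) so that robustness checks can coexist with secure aggregation, which this sketch does not model.

```python
import numpy as np

def constrained_aggregate(updates, l2_bound=1.0):
    """Plaintext sketch of constraint-checked aggregation: accept only
    updates satisfying an L2-norm bound, then average. RoFL enforces
    comparable bounds on *encrypted* updates inside secure aggregation."""
    accepted = [u for u in updates if np.linalg.norm(u) <= l2_bound]
    return np.mean(accepted, axis=0) if accepted else None

ups = [np.full(4, 0.1), np.full(4, 0.1), np.full(4, 100.0)]
print(constrained_aggregate(ups))  # averages only the two bounded updates
```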
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.