FLCert: Provably Secure Federated Learning against Poisoning Attacks
- URL: http://arxiv.org/abs/2210.00584v2
- Date: Tue, 4 Oct 2022 02:10:46 GMT
- Title: FLCert: Provably Secure Federated Learning against Poisoning Attacks
- Authors: Xiaoyu Cao, Zaixi Zhang, Jinyuan Jia, Neil Zhenqiang Gong
- Abstract summary: We propose FLCert, an ensemble federated learning framework that is provably secure against poisoning attacks.
Our experiments show that the label predicted by our FLCert for a test input is provably unaffected by a bounded number of malicious clients.
- Score: 67.8846134295194
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to its distributed nature, federated learning is vulnerable to poisoning
attacks, in which malicious clients poison the training process via
manipulating their local training data and/or local model updates sent to the
cloud server, such that the poisoned global model misclassifies many
indiscriminate test inputs or attacker-chosen ones. Existing defenses mainly
leverage Byzantine-robust federated learning methods or detect malicious
clients. However, these defenses do not have provable security guarantees
against poisoning attacks and may be vulnerable to more advanced attacks. In
this work, we aim to bridge the gap by proposing FLCert, an ensemble federated
learning framework, that is provably secure against poisoning attacks with a
bounded number of malicious clients. Our key idea is to divide the clients into
groups, learn a global model for each group of clients using any existing
federated learning method, and take a majority vote among the global models to
classify a test input. Specifically, we consider two methods to group the
clients and propose two variants of FLCert correspondingly, i.e., FLCert-P that
randomly samples clients in each group, and FLCert-D that divides clients into
disjoint groups deterministically. Our extensive experiments on multiple
datasets show that the label predicted by our FLCert for a test input is
provably unaffected by a bounded number of malicious clients, no matter what
poisoning attacks they use.
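The grouping-and-voting mechanism described in the abstract can be summarized in a short sketch. This is a minimal illustrative sketch based only on the abstract, not the authors' implementation: `train_federated` stands in for any existing federated learning method (e.g., FedAvg) and is assumed to return a model with a `predict` method, and the grouping helpers, names, and parameters are hypothetical.
```python
import random
from collections import Counter

def group_clients_deterministic(client_ids, num_groups):
    """FLCert-D style: split clients into disjoint groups deterministically."""
    groups = [[] for _ in range(num_groups)]
    for cid in sorted(client_ids):
        groups[cid % num_groups].append(cid)  # illustrative: assumes integer client IDs
    return groups

def group_clients_random(client_ids, num_groups, group_size, seed=0):
    """FLCert-P style: each group is an independent random sample of clients."""
    rng = random.Random(seed)
    return [rng.sample(list(client_ids), group_size) for _ in range(num_groups)]

def flcert_predict(groups, train_federated, x):
    """Train one global model per group with any FL method, then majority-vote on x."""
    votes = Counter(train_federated(group).predict(x) for group in groups)
    # The prediction stays fixed as long as malicious clients can flip fewer
    # group votes than the winner's margin over the runner-up allows; see the
    # paper for the exact certified bounds of FLCert-P and FLCert-D.
    return votes.most_common(1)[0][0]
```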
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
- Honest Score Client Selection Scheme: Preventing Federated Learning Label Flipping Attacks in Non-IID Scenarios [27.36889020561564]
Federated Learning (FL) is a promising technology that enables multiple actors to build a joint model without sharing their raw data.
The distributed nature makes FL vulnerable to various poisoning attacks, including model poisoning attacks and data poisoning attacks.
In this paper, we focus on the most representative data poisoning attack - "label flipping attack" and monitor its effectiveness when attacking the existing FL methods.
arXiv Detail & Related papers (2023-11-10T02:07:41Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- Mitigating Cross-client GANs-based Attack in Federated Learning [78.06700142712353]
Multiple distributed multimedia clients can resort to federated learning (FL) to jointly learn a globally shared model.
FL suffers from the cross-client generative adversarial networks (GANs)-based (C-GANs) attack.
We propose the Fed-EDKD technique to improve current popular FL schemes to resist the C-GANs attack.
arXiv Detail & Related papers (2023-07-25T08:15:55Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse engineering based defense and show that our method can achieve improvement with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- Characterizing Internal Evasion Attacks in Federated Learning [12.873984200814533]
Federated learning allows for clients to jointly train a machine learning model.
Clients' models are vulnerable to attacks during the training and testing phases.
In this paper, we address the issue of adversarial clients performing "internal evasion attacks".
arXiv Detail & Related papers (2022-09-17T21:46:38Z)
- MPAF: Model Poisoning Attacks to Federated Learning based on Fake Clients [51.973224448076614]
We propose the first Model Poisoning Attack based on Fake clients, called MPAF.
MPAF can significantly decrease the test accuracy of the global model, even if classical defenses and norm clipping are adopted.
arXiv Detail & Related papers (2022-03-16T14:59:40Z)
- RobustFed: A Truth Inference Approach for Robust Federated Learning [9.316565110931743]
Federated learning is a framework that enables clients to collaboratively train a global model under a central server's orchestration.
The aggregation step in federated learning is vulnerable to adversarial attacks as the central server cannot manage clients' behavior.
We propose a novel robust aggregation algorithm inspired by the truth inference methods in crowdsourcing.
arXiv Detail & Related papers (2021-07-18T09:34:57Z)
- Provably Secure Federated Learning against Malicious Clients [31.85264586217373]
Malicious clients can corrupt the global model to predict incorrect labels for testing examples.
We show that our ensemble federated learning with any base federated learning algorithm is provably secure against malicious clients.
Our method can achieve a certified accuracy of 88% on MNIST when 20 out of 1,000 clients are malicious.
arXiv Detail & Related papers (2021-02-03T03:24:17Z)
- Dynamic Defense Against Byzantine Poisoning Attacks in Federated Learning [11.117880929232575]
Federated learning is vulnerable to Byzantine poisoning adversarial attacks.
We propose a dynamic aggregation operator that dynamically discards adversarial clients.
The results show that the dynamic selection of the clients to aggregate enhances the performance of the global learning model.
arXiv Detail & Related papers (2020-07-29T18:02:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.