LoByITFL: Low Communication Secure and Private Federated Learning
- URL: http://arxiv.org/abs/2405.19217v1
- Date: Wed, 29 May 2024 16:00:19 GMT
- Title: LoByITFL: Low Communication Secure and Private Federated Learning
- Authors: Yue Xia, Christoph Hofmeister, Maximilian Egger, Rawad Bitar,
- Abstract summary: Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients.
We introduce LoByITFL, the first communication-efficient Information-Theoretic (IT) private and secure FL scheme.
- Score: 4.242342898338019
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) faces several challenges, such as the privacy of the clients' data and security against Byzantine clients. Existing works that treat privacy and security jointly sacrifice part of the privacy guarantee. In this work, we introduce LoByITFL, the first communication-efficient Information-Theoretic (IT) private and secure FL scheme that makes no sacrifices on the privacy guarantees while ensuring security against Byzantine adversaries. The key ingredients are a small and representative dataset available to the federator, a careful transformation of the FLTrust algorithm, and the use of a trusted third party only in a one-time preprocessing phase before the start of the learning algorithm. We provide theoretical guarantees on privacy and Byzantine-resilience, as well as a convergence guarantee and experimental results validating our theoretical findings.
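The abstract names FLTrust as the building block, and that plaintext aggregation rule is well documented: the federator computes a reference update on its small representative (root) dataset, weights each client update by its ReLU-clipped cosine similarity to that reference, rescales accepted updates to the reference magnitude, and averages. A minimal numpy sketch of that rule, as a point of reference only; it omits everything that makes LoByITFL information-theoretically private (the secret sharing and the trusted-third-party preprocessing):

```python
import numpy as np

def fltrust_aggregate(client_updates, root_update):
    """FLTrust-style robust aggregation (plaintext sketch).

    client_updates: list of 1-D numpy arrays, one flattened update per client.
    root_update:    1-D numpy array computed by the federator on its
                    small representative (root) dataset.
    """
    g0_norm = np.linalg.norm(root_update)
    weighted_sum = np.zeros_like(root_update)
    total_trust = 0.0
    for g in client_updates:
        g_norm = np.linalg.norm(g)
        if g_norm == 0:
            continue
        # Trust score: cosine similarity to the root update, clipped at 0
        # (ReLU) so updates pointing away from the reference get no weight.
        trust = max(0.0, float(np.dot(g, root_update)) / (g_norm * g0_norm))
        # Rescale each accepted update to the root update's magnitude so
        # Byzantine clients cannot dominate by inflating their updates.
        weighted_sum += trust * (g0_norm / g_norm) * g
        total_trust += trust
    return weighted_sum / total_trust if total_trust > 0 else root_update
```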
Related papers
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
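For orientation, $f$-DP measures privacy by the trade-off between the type-I and type-II errors of any test distinguishing two neighboring datasets; the Gaussian mechanism corresponds to $\mu$-GDP with trade-off curve $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$. A small illustrative computation of that curve (generic textbook formula, not code from the paper):

```python
from scipy.stats import norm

def gaussian_tradeoff(alpha, mu):
    """Trade-off function G_mu of the Gaussian mechanism under f-DP:
    the smallest achievable type-II error of any test distinguishing
    neighboring datasets, given type-I error alpha."""
    return norm.cdf(norm.ppf(1.0 - alpha) - mu)

# Example: under 1-GDP, a test with 5% type-I error must incur at
# least ~74% type-II error -- quantifying the privacy protection.
print(f"{gaussian_tradeoff(0.05, 1.0):.3f}")  # ~0.740
```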
- Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning [4.152322723065285]
Federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private.
Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs).
arXiv Detail & Related papers (2024-07-26T22:44:41Z)
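The baseline MIA behind this trade-off is easy to state: trained models fit their training data, so a record on which the model's loss is unusually low is likely a member. A hedged sketch of that generic loss-threshold attack (a standard baseline, not necessarily the attack variant studied in the paper):

```python
import numpy as np

def loss_threshold_mia(per_example_losses, threshold):
    """Baseline membership inference: predict 'member' when the model's
    loss on a record falls below a threshold (e.g., the average training
    loss), exploiting the train/test generalization gap."""
    return per_example_losses < threshold

# Toy example: members tend to have lower loss than non-members.
member_losses = np.array([0.05, 0.10, 0.30])
nonmember_losses = np.array([0.90, 1.20, 0.40])
print(loss_threshold_mia(member_losses, 0.35))     # mostly True
print(loss_threshold_mia(nonmember_losses, 0.35))  # mostly False
```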
- Byzantine-Resilient Secure Aggregation for Federated Learning Without Privacy Compromises [4.242342898338019]
Federated learning (FL) shows great promise in large scale machine learning, but brings new risks in terms of privacy and security.
We propose ByITFL, a novel scheme for FL that provides resilience against Byzantine users while keeping the users' data private from the federator and from other users.
arXiv Detail & Related papers (2024-05-14T15:37:56Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
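For context, the core of SecAgg-style protocols is pairwise additive masking: every pair of clients shares a random mask that one adds and the other subtracts, so the masks cancel in the server's sum and only the aggregate is revealed; the paper's point is that this aggregate alone can still leak membership. A toy sketch of the masking idea (real-valued, with fresh randomness standing in for PRG outputs, and no dropout handling):

```python
import numpy as np

def masked_updates(updates, seed=0):
    """Pairwise additive masking: client i adds a shared random mask for
    every j > i and subtracts it for every j < i, so all masks cancel
    when the server sums the masked updates."""
    rng = np.random.default_rng(seed)
    n, d = len(updates), len(updates[0])
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = rng.normal(size=d)  # stands in for PRG(shared key)
            masked[i] += mask
            masked[j] -= mask
    return masked

updates = [np.ones(4) * (i + 1) for i in range(3)]
print(sum(masked_updates(updates)))  # equals sum(updates): [6. 6. 6. 6.]
```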
- FewFedPIT: Towards Privacy-preserving and Few-shot Federated Instruction Tuning [54.26614091429253]
Federated instruction tuning (FedIT) is a promising solution that consolidates collaborative training across multiple data owners.
FedIT encounters limitations such as the scarcity of instruction data and the risk of exposure to training data extraction attacks.
We propose FewFedPIT, designed to simultaneously enhance privacy protection and model performance of federated few-shot learning.
arXiv Detail & Related papers (2024-03-10T08:41:22Z)
- TernaryVote: Differentially Private, Communication Efficient, and Byzantine Resilient Distributed Optimization on Heterogeneous Data [50.797729676285876]
We propose TernaryVote, which combines a ternary compressor and the majority vote mechanism to realize differential privacy, gradient compression, and Byzantine resilience simultaneously.
We theoretically quantify the privacy guarantee through the lens of the emerging f-differential privacy (DP) and the Byzantine resilience of the proposed algorithm.
arXiv Detail & Related papers (2024-02-16T16:41:14Z)
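The two ingredients named here are individually standard, and a rough illustration is possible even though the paper's exact compressor and DP calibration are not reproduced below: a stochastic ternary compressor maps each gradient coordinate to {-1, 0, +1}, and the server takes a coordinate-wise majority vote (the sign of the sum), which a minority of Byzantine votes cannot flip.

```python
import numpy as np

def ternary_compress(g, rng):
    """Stochastic ternary quantization: coordinate g_k becomes
    sign(g_k) with probability |g_k| / max|g| and 0 otherwise,
    an unbiased estimate of g up to the common scale factor."""
    scale = np.max(np.abs(g))
    if scale == 0:
        return np.zeros_like(g, dtype=int)
    probs = np.abs(g) / scale
    return (np.sign(g) * (rng.random(g.shape) < probs)).astype(int)

def majority_vote(ternary_votes):
    """Coordinate-wise majority vote over clients' ternary messages:
    the sign of the sum, robust to a minority of adversarial votes."""
    return np.sign(np.sum(ternary_votes, axis=0)).astype(int)

rng = np.random.default_rng(0)
grads = [np.array([0.9, -0.5, 0.1]) + 0.1 * rng.normal(size=3) for _ in range(9)]
votes = [ternary_compress(g, rng) for g in grads]
print(majority_vote(votes))  # expected close to [1, -1, ...]
```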
- PROFL: A Privacy-Preserving Federated Learning Method with Stringent Defense Against Poisoning Attacks [2.6487166137163007]
Federated Learning (FL) faces two major issues: privacy leakage and poisoning attacks.
We propose a novel privacy-preserving Byzantine-robust FL framework PROFL.
PROFL is based on the two-trapdoor additive homomorphic encryption algorithm and blinding techniques.
arXiv Detail & Related papers (2023-12-02T06:34:37Z)
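The additively homomorphic property such schemes rely on lets the aggregator combine ciphertexts so that plaintexts add: Dec(Enc(a) * Enc(b)) = a + b. A toy single-trapdoor Paillier example with deliberately insecure hard-coded primes, shown only to illustrate this property; PROFL's two-trapdoor variant and blinding techniques are not reproduced here:

```python
from math import gcd
import random

# Toy Paillier parameters (INSECURE, for illustration only).
p, q = 47, 59
n = p * q                                      # public modulus
n2 = n * n
g = n + 1                                      # standard generator choice
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), private key

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)            # decryption helper, private

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    return (L(pow(c, lam, n2)) * mu) % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = 123, 456
assert decrypt((encrypt(a) * encrypt(b)) % n2) == a + b
print("Dec(Enc(a)*Enc(b)) =", decrypt((encrypt(a) * encrypt(b)) % n2))
```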
- Byzantine-Robust Federated Learning with Variance Reduction and Differential Privacy [6.343100139647636]
Federated learning (FL) is designed to preserve data privacy during model training.
However, FL is vulnerable to both privacy attacks and Byzantine attacks.
We propose a new FL scheme that guarantees rigorous privacy and simultaneously enhances system robustness against Byzantine attacks.
arXiv Detail & Related papers (2023-09-07T01:39:02Z)
- Active Membership Inference Attack under Local Differential Privacy in Federated Learning [18.017082794703555]
Federated learning (FL) was originally regarded as a framework for collaborative learning among clients with data privacy protection.
We propose a new active membership inference (AMI) attack carried out by a dishonest server in FL.
arXiv Detail & Related papers (2023-02-24T15:21:39Z)
- Byzantine-Robust Federated Learning with Optimal Statistical Rates and Privacy Guarantees [123.0401978870009]
We propose Byzantine-robust federated learning protocols with nearly optimal statistical rates.
We benchmark against competing protocols and show the empirical superiority of the proposed protocols.
Our protocols with bucketing can be naturally combined with privacy-guaranteeing procedures to introduce security against a semi-honest server.
arXiv Detail & Related papers (2022-05-24T04:03:07Z)
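Bucketing in this line of work first randomly partitions the client updates into small buckets and averages within each, which reduces the heterogeneity seen by the downstream robust aggregator; a robust rule is then applied to the bucket means. A short sketch of that composition, with the coordinate-wise median chosen here for concreteness (the paper covers a family of aggregators):

```python
import numpy as np

def bucketing_robust_aggregate(updates, bucket_size, rng):
    """Randomly partition updates into buckets, average within each
    bucket, then apply a robust aggregator (coordinate-wise median)
    to the bucket means. Averaging first tames the data heterogeneity
    that plain robust aggregators handle poorly."""
    updates = np.asarray(updates, dtype=float)
    perm = rng.permutation(len(updates))
    bucket_means = [
        updates[perm[i:i + bucket_size]].mean(axis=0)
        for i in range(0, len(updates), bucket_size)
    ]
    return np.median(bucket_means, axis=0)

rng = np.random.default_rng(0)
honest = [np.array([1.0, 1.0]) + 0.1 * rng.normal(size=2) for _ in range(8)]
byzantine = [np.array([100.0, -100.0])] * 2   # malicious outliers
print(bucketing_robust_aggregate(honest + byzantine, bucket_size=2, rng=rng))
# close to [1, 1] despite the outliers
```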
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
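Client-level DP-FedAvg, whose clipping this paper analyzes, has a simple round structure: clip each client update to an L2 norm bound, average the clipped updates, and add Gaussian noise scaled to the per-client sensitivity. A minimal sketch with the noise multiplier left as an explicit parameter; calibrating it to a target privacy budget is exactly what analyses like this inform:

```python
import numpy as np

def dp_fedavg_round(client_updates, clip_norm, noise_multiplier, rng):
    """One aggregation round of client-level DP-FedAvg:
    clip each client's update to L2 norm `clip_norm`, average,
    and add Gaussian noise scaled to the per-client sensitivity."""
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / norm) if norm > 0 else u)
    avg = np.mean(clipped, axis=0)
    # Each client changes the sum by at most clip_norm, hence the
    # average by clip_norm / n; the noise is calibrated accordingly.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(scale=sigma, size=avg.shape)

rng = np.random.default_rng(0)
updates = [rng.normal(size=5) for _ in range(100)]
print(dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=1.0, rng=rng))
```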
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.