ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated
Learning Based on Coded Computing and Vector Commitment
- URL: http://arxiv.org/abs/2302.09913v3
- Date: Fri, 2 Jun 2023 10:06:49 GMT
- Title: ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated
Learning Based on Coded Computing and Vector Commitment
- Authors: Tayyebeh Jahani-Nezhad and Mohammad Ali Maddah-Ali and Giuseppe Caire
- Abstract summary: ByzSecAgg is an efficient secure aggregation scheme for federated learning.
It is protected against Byzantine attacks and privacy leakages.
- Score: 90.60126724503662
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose ByzSecAgg, an efficient secure aggregation scheme
for federated learning that is protected against Byzantine attacks and privacy
leakages. Processing individual updates to manage adversarial behavior, while
preserving the privacy of the data against colluding nodes, requires some form of
secure secret sharing. However, the communication load of secret sharing for
long update vectors can be very high. ByzSecAgg addresses this problem by
partitioning local updates into smaller sub-vectors and sharing them using ramp
secret sharing. However, this sharing method does not admit the bilinear
computations, such as pairwise distance calculations, needed by
outlier-detection algorithms. To overcome this issue, each user runs another
round of ramp sharing with a different embedding of the data in the sharing
polynomial. This technique, motivated by ideas from coded computing, enables
secure computation of pairwise distances. In addition, to maintain the integrity
and privacy of the local update, ByzSecAgg also uses a vector commitment
method, in which the commitment size remains constant (i.e., it does not increase
with the length of the local update) while simultaneously allowing
verification of the secret sharing process. In terms of communication load,
ByzSecAgg significantly outperforms the state-of-the-art scheme, known as BREA.
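To make the sharing step described above more concrete, here is a minimal, self-contained sketch (illustrative assumptions, not the paper's exact construction) of how ramp secret sharing of partitioned sub-vectors, combined with a second sharing round that embeds the data in reverse order in the polynomial, lets nodes expose a pairwise inner product through one coefficient of a product polynomial. The field size, the number of sub-vectors M, the privacy threshold T, and all function names are assumptions for illustration; pairwise distances then follow from inner products via ||w_i - w_j||^2 = <w_i, w_i> - 2<w_i, w_j> + <w_j, w_j>.

```python
# Minimal sketch (illustrative assumptions, not ByzSecAgg's exact construction):
# ramp-share a partitioned update with two data embeddings so that nodes can
# jointly recover a pairwise inner product, from which pairwise distances follow.
import random

P = 2**31 - 1            # toy prime field; a real system would use a larger field
M = 4                    # number of sub-vectors per local update (ramp parameter)
T = 2                    # privacy threshold: any T colluding nodes learn nothing
N = 2 * (M + T) - 1      # evaluations needed to interpolate the product polynomial


def partition(update, m):
    """Split a flat update vector into m equal-length sub-vectors."""
    step = len(update) // m
    return [update[k * step:(k + 1) * step] for k in range(m)]


def ramp_share(sub_vectors, alphas, reverse=False):
    """Evaluate f(x) = sum_k w_k x^{e_k} + sum_j r_j x^{m+j} at the public points.

    Data sub-vectors sit at exponents 0..m-1 (or m-1..0 if reverse=True);
    random masking vectors sit at exponents m..m+T-1, hiding the data from
    any T colluding nodes.
    """
    m, length = len(sub_vectors), len(sub_vectors[0])
    noise = [[random.randrange(P) for _ in range(length)] for _ in range(T)]
    shares = []
    for a in alphas:
        s = [0] * length
        for k, w in enumerate(sub_vectors):
            c = pow(a, (m - 1 - k) if reverse else k, P)
            s = [(si + c * wi) % P for si, wi in zip(s, w)]
        for j, r in enumerate(noise):
            c = pow(a, m + j, P)
            s = [(si + c * ri) % P for si, ri in zip(s, r)]
        shares.append(s)
    return shares


def poly_mul(a, b):
    """Multiply two polynomials (coefficient lists, lowest degree first) over GF(P)."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % P
    return out


def interpolate(xs, ys):
    """Lagrange interpolation over GF(P); returns coefficients, lowest degree first."""
    coeffs = [0] * len(xs)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, denom = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = poly_mul(num, [(-xj) % P, 1])
                denom = denom * (xi - xj) % P
        scale = yi * pow(denom, P - 2, P) % P
        for k, c in enumerate(num):
            coeffs[k] = (coeffs[k] + scale * c) % P
    return coeffs


if __name__ == "__main__":
    random.seed(0)
    w_i = [random.randrange(100) for _ in range(8)]    # toy local updates
    w_j = [random.randrange(100) for _ in range(8)]
    alphas = list(range(1, N + 1))                     # distinct public evaluation points

    f_shares = ramp_share(partition(w_i, M), alphas)                 # first embedding
    g_shares = ramp_share(partition(w_j, M), alphas, reverse=True)   # reversed embedding

    # Each node n sees only its own shares and computes one scalar locally.
    y = [sum(a * b for a, b in zip(f, g)) % P for f, g in zip(f_shares, g_shares)]

    # The coefficient of x^{M-1} of the interpolated product polynomial is <w_i, w_j>:
    # only data terms with matching sub-vector indices multiply into that exponent.
    recovered = interpolate(alphas, y)[M - 1]
    assert recovered == sum(a * b for a, b in zip(w_i, w_j)) % P
    print("recovered <w_i, w_j> =", recovered)
```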
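The constant-size commitment claim can likewise be illustrated with a toy example. The sketch below uses a Pedersen-style vector commitment over a small multiplicative group: the commitment is a single group element no matter how long the committed vector is. This is only an illustration of the constant-size property under assumed toy parameters, not the commitment scheme ByzSecAgg actually uses (which additionally supports verifying the secret-sharing process).

```python
# Toy illustration (assumed parameters, not ByzSecAgg's scheme): a Pedersen-style
# vector commitment whose size does not grow with the committed vector's length.
import hashlib
import random

Q = 2**127 - 1   # toy prime modulus; real deployments use much larger, carefully chosen groups


def base(i):
    """Derive the i-th public base element deterministically (nothing-up-my-sleeve)."""
    digest = hashlib.sha256(f"vc-base-{i}".encode()).digest()
    return pow(int.from_bytes(digest, "big") % Q, 2, Q)  # square into the quadratic residues


def commit(values, blind):
    """Commit to an arbitrarily long vector: C = base(0)^blind * prod_i base(i)^{v_i} mod Q."""
    c = pow(base(0), blind, Q)                     # blinding term hides the committed values
    for i, v in enumerate(values, start=1):
        c = c * pow(base(i), v, Q) % Q
    return c                                       # one group element, independent of len(values)


if __name__ == "__main__":
    update = [random.randrange(1 << 16) for _ in range(10_000)]  # a long "local update"
    C = commit(update, blind=random.randrange(Q - 1))
    print(f"vector length = {len(update)}, commitment size = {C.bit_length()} bits")
```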
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation [2.2667044928324747]
Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data.
Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks.
We propose a robust federated learning framework against poisoning attacks (RFLPA) based on SecAgg protocol.
arXiv Detail & Related papers (2024-05-24T03:31:10Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Mining Relations among Cross-Frame Affinities for Video Semantic Segmentation [87.4854250338374]
We explore relations among affinities in two aspects: single-scale intrinsic correlations and multi-scale relations.
Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods.
arXiv Detail & Related papers (2022-07-21T12:12:36Z)
- Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD [29.6870062491741]
Hogwild! is a go-to approach for parallelizing SGD in a shared-memory setting.
In this paper, we propose incorporating decentralized distributed memory with each node running parallel shared-memory SGD itself.
arXiv Detail & Related papers (2022-03-13T11:52:24Z)
- Secure Byzantine-Robust Distributed Learning via Clustering [16.85310886805588]
Designing federated learning systems that jointly preserve Byzantine robustness and privacy has remained an open problem.
We propose SHARE, a distributed learning framework designed to cryptographically preserve client update privacy and robustness to Byzantine adversaries simultaneously.
arXiv Detail & Related papers (2021-10-06T17:40:26Z)
- Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z)
- FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning [18.237186837994585]
A 'secure aggregation' protocol enables the server to aggregate clients' models in a privacy-preserving manner.
FastSecAgg is efficient in terms of computation and communication, and robust to client dropouts.
arXiv Detail & Related papers (2020-09-23T16:49:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.