ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment
- URL: http://arxiv.org/abs/2302.09913v4
- Date: Fri, 06 Jun 2025 16:07:58 GMT
- Title: ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment
- Authors: Tayyebeh Jahani-Nezhad, Mohammad Ali Maddah-Ali, Giuseppe Caire
- Abstract summary: ByzSecAgg is an efficient secure aggregation scheme for federated learning that is resistant to Byzantine attacks and privacy leakage.
- Score: 61.540831911168226
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose ByzSecAgg, an efficient secure aggregation scheme for federated learning that is resistant to Byzantine attacks and privacy leakage. Processing individual updates to manage adversarial behavior, while preserving the privacy of the data against colluding nodes, requires some form of secure secret sharing. However, the communication load for secret sharing of long update vectors can be very high. In federated settings, where users are often edge devices with potential bandwidth constraints, excessive communication overhead is undesirable. ByzSecAgg solves this problem by partitioning local updates into smaller sub-vectors and sharing them using ramp secret sharing. However, this sharing method does not admit bilinear computations, such as pairwise distance calculations, which are needed for distance-based outlier-detection algorithms and other effective methods for mitigating Byzantine attacks. To overcome this issue, each user runs another round of ramp sharing, with a different embedding of the data in the sharing polynomial. This technique, motivated by ideas from coded computing, enables secure computation of pairwise distances. In addition, to maintain the integrity and privacy of the local updates, ByzSecAgg uses a vector commitment method in which the commitment size remains constant (i.e., it does not grow with the length of the local update) while still allowing verification of the secret sharing process. In terms of communication load, ByzSecAgg significantly outperforms the related baseline scheme known as BREA.
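The two ingredients described above can be illustrated with a toy example: ramp-style sharing of a partitioned update (whose additivity lets shares from different users be aggregated directly) and a second, reversed embedding that places an inner product in one coefficient of a product polynomial. The Python sketch below uses illustrative parameters (a small partition count, scalar sub-values, and no masking in the additivity check) and is not the paper's exact construction.

```python
# Minimal sketch (not the paper's exact construction) of two ingredients behind
# ByzSecAgg-style aggregation: (1) ramp/Shamir-style sharing of a partitioned
# update, whose additivity lets the server aggregate shares directly, and
# (2) the coded-computing embedding that places an inner product in one
# coefficient of a product polynomial. P, M and the evaluation points are
# illustrative choices.
import random

P = 2_147_483_647      # prime modulus; assumes updates are quantized into GF(P)
M = 4                  # number of sub-vectors each local update is split into

def poly_eval(coeffs, x):
    """Evaluate sum_k coeffs[k] * x**k modulo P (Horner's rule)."""
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

def ramp_shares(sub_values, points, t):
    """Share M sub-values; t random terms give privacy against t colluders."""
    coeffs = list(sub_values) + [random.randrange(P) for _ in range(t)]
    return [poly_eval(coeffs, a) for a in points]

# Two users' partitioned updates (one scalar per sub-vector, for brevity).
u = [3, 1, 4, 1]
v = [2, 7, 1, 8]
points = list(range(1, 2 * M))   # distinct non-zero evaluation points

# (1) Additive homomorphism (shown with t = 0 so randomness does not interfere):
# the sum of shares is a valid sharing of the summed update, so the server can
# reconstruct only the aggregate, never an individual update.
su, sv = ramp_shares(u, points, 0), ramp_shares(v, points, 0)
agg = [(a + b) % P for a, b in zip(su, sv)]
assert agg == ramp_shares([(a + b) % P for a, b in zip(u, v)], points, 0)

# (2) Bilinear trick: embed v in reverse order, so the coefficient of x^(M-1)
# in F(x) * G(x) equals <u, v>. ByzSecAgg uses this kind of second embedding
# (plus masking) so that pairwise distances can be computed on shares.
f = list(u)              # F(x) = u1 + u2*x + ... + uM*x^(M-1)
g = list(reversed(v))    # G(x) = vM + ... + v1*x^(M-1)
prod = [0] * (2 * M - 1)
for i, a in enumerate(f):
    for j, b in enumerate(g):
        prod[i + j] = (prod[i + j] + a * b) % P
assert prod[M - 1] == sum(a * b for a, b in zip(u, v)) % P
print("inner product recovered from product polynomial:", prod[M - 1])
```

Reconstruction of the aggregate from a sufficient number of shares, the constant-size vector commitments that verify the sharing, and the distance-based outlier filtering are the parts ByzSecAgg builds on top of this skeleton.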
Related papers
- PREAMBLE: Private and Efficient Aggregation of Block Sparse Vectors and Applications [42.968231105076335]
We revisit the problem of secure aggregation of high-dimensional vectors in a two-server system such as Prio.
PREAMBLE is a novel extension of distributed point functions that enables communication- and computation-efficient aggregation.
arXiv Detail & Related papers (2025-03-14T21:58:15Z)
- Fundamental Limits of Hierarchical Secure Aggregation with Cyclic User Association [93.46811590752814]
Hierarchical secure aggregation (HSA) is motivated by federated learning.
In this paper, we consider HSA with a cyclic association pattern where each user is connected to $B$ consecutive relays.
We propose an efficient aggregation scheme which includes a message design for the inputs inspired by gradient coding.
arXiv Detail & Related papers (2025-03-06T15:53:37Z)
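The "message design for the inputs inspired by gradient coding" mentioned above refers to a standard coding idea: each data partition is replicated across several users with coefficients chosen so that the desired sum is recoverable from a subset of replies. Below is the classic 3-worker, 1-straggler gradient-coding construction as a generic Python illustration; it is not the hierarchical aggregation scheme of that paper.

```python
# Classic gradient-coding example (3 workers, any 1 may be missing): each
# worker sends one coded combination of its local partial gradients, and the
# server recovers the full sum g1 + g2 + g3 from any two replies.
import numpy as np

g1, g2, g3 = np.array([1.0, 2.0]), np.array([3.0, -1.0]), np.array([0.5, 4.0])

# Encoding: what each worker transmits.
w1 = 0.5 * g1 + g2
w2 = g2 - g3
w3 = 0.5 * g1 + g3

# Decoding coefficients for each set of two surviving workers.
decoders = {
    (1, 2): {1: 2.0, 2: -1.0},
    (1, 3): {1: 1.0, 3: 1.0},
    (2, 3): {2: 1.0, 3: 2.0},
}
messages = {1: w1, 2: w2, 3: w3}
target = g1 + g2 + g3
for survivors, coeffs in decoders.items():
    recovered = sum(coeffs[k] * messages[k] for k in survivors)
    assert np.allclose(recovered, target)
print("full gradient sum recovered from any 2 of 3 workers")
```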
- NET-SA: An Efficient Secure Aggregation Architecture Based on In-Network Computing [10.150846654917753]
NET-SA is an efficient secure aggregation architecture for machine learning.
It reduces communication overhead by eliminating the communication-intensive phases of seed agreement and secret sharing.
It achieves up to 77x and 12x improvements in runtime and a 2x decrease in total client communication cost compared with state-of-the-art methods.
arXiv Detail & Related papers (2025-01-02T10:27:06Z)
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
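The double-masking technique mentioned above for SecAgg builds on pairwise masks that cancel in the aggregate. The sketch below shows only that pairwise-cancellation idea with toy seeds and parameters; SecAgg's second self-mask and its dropout recovery via secret-shared seeds are omitted.

```python
# Minimal sketch of the pairwise-masking idea behind SecAgg: every pair of
# users derives a common mask from a shared seed; one adds it, the other
# subtracts it, so all masks cancel in the server's sum while each individual
# masked update looks random. The "double" (self) mask and dropout handling
# are left out for brevity.
import numpy as np

P = 2**31 - 1        # arithmetic modulus (illustrative)
DIM = 5              # model-update dimension (illustrative)
users = [0, 1, 2]
updates = {u: np.random.randint(0, 100, size=DIM, dtype=np.int64) for u in users}

def pairwise_mask(i, j):
    """Mask shared by users i and j, derived from a common (toy) seed."""
    rng = np.random.default_rng(seed=1_000_003 * min(i, j) + max(i, j))
    return rng.integers(0, P, size=DIM, dtype=np.int64)

def masked_update(u):
    y = updates[u].copy()
    for v in users:
        if v == u:
            continue
        m = pairwise_mask(u, v)
        y = (y + m) % P if u < v else (y - m) % P   # opposite signs per pair
    return y

server_sum = sum(masked_update(u) for u in users) % P
assert np.array_equal(server_sum, sum(updates.values()) % P)
print("aggregate recovered; individual updates stay masked")
```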
- Privacy Preserving Semi-Decentralized Mean Estimation over Intermittently-Connected Networks [59.43433767253956]
We consider the problem of privately estimating the mean of vectors distributed across different nodes of an unreliable wireless network.
In a semi-decentralized setup, nodes can collaborate with their neighbors to compute a local consensus, which they relay to a central server.
We study the tradeoff between collaborative relaying and privacy leakage due to the data sharing among nodes.
arXiv Detail & Related papers (2024-06-06T06:12:15Z)
- RFLPA: A Robust Federated Learning Framework against Poisoning Attacks with Secure Aggregation [2.2667044928324747]
Federated learning (FL) allows multiple devices to train a model collaboratively without sharing their data.
Despite its benefits, FL is vulnerable to privacy leakage and poisoning attacks.
We propose a robust federated learning framework against poisoning attacks (RFLPA) based on SecAgg protocol.
arXiv Detail & Related papers (2024-05-24T03:31:10Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
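One simple way to realize correlated additive perturbations of the kind discussed above is to draw noise jointly across users so that it sums to (approximately) zero: each transmitted update is perturbed, but the aggregate is barely degraded. The Python sketch below shows this generic construction; it is not the optimized perturbation design of the cited paper.

```python
# Toy illustration of correlated additive perturbations: per-user noise is
# constructed to sum to zero across users, so the server-side (or over-the-air)
# sum is exact while every individual transmission remains perturbed.
import numpy as np

rng = np.random.default_rng(0)
num_users, dim = 4, 6
updates = rng.normal(size=(num_users, dim))

# Draw i.i.d. noise, then subtract the per-coordinate mean so it sums to zero
# across users (one simple way to obtain "correlated" perturbations).
noise = rng.normal(scale=5.0, size=(num_users, dim))
noise -= noise.mean(axis=0, keepdims=True)

perturbed = updates + noise
aggregate = perturbed.sum(axis=0)     # what the server / the channel sums
assert np.allclose(aggregate, updates.sum(axis=0))
print("per-user updates are heavily perturbed, the aggregate is exact")
```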
- On Differential Privacy for Federated Learning in Wireless Systems with Multiple Base Stations [90.53293906751747]
We consider a federated learning model in a wireless system with multiple base stations and inter-cell interference.
We show the convergence behavior of the learning process by deriving an upper bound on its optimality gap.
Our proposed scheduler improves the average accuracy of the predictions compared with a random scheduler.
arXiv Detail & Related papers (2022-08-25T03:37:11Z)
- Mining Relations among Cross-Frame Affinities for Video Semantic Segmentation [87.4854250338374]
We explore relations among affinities in two aspects: single-scale intrinsic correlations and multi-scale relations.
Our experiments demonstrate that the proposed method performs favorably against state-of-the-art VSS methods.
arXiv Detail & Related papers (2022-07-21T12:12:36Z)
- Scaling the Wild: Decentralizing Hogwild!-style Shared-memory SGD [29.6870062491741]
Hogwild! is a go-to approach for parallelizing SGD in a shared-memory setting.
In this paper, we propose incorporating decentralized distributed memory with each node running parallel shared-memory SGD itself.
arXiv Detail & Related papers (2022-03-13T11:52:24Z)
- Secure Byzantine-Robust Distributed Learning via Clustering [16.85310886805588]
Designing federated learning systems that jointly preserve Byzantine robustness and privacy remains an open problem.
We propose SHARE, a distributed learning framework designed to cryptographically preserve client update privacy and robustness to Byzantine adversaries simultaneously.
arXiv Detail & Related papers (2021-10-06T17:40:26Z)
- Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of potentially Byzantine peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z)
- FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning [18.237186837994585]
A 'secure aggregation' protocol enables the server to aggregate clients' models in a privacy-preserving manner.
FastSecAgg is efficient in terms of computation and communication, and robust to client dropouts.
arXiv Detail & Related papers (2020-09-23T16:49:02Z)