FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
- URL: http://arxiv.org/abs/2009.11248v1
- Date: Wed, 23 Sep 2020 16:49:02 GMT
- Title: FastSecAgg: Scalable Secure Aggregation for Privacy-Preserving Federated Learning
- Authors: Swanand Kadhe, Nived Rajaraman, O. Ozan Koyluoglu, Kannan Ramchandran
- Abstract summary: A 'secure aggregation' protocol enables the server to aggregate clients' models in a privacy-preserving manner.
FastSecAgg is efficient in terms of computation and communication, and robust to client dropouts.
- Score: 18.237186837994585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent attacks on federated learning demonstrate that keeping the training
data on clients' devices does not provide sufficient privacy, as the model
parameters shared by clients can leak information about their training data. A
'secure aggregation' protocol enables the server to aggregate clients' models
in a privacy-preserving manner. However, existing secure aggregation protocols
incur high computation/communication costs, especially when the number of model
parameters is larger than the number of clients participating in an iteration
-- a typical scenario in federated learning.
In this paper, we propose a secure aggregation protocol, FastSecAgg, that is
efficient in terms of computation and communication, and robust to client
dropouts. The main building block of FastSecAgg is a novel multi-secret sharing
scheme, FastShare, based on the Fast Fourier Transform (FFT), which may be of
independent interest. FastShare is information-theoretically secure, and
achieves a trade-off between the number of secrets, privacy threshold, and
dropout tolerance. Riding on the capabilities of FastShare, we prove that
FastSecAgg is (i) secure against the server colluding with 'any' subset of some
constant fraction (e.g. $\sim10\%$) of the clients in the honest-but-curious
setting; and (ii) tolerates dropouts of a 'random' subset of some constant
fraction (e.g. $\sim10\%$) of the clients. FastSecAgg achieves significantly
smaller computation cost than existing schemes while achieving the same
(orderwise) communication cost. In addition, it guarantees security against
adaptive adversaries, which can perform client corruptions dynamically during
the execution of the protocol.
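The abstract's main building block, FastShare, packs multiple secrets into one polynomial and generates shares by evaluating it at roots of unity, which is exactly the structure an FFT exploits. The toy sketch below illustrates only that share-generation idea over a small field (GF(257) with 8 shares, both chosen here for simplicity); it uses a naive O(n^2) transform, requires all shares for reconstruction, and the naive coefficient placement gives none of FastShare's privacy-threshold or dropout-tolerance guarantees.

```python
import random

# Toy parameters: 257 is a Fermat prime, and 3 is a primitive root mod 257,
# so GF(257) contains a primitive 8th root of unity.
p = 257
n = 8                                # number of shares (one per client)
w = pow(3, (p - 1) // n, p)          # primitive n-th root of unity in GF(p)

def dft(coeffs):
    # Evaluate the polynomial at w^0, ..., w^{n-1}.
    # A radix-2 FFT would do this in O(n log n); this naive loop is O(n^2).
    return [sum(c * pow(w, i * k, p) for k, c in enumerate(coeffs)) % p
            for i in range(n)]

def idft(vals):
    # Inverse transform: recover coefficients from all n evaluations.
    # Modular inverses are computed via Fermat's little theorem.
    w_inv = pow(w, p - 2, p)
    n_inv = pow(n, p - 2, p)
    return [n_inv * sum(v * pow(w_inv, i * k, p) for i, v in enumerate(vals)) % p
            for k in range(n)]

secrets = [42, 7, 99]                # several secrets packed into one polynomial
pad = [random.randrange(p) for _ in range(n - len(secrets))]  # blinding randomness
shares = dft(secrets + pad)          # share i is handed to client i
recovered = idft(shares)[:len(secrets)]
assert recovered == secrets          # all n shares together reconstruct the secrets
```

The trade-off the paper proves (number of secrets vs. privacy threshold vs. dropout tolerance) comes from how secrets and randomness are placed among the coefficients, which this sketch deliberately does not attempt.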
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Boosting Communication Efficiency of Federated Learning's Secure Aggregation [22.943966056320424]
Federated Learning (FL) is a decentralized machine learning approach where client devices train models locally and send them to a server.
FL is vulnerable to model inversion attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue by masking each client's trained model.
This poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces this overhead.
arXiv Detail & Related papers (2024-05-02T10:00:16Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Chu-ko-nu: A Reliable, Efficient, and Anonymously Authentication-Enabled Realization for Multi-Round Secure Aggregation in Federated Learning [13.64339376830805]
We propose a more reliable and anonymously authenticated scheme called Chu-ko-nu for secure aggregation.
Chu-ko-nu breaks the probability P barrier by supplementing a redistribution process of secret key components.
It can support clients anonymously participating in FL training and enables the server to authenticate clients effectively in the presence of attacks.
arXiv Detail & Related papers (2024-02-23T05:50:43Z)
- Robust and Actively Secure Serverless Collaborative Learning [48.01929996757643]
Collaborative machine learning (ML) is widely used to enable institutions to learn better models from distributed data.
While collaborative approaches to learning intuitively protect user data, they remain vulnerable to either the server, the clients, or both.
We propose a peer-to-peer (P2P) learning scheme that is secure against malicious servers and robust to malicious clients.
arXiv Detail & Related papers (2023-10-25T14:43:03Z)
- An Efficient and Multi-private Key Secure Aggregation for Federated Learning [41.29971745967693]
We propose an efficient and multi-private key secure aggregation scheme for federated learning.
Specifically, we skillfully modify the variant ElGamal encryption technique to achieve homomorphic addition operation.
For the high dimensional deep model parameter, we introduce a super-increasing sequence to compress multi-dimensional data into 1-D.
arXiv Detail & Related papers (2023-06-15T09:05:36Z)
- Client-specific Property Inference against Secure Aggregation in Federated Learning [52.8564467292226]
Federated learning has become a widely used paradigm for collaboratively training a common model among different participants.
Many attacks have shown that it is still possible to infer sensitive information such as membership, property, or outright reconstruction of participant data.
We show that simple linear models can effectively capture client-specific properties only from the aggregated model updates.
arXiv Detail & Related papers (2023-03-07T14:11:01Z)
- ByzSecAgg: A Byzantine-Resistant Secure Aggregation Scheme for Federated Learning Based on Coded Computing and Vector Commitment [90.60126724503662]
ByzSecAgg is an efficient secure aggregation scheme for federated learning.
ByzSecAgg is protected against Byzantine attacks and privacy leakages.
arXiv Detail & Related papers (2023-02-20T11:15:18Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Collusion Resistant Federated Learning with Oblivious Distributed Differential Privacy [4.951247283741297]
Privacy-preserving federated learning enables a population of distributed clients to jointly learn a shared model.
We present an efficient mechanism based on oblivious distributed differential privacy that is the first to protect against such client collusion.
We conclude with empirical analysis of the protocol's execution speed, learning accuracy, and privacy performance on two data sets.
arXiv Detail & Related papers (2022-02-20T19:52:53Z)
- Towards Bidirectional Protection in Federated Learning [70.36925233356335]
F2ED-LEARNING offers bidirectional defense against malicious centralized server and Byzantine malicious clients.
F2ED-LEARNING securely aggregates each shard's update and launches FilterL2 on updates from different shards.
Evaluation shows that F2ED-LEARNING consistently achieves optimal or close-to-optimal performance.
arXiv Detail & Related papers (2020-10-02T19:37:02Z)
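Several entries above (ACCESS-FL, CESA, the membership-inference analysis) build on Google's SecAgg, whose core trick is additive masks that cancel when the server sums the uploads. The sketch below is a minimal single-mask illustration of that cancellation under toy assumptions (a small prime modulus, no key agreement, and none of SecAgg's double masking or dropout recovery):

```python
import random

P = 2**31 - 1  # toy prime modulus; real deployments use appropriately sized fields

def pairwise_masks(client_ids, dim):
    # Each unordered pair (i, j) with i < j agrees on a random mask vector:
    # client i adds it, client j subtracts it, so all masks cancel in the sum.
    masks = {c: [0] * dim for c in client_ids}
    for a in range(len(client_ids)):
        for b in range(a + 1, len(client_ids)):
            i, j = client_ids[a], client_ids[b]
            m = [random.randrange(P) for _ in range(dim)]
            masks[i] = [(x + y) % P for x, y in zip(masks[i], m)]
            masks[j] = [(x - y) % P for x, y in zip(masks[j], m)]
    return masks

# Toy models: 3 clients, 4 parameters each (stand-ins for real model updates)
models = {c: [random.randrange(P) for _ in range(4)] for c in range(3)}
masks = pairwise_masks(list(models), 4)

# Each client uploads only its masked model; the server just sums the uploads.
uploads = {c: [(w + m) % P for w, m in zip(models[c], masks[c])] for c in models}
agg = [sum(col) % P for col in zip(*uploads.values())]
true_sum = [sum(col) % P for col in zip(*models.values())]
assert agg == true_sum  # masks cancel: the server learns only the aggregate
```

In this naive form a single dropped client leaves uncancelled masks in the sum, which is precisely the failure mode that SecAgg's secret-shared seeds, and FastSecAgg's FastShare-based design, are built to handle.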
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.