LightSecAgg: Rethinking Secure Aggregation in Federated Learning
- URL: http://arxiv.org/abs/2109.14236v1
- Date: Wed, 29 Sep 2021 07:19:27 GMT
- Title: LightSecAgg: Rethinking Secure Aggregation in Federated Learning
- Authors: Chien-Sheng Yang, Jinhyun So, Chaoyang He, Songze Li, Qian Yu, Salman
Avestimehr
- Abstract summary: We show that LightSecAgg achieves the same privacy and dropout-resiliency guarantees as the state-of-the-art protocols.
We also demonstrate that LightSecAgg significantly reduces the total training time, achieving a performance gain of up to $12.7\times$ over baselines.
- Score: 24.834891926133594
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Secure model aggregation is a key component of federated learning (FL) that
aims at protecting the privacy of each user's individual model, while allowing
their global aggregation. It can be applied to any aggregation-based
approaches, including algorithms for training a global model, as well as
personalized FL frameworks. Model aggregation also needs to be resilient to
likely user dropouts in FL systems, which makes its design substantially more
complex. State-of-the-art secure aggregation protocols essentially rely on
secret sharing of the random seeds that are used for mask generation at the
users, in order to enable the reconstruction and cancellation of the masks
belonging to dropped users. The complexity of such approaches, however, grows
substantially with the number of dropped users. We propose a new approach,
named LightSecAgg, to overcome this bottleneck by turning the focus from
"random-seed reconstruction of the dropped users" to "one-shot aggregate-mask
reconstruction of the active users". More specifically, in LightSecAgg each
user protects its local model by generating a single random mask. This mask is
then encoded and shared with other users, in such a way that the aggregate mask
of any sufficiently large set of active users can be reconstructed directly at
the server via encoded masks. We show that LightSecAgg achieves the same
privacy and dropout-resiliency guarantees as the state-of-the-art protocols,
while significantly reducing the overhead for resiliency to dropped users.
Furthermore, our system optimization helps to hide the runtime cost of offline
processing by parallelizing it with model training. We evaluate LightSecAgg via
extensive experiments for training diverse models on various datasets in a
realistic FL system, and demonstrate that LightSecAgg significantly reduces the
total training time, achieving a performance gain of up to $12.7\times$ over
baselines.
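To make the "one-shot aggregate-mask reconstruction" idea concrete, here is a minimal toy sketch of the mechanism the abstract describes. This is not the authors' implementation: it uses scalar models, a small prime field, and plain Shamir secret sharing as a stand-in for the paper's MDS-coded mask encoding, and every name in it (share_mask, lagrange_at_zero, and so on) is illustrative.

```python
# Toy sketch of one-shot aggregate-mask reconstruction, the core idea behind
# LightSecAgg. Illustrative only: scalar "models", a small prime field, and
# Shamir sharing standing in for the paper's MDS-coded mask encoding.
import random

P = 2**31 - 1          # prime field modulus (toy choice)
N, T = 6, 2            # N users; privacy against up to T colluding users

def share_mask(mask, n=N, t=T):
    """Shamir-share `mask`: evaluate a random degree-t polynomial with
    constant term `mask` at points 1..n (one share per user)."""
    coeffs = [mask] + [random.randrange(P) for _ in range(t)]
    return [sum(c * pow(j, k, P) for k, c in enumerate(coeffs)) % P
            for j in range(1, n + 1)]

def lagrange_at_zero(points):
    """Interpolate the polynomial through `points` and evaluate it at 0."""
    total = 0
    for xj, yj in points:
        num, den = 1, 1
        for xm, _ in points:
            if xm != xj:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        total = (total + yj * num * pow(den, P - 2, P)) % P
    return total

# --- Offline phase: each user picks ONE random mask and shares it. ---
models = [random.randrange(1000) for _ in range(N)]   # toy local "models"
masks  = [random.randrange(P) for _ in range(N)]
shares = [share_mask(z) for z in masks]               # shares[i][j] -> user j

# --- Online phase: users upload masked models; some drop out. ---
survivors = [0, 1, 3, 4, 5]                           # user 2 dropped
masked_sum = sum((models[i] + masks[i]) % P for i in survivors) % P

# Each surviving user j sends ONE message: the sum of the shares it holds
# for the surviving users -- itself a share of the aggregate mask.
agg_shares = [(j + 1, sum(shares[i][j] for i in survivors) % P)
              for j in survivors[:T + 1]]             # any T+1 suffice

# --- Server: one-shot reconstruction of the aggregate mask, then unmask. ---
agg_mask = lagrange_at_zero(agg_shares)
recovered = (masked_sum - agg_mask) % P
assert recovered == sum(models[i] for i in survivors) % P
print("aggregate of surviving models:", recovered)
```

The property worth noting is that the server's recovery cost is independent of how many users dropped: any T+1 surviving users each send a single aggregated share, and one interpolation recovers the aggregate mask. Seed-sharing protocols instead reconstruct one seed per dropped user, which is the growing overhead the abstract refers to.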
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation [7.200910949076064]
Federated Learning (FL) enables multiple clients to collaboratively train a model without sharing their local data.
Yet the FL system is vulnerable to well-designed Byzantine attacks, which aim to disrupt the model training process by uploading malicious model updates.
We propose the Layer-Adaptive Sparsified Model Aggregation (LASA) approach, which combines pre-aggregation sparsification with layer-wise adaptive aggregation to improve robustness.
arXiv Detail & Related papers (2024-09-02T19:28:35Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control of the local training process leaves the global model susceptible to malicious manipulations on model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Guaranteeing Data Privacy in Federated Unlearning with Dynamic User Participation [21.07328631033828]
Federated Unlearning (FU) can eliminate the influence of Federated Learning (FL) users' data from trained global FL models.
A straightforward FU method involves removing the unlearned users and subsequently retraining a new global FL model from scratch with all remaining users.
We propose a privacy-preserving FU framework, aimed at ensuring privacy while effectively managing dynamic user participation.
arXiv Detail & Related papers (2024-06-03T03:39:07Z)
- Scale-MIA: A Scalable Model Inversion Attack against Secure Federated Learning via Latent Space Reconstruction [26.9559481641707]
Federated learning is known for its capability to safeguard participants' data privacy.
Recently emerged model inversion attacks (MIAs) have shown that a malicious parameter server can reconstruct individual users' local data samples through model updates.
We propose Scale-MIA, a novel MIA capable of efficiently and accurately recovering training samples of clients from the aggregated updates.
arXiv Detail & Related papers (2023-11-10T00:53:22Z)
- Federated Learning Under Restricted User Availability [3.0846824529023387]
Non-uniform availability or participation of users is unavoidable due to an adverse or unreliable environment.
We propose a new formulation of the FL problem which effectively captures and mitigates limited participation of data originating from infrequent, or restricted users.
Our experiments on synthetic and benchmark datasets show that the proposed approach significantly improves performance compared with standard FL.
arXiv Detail & Related papers (2023-09-25T14:40:27Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning (FU) aims to remove a specified target client's contribution in FL to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
- Robust Quantity-Aware Aggregation for Federated Learning [72.59915691824624]
Malicious clients can poison model updates and claim large data quantities to amplify the impact of their model updates in the model aggregation.
Existing defense methods for FL, while all handling malicious model updates, either treat all quantities as benign or simply ignore/truncate the quantities of all clients.
We propose a robust quantity-aware aggregation algorithm for federated learning, called FedRA, to perform the aggregation with awareness of local data quantities.
arXiv Detail & Related papers (2022-05-22T15:13:23Z)
- Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z)
- Multi-Center Federated Learning [62.32725938999433]
Federated learning (FL) can protect data privacy in distributed learning.
It merely collects local gradients from users without access to their data.
We propose a novel multi-center aggregation mechanism.
arXiv Detail & Related papers (2021-08-19T12:20:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.