Fluent: Round-efficient Secure Aggregation for Private Federated Learning
- URL: http://arxiv.org/abs/2403.06143v1
- Date: Sun, 10 Mar 2024 09:11:57 GMT
- Title: Fluent: Round-efficient Secure Aggregation for Private Federated Learning
- Authors: Xincheng Li, Jianting Ning, Geong Sen Poh, Leo Yu Zhang, Xinchun Yin, Tianwei Zhang
- Abstract summary: Federated learning (FL) facilitates collaborative training of machine learning models among a large number of clients.
FL remains susceptible to vulnerabilities such as privacy inference and inversion attacks.
This work introduces Fluent, a round- and communication-efficient secure aggregation scheme for private FL.
- Score: 23.899922716694427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) facilitates collaborative training of machine
learning models among a large number of clients while safeguarding the privacy
of their local datasets. However, FL remains susceptible to vulnerabilities
such as privacy inference and inversion attacks. Single-server secure
aggregation schemes were proposed to address these threats, but they face
practical constraints due to their round and communication complexities. This
work introduces Fluent, a round- and communication-efficient secure aggregation
scheme for private FL. Fluent improves on state-of-the-art solutions such as
Bell et al. (CCS 2020) and Ma et al. (SP 2023) in several ways: (1) it
eliminates frequent handshakes and secret sharing operations by efficiently
reusing shares across multiple training iterations without leaking any private
information; (2) it accomplishes both the consistency check and gradient
unmasking in one logical step, thereby saving another round of communication.
With these innovations, Fluent achieves the fewest communication rounds (i.e.,
two in the collection phase) in the malicious-server setting, in contrast to at
least three rounds in existing schemes; this significantly reduces the latency
for geographically distributed clients. (3) Fluent also introduces
Fluent-Dynamic, a variant with a participant selection algorithm and an
alternative secret sharing scheme, which facilitates dynamic client joining and
enhances system flexibility and scalability. We implemented Fluent and compared
it with existing solutions. Experimental results show that Fluent reduces the
computational cost of normal clients by at least 75% and their communication
overhead by at least 25%. Fluent also reduces the communication overhead for
the server at the expense of a marginal increase in computational cost.
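
To make the masking idea concrete, here is a minimal Python sketch of
pairwise-mask aggregation with per-round mask derivation, in the spirit of the
share reuse described in improvement (1). It is an illustrative assumption
rather than Fluent's actual protocol: the hash-based PRG, the modulus, and the
function names are all hypothetical.

```python
# Illustrative pairwise masking with per-round mask derivation.
# NOT Fluent's actual construction: derive_mask, mask_update, and the
# hash-based PRG are hypothetical stand-ins for the ideas in the abstract.
import hashlib
import numpy as np

MOD = 2**32  # updates are encoded as integers mod 2**32 so masks cancel


def derive_mask(seed: bytes, round_id: int, dim: int) -> np.ndarray:
    """Expand a pairwise seed (agreed once) into a fresh per-round mask,
    so the same seed can be reused across training iterations."""
    out = []
    counter = 0
    while len(out) < dim:
        h = hashlib.sha256(seed + round_id.to_bytes(4, "big")
                           + counter.to_bytes(4, "big")).digest()
        out.extend(int.from_bytes(h[i:i + 4], "big") for i in range(0, 32, 4))
        counter += 1
    return np.array(out[:dim], dtype=np.uint64)


def mask_update(client_id: int, update: np.ndarray,
                seeds: dict, round_id: int) -> np.ndarray:
    """Add +mask for each higher-id peer and -mask for each lower-id peer;
    in the server's sum over all clients the pairwise masks cancel."""
    masked = update.astype(np.uint64) % MOD
    for peer_id, seed in seeds.items():
        m = derive_mask(seed, round_id, update.shape[0])
        if client_id < peer_id:
            masked = (masked + m) % MOD
        else:
            masked = (masked - m) % MOD  # uint64 wrap-around stays correct mod 2**32
    return masked


# Two clients share one seed; their masks cancel in the aggregate.
seed = b"pairwise-seed-established-once"
u1 = np.array([3, 1, 4], dtype=np.uint64)
u2 = np.array([2, 7, 1], dtype=np.uint64)
m1 = mask_update(1, u1, {2: seed}, round_id=7)
m2 = mask_update(2, u2, {1: seed}, round_id=7)
assert np.array_equal((m1 + m2) % MOD, (u1 + u2) % MOD)
```

Because each mask is derived from the round number, the pairwise seed never
needs to be re-shared, which loosely mirrors the round-saving effect the
abstract attributes to share reuse.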
Related papers
- ACCESS-FL: Agile Communication and Computation for Efficient Secure Aggregation in Stable Federated Learning Networks [26.002975401820887]
Federated Learning (FL) is a distributed learning framework designed for privacy-aware applications.
Traditional FL approaches risk exposing sensitive client data when plain model updates are transmitted to the server.
Google's Secure Aggregation (SecAgg) protocol addresses this threat by employing a double-masking technique.
We propose ACCESS-FL, a communication-and-computation-efficient secure aggregation method.
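
The double-masking technique mentioned above can be sketched compactly: each
client adds pairwise masks that cancel in the server's sum, plus a self-mask
that keeps its value hidden if peers drop out. The sketch below is a simplified
illustration with an assumed hash-based PRG; the Shamir secret sharing that
lets the server remove the remaining masks is omitted.

```python
# Simplified sketch of SecAgg-style double masking (pairwise masks plus a
# per-client self-mask); the hash-based PRG is an assumed stand-in, and the
# Shamir sharing used for unmasking and dropout recovery is omitted.
import hashlib
import numpy as np

MOD = 2**32  # values live in Z_{2^32} so masks cancel exactly


def prg(seed: bytes, dim: int) -> np.ndarray:
    """Expand a seed into dim pseudorandom values mod 2**32."""
    raw = b"".join(hashlib.sha256(seed + c.to_bytes(4, "big")).digest()
                   for c in range(dim // 8 + 1))
    vals = [int.from_bytes(raw[k:k + 4], "big") for k in range(0, 4 * dim, 4)]
    return np.array(vals, dtype=np.uint64)


def double_mask(i: int, x: np.ndarray, b_i: bytes, s_pair: dict) -> np.ndarray:
    """y_i = x_i + PRG(b_i) + sum_{j>i} PRG(s_ij) - sum_{j<i} PRG(s_ij).
    Pairwise masks cancel in the server's sum over clients; PRG(b_i) is
    later removed using shares of b_i collected from surviving clients."""
    y = (x.astype(np.uint64) + prg(b_i, x.shape[0])) % MOD
    for j, s_ij in s_pair.items():
        m = prg(s_ij, x.shape[0])
        y = (y + m) % MOD if i < j else (y - m) % MOD
    return y
```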
arXiv Detail & Related papers (2024-09-03T09:03:38Z)
- Safely Learning with Private Data: A Federated Learning Framework for Large Language Model [3.1077263218029105]
Federated learning (FL) is an ideal solution for training models with distributed private data.
Traditional frameworks like FedAvg are unsuitable for large language models (LLMs).
We propose FL-GLM, which prevents data leakage caused by both server-side and peer-client attacks.
arXiv Detail & Related papers (2024-06-21T06:43:15Z)
- Enhancing Security and Privacy in Federated Learning using Update Digests and Voting-Based Defense [23.280147155814955]
Federated Learning (FL) is a promising privacy-preserving machine learning paradigm.
Despite its potential, FL faces challenges related to the trustworthiness of both clients and servers.
We introduce a novel framework named Federated Learning with Update Digest (FLUD).
FLUD addresses the critical issues of privacy preservation and resistance to Byzantine attacks within distributed learning environments.
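
The summary does not spell out FLUD's construction; as a loose, hypothetical
illustration of the "update digest plus voting" idea, the sketch below
compresses each update into a sign digest and keeps only clients whose digests
agree with the coordinate-wise majority. The digest, voting rule, and quorum
threshold are all assumptions and may differ from the paper's design.

```python
# Hypothetical "update digest + voting" defense; FLUD's actual digest
# and voting rules may differ from this illustration.
import numpy as np


def sign_digest(update: np.ndarray) -> np.ndarray:
    """Compress a model update into a compact digest of coordinate signs."""
    return (update >= 0).astype(np.uint8)


def keep_by_majority_vote(digests: list, quorum: float = 0.5) -> list:
    """Keep clients whose digest matches the coordinate-wise majority on at
    least `quorum` of coordinates; the rest are treated as Byzantine."""
    stacked = np.stack(digests)                     # (clients, dim)
    majority = (stacked.mean(axis=0) >= 0.5).astype(np.uint8)
    agreement = (stacked == majority).mean(axis=1)  # per-client agreement rate
    return [i for i, a in enumerate(agreement) if a >= quorum]


# Demo: two similar honest updates and one sign-flipped (Byzantine) update.
rng = np.random.default_rng(0)
honest = rng.normal(size=64)
updates = [honest + rng.normal(scale=0.1, size=64),
           honest + rng.normal(scale=0.1, size=64),
           -honest]                                  # attacker flips every sign
print(keep_by_majority_vote([sign_digest(u) for u in updates]))  # -> [0, 1]
```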
arXiv Detail & Related papers (2024-05-29T06:46:10Z)
- Boosting Communication Efficiency of Federated Learning's Secure Aggregation [22.943966056320424]
Federated Learning (FL) is a decentralized machine learning approach where client devices train models locally and send them to a server.
FL is vulnerable to model inversion attacks, where the server can infer sensitive client data from trained models.
Google's Secure Aggregation (SecAgg) protocol addresses this data privacy issue by masking each client's trained model.
This poster introduces a Communication-Efficient Secure Aggregation (CESA) protocol that substantially reduces SecAgg's communication overhead.
arXiv Detail & Related papers (2024-05-02T10:00:16Z)
- FedComLoc: Communication-Efficient Distributed Training of Sparse and Quantized Models [56.21666819468249]
Federated Learning (FL) has garnered increasing attention due to its unique characteristic of allowing heterogeneous clients to process their private data locally and interact with a central server.
We introduce FedComLoc, integrating practical and effective compression into Scaffnew to further enhance communication efficiency.
arXiv Detail & Related papers (2024-03-14T22:29:59Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that trains a model without gathering the local data held by various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), in order to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- On the Design of Communication-Efficient Federated Learning for Health Monitoring [21.433739206682404]
We propose a communication-efficient federated learning (CEFL) framework that combines client clustering and transfer learning.
CEFL can save up to 98.45% in communication costs while sacrificing less than 3% in accuracy, compared to conventional FL.
arXiv Detail & Related papers (2022-11-30T12:52:23Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
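
As a rough illustration of distillation with a contrastive loss, the sketch
below computes an InfoNCE-style objective that pulls a client's representation
of each sample toward a peer's representation of the same sample, with the rest
of the batch acting as negatives. The function and its temperature parameter
are assumptions; the paper's exact objective may differ.

```python
# Illustrative InfoNCE-style contrastive distillation loss on shared
# representations; the paper's exact objective may differ.
import numpy as np


def contrastive_distillation_loss(local: np.ndarray, peer: np.ndarray,
                                  temperature: float = 0.1) -> float:
    """local, peer: (batch, dim) L2-normalized representations of the SAME
    batch from two clients. Row i of `peer` is the positive for row i of
    `local`; every other row of `peer` serves as a negative."""
    sims = local @ peer.T / temperature            # (batch, batch) similarities
    sims -= sims.max(axis=1, keepdims=True)        # stabilize the softmax
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return float(-np.diag(log_probs).mean())       # cross-entropy on positives


# Two noisy "views" of the same batch yield a low loss.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
z /= np.linalg.norm(z, axis=1, keepdims=True)
noisy = z + 0.05 * rng.normal(size=z.shape)
noisy /= np.linalg.norm(noisy, axis=1, keepdims=True)
print(contrastive_distillation_loss(z, noisy))
```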
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Low-Latency Federated Learning over Wireless Channels with Differential Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)