Efficient Federated Learning with Enhanced Privacy via Lottery Ticket
Pruning in Edge Computing
- URL: http://arxiv.org/abs/2305.01387v1
- Date: Tue, 2 May 2023 13:02:09 GMT
- Title: Efficient Federated Learning with Enhanced Privacy via Lottery Ticket
Pruning in Edge Computing
- Authors: Yifan Shi, Kang Wei, Li Shen, Jun Li, Xueqian Wang, Bo Yuan, and Song
Guo
- Abstract summary: Federated learning (FL) is a collaborative learning paradigm for decentralized private data from mobile terminals (MTs).
Existing privacy-preserving methods usually adopt instance-level differential privacy (DP).
We propose Fed-LTP, an efficient and privacy-enhanced FL framework with the Lottery Ticket Hypothesis (LTH) and zero-concentrated DP (zCDP).
- Score: 19.896989498650207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a collaborative learning paradigm for
decentralized private data from mobile terminals (MTs). However, it suffers
from issues in communication cost, the limited resources of MTs, and privacy. Existing
privacy-preserving FL methods usually adopt the instance-level differential
privacy (DP), which provides a rigorous privacy guarantee but with several
bottlenecks: severe performance degradation, transmission overhead, and
resource constraints of edge devices such as MTs. To overcome these drawbacks,
we propose Fed-LTP, an efficient and privacy-enhanced FL framework with
Lottery Ticket Hypothesis (LTH) and zero-concentrated DP (zCDP). It generates a pruned global model on the
server side and conducts sparse-to-sparse training from scratch with zCDP on
the client side. On the server side, two pruning schemes are proposed: (i) the
weight-based pruning (LTH) determines the pruned global model structure; (ii)
the iterative pruning further shrinks the size of the pruned model's
parameters. Meanwhile, the performance of Fed-LTP is also boosted via model
validation based on the Laplace mechanism. On the client side, we use
sparse-to-sparse training to solve the resource-constraints issue and provide
tighter privacy analysis to reduce the privacy budget. We evaluate the
effectiveness of Fed-LTP on several real-world datasets in both independent and
identically distributed (IID) and non-IID settings. The results clearly confirm
the superiority of Fed-LTP over state-of-the-art (SOTA) methods in
communication, computation, and memory efficiencies while realizing a better
utility-privacy trade-off.
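
To make the mechanisms in the abstract concrete, the sketch below illustrates (i) magnitude-based pruning of the kind used to find lottery-ticket masks, (ii) an iterative schedule that shrinks the model over several rounds, (iii) zCDP-calibrated Gaussian noising of a clipped sparse client update, and (iv) a Laplace-mechanism release of a validation score. This is only an illustrative reconstruction from the abstract, not the authors' implementation; the function names, the pruning schedule, and all hyperparameters are assumptions.

```python
import numpy as np


def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Binary mask keeping the largest-magnitude weights; `sparsity` is the fraction pruned."""
    k = int(np.ceil(sparsity * weights.size))
    if k <= 0:
        return np.ones(weights.shape, dtype=bool)
    flat = np.abs(weights).ravel()
    threshold = np.partition(flat, k - 1)[k - 1]  # k-th smallest absolute value
    return np.abs(weights) > threshold


def iterative_prune(weights: np.ndarray, target_sparsity: float, rounds: int) -> np.ndarray:
    """Reach the target sparsity over several rounds rather than in one shot."""
    mask = np.ones(weights.shape, dtype=bool)
    for r in range(1, rounds + 1):
        # Geometric schedule toward the target (an assumption, not the paper's schedule).
        step_sparsity = 1.0 - (1.0 - target_sparsity) ** (r / rounds)
        mask &= magnitude_prune(np.where(mask, weights, 0.0), step_sparsity)
    return mask


def zcdp_noisy_update(update: np.ndarray, mask: np.ndarray,
                      clip_norm: float, rho: float,
                      rng: np.random.Generator) -> np.ndarray:
    """Clip the sparse update in L2 norm and add Gaussian noise calibrated for rho-zCDP."""
    sparse = np.where(mask, update, 0.0)
    norm = np.linalg.norm(sparse)
    if norm > clip_norm:
        sparse = sparse * (clip_norm / norm)
    # Gaussian mechanism: L2 sensitivity Delta = clip_norm with std sigma satisfies
    # (Delta**2 / (2 * sigma**2))-zCDP, hence sigma = clip_norm / sqrt(2 * rho).
    sigma = clip_norm / np.sqrt(2.0 * rho)
    noise = rng.normal(0.0, sigma, size=sparse.shape)
    # Only the unpruned coordinates are released to the server.
    return np.where(mask, sparse + noise, 0.0)


def laplace_validation_score(accuracy: float, epsilon: float, n_val: int,
                             rng: np.random.Generator) -> float:
    """Release a validation accuracy under the Laplace mechanism.

    Accuracy over n_val examples has sensitivity 1 / n_val, so scale
    1 / (n_val * epsilon) gives epsilon-DP for this query.
    """
    return float(accuracy + rng.laplace(0.0, 1.0 / (n_val * epsilon)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=(256, 128))
    mask = iterative_prune(w, target_sparsity=0.8, rounds=4)
    upd = zcdp_noisy_update(rng.normal(size=w.shape), mask, clip_norm=1.0, rho=0.05, rng=rng)
    print(mask.mean(), np.linalg.norm(upd))
```

The intended benefit of combining LTH pruning with zCDP, as the abstract's utility-privacy claim suggests, is that noise is injected and transmitted only on the surviving coordinates, so a sparser model wastes less of the privacy budget and bandwidth per round.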
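For reference, the zCDP facts that such a tighter privacy analysis typically relies on (standard results from the zCDP literature, not restated in the abstract) can be written as:

```latex
% rho-zCDP (Bun and Steinke): a mechanism M satisfies rho-zCDP if, for all
% neighboring datasets x, x' and every Renyi order alpha > 1,
\[
  D_{\alpha}\bigl(M(x)\,\|\,M(x')\bigr) \;\le\; \rho\,\alpha .
\]
% Properties behind the tighter per-round accounting:
%   composition:    rho_1-zCDP composed with rho_2-zCDP is (rho_1 + rho_2)-zCDP;
%   Gaussian mech.: L2 sensitivity Delta with noise std sigma gives (Delta^2 / (2 sigma^2))-zCDP;
%   conversion to approximate DP:
\[
  \rho\text{-zCDP} \;\Longrightarrow\;
  \bigl(\rho + 2\sqrt{\rho\,\ln(1/\delta)},\, \delta\bigr)\text{-DP}
  \qquad \text{for any } \delta > 0 .
\]
```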
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Enhancing Security and Privacy in Federated Learning using Update Digests and Voting-Based Defense [23.280147155814955]
Federated Learning (FL) is a promising privacy-preserving machine learning paradigm.
Despite its potential, FL faces challenges related to the trustworthiness of both clients and servers.
We introduce a novel framework named Federated Learning with Update Digest (FLUD).
FLUD addresses the critical issues of privacy preservation and resistance to Byzantine attacks within distributed learning environments.
arXiv Detail & Related papers (2024-05-29T06:46:10Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing [7.573516684862637]
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
This paper proposes a Dropout-Resilient Secure Federated Learning framework based on Lagrange computing.
We show that DReS-FL is resilient to client dropouts and provides privacy protection for the local datasets.
arXiv Detail & Related papers (2022-10-06T05:04:38Z)
- FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks from adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates.
arXiv Detail & Related papers (2022-08-16T19:40:28Z)
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximation of the L1-norm is also used as the sparse constraint to reduce the communication cost.
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee by the mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)