Efficient Federated Learning with Enhanced Privacy via Lottery Ticket
Pruning in Edge Computing
- URL: http://arxiv.org/abs/2305.01387v1
- Date: Tue, 2 May 2023 13:02:09 GMT
- Title: Efficient Federated Learning with Enhanced Privacy via Lottery Ticket
Pruning in Edge Computing
- Authors: Yifan Shi, Kang Wei, Li Shen, Jun Li, Xueqian Wang, Bo Yuan, and Song
Guo
- Abstract summary: Federated learning (FL) is a collaborative learning paradigm for decentralized private data from mobile terminals (MTs).
Existing privacy-preserving methods usually adopt instance-level differential privacy (DP).
We propose Fed-LTP, an efficient and privacy-enhanced FL framework based on the Lottery Ticket Hypothesis (LTH) and zero-concentrated DP (zCDP).
- Score: 19.896989498650207
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a collaborative learning paradigm for
decentralized private data from mobile terminals (MTs). However, it suffers
from issues in terms of communication, the resources of MTs, and privacy. Existing
privacy-preserving FL methods usually adopt the instance-level differential
privacy (DP), which provides a rigorous privacy guarantee but with several
bottlenecks: severe performance degradation, transmission overhead, and
resource constraints of edge devices such as MTs. To overcome these drawbacks,
we propose Fed-LTP, an efficient and privacy-enhanced FL framework with
the Lottery Ticket Hypothesis (LTH) and zero-concentrated DP (zCDP). It generates a pruned global model on the
server side and conducts sparse-to-sparse training from scratch with zCDP on
the client side. On the server side, two pruning schemes are proposed: (i) the
weight-based pruning (LTH) determines the pruned global model structure; (ii)
the iterative pruning further shrinks the size of the pruned model's
parameters. Meanwhile, the performance of Fed-LTP is also boosted via model
validation based on the Laplace mechanism. On the client side, we use
sparse-to-sparse training to solve the resource-constraints issue and provide
tighter privacy analysis to reduce the privacy budget. We evaluate the
effectiveness of Fed-LTP on several real-world datasets in both independent and
identically distributed (IID) and non-IID settings. The results clearly confirm
the superiority of Fed-LTP over state-of-the-art (SOTA) methods in
communication, computation, and memory efficiencies while realizing a better
utility-privacy trade-off.
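As a rough sketch of the mechanisms the abstract names, the following illustrates lottery-ticket-style magnitude pruning with an iterative schedule, plus a client-side sparse update privatized with the Gaussian mechanism under zCDP. All function names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: magnitude (LTH-style) pruning, an iterative
# pruning schedule, and a zCDP-noised sparse-to-sparse client update.
import numpy as np

def magnitude_mask(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Keep the largest-magnitude fraction of weights (LTH-style pruning)."""
    k = max(1, int(round(weights.size * (1.0 - sparsity))))
    threshold = np.sort(np.abs(weights).ravel())[-k]
    return (np.abs(weights) >= threshold).astype(weights.dtype)

def iterative_prune(weights: np.ndarray, target_sparsity: float, rounds: int) -> np.ndarray:
    """Shrink the kept set gradually instead of pruning to the target at once."""
    mask = np.ones_like(weights)
    for r in range(1, rounds + 1):
        mask = magnitude_mask(weights * mask, target_sparsity * r / rounds)
    return mask

def client_update_zcdp(grad: np.ndarray, mask: np.ndarray,
                       clip: float = 1.0, rho: float = 0.5) -> np.ndarray:
    """Sparse-to-sparse step: clip the masked gradient, then add Gaussian noise.
    The Gaussian mechanism with sigma = clip / sqrt(2 * rho) satisfies rho-zCDP."""
    g = grad * mask                                    # only unpruned weights train
    g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # L2 norm clipping
    sigma = clip / np.sqrt(2.0 * rho)
    return g + np.random.normal(0.0, sigma, g.shape) * mask
```

One reason zCDP supports a tighter analysis than instance-level DP accounting: rho-zCDP composes additively over T rounds (T * rho in total) and converts to (epsilon, delta)-DP with epsilon = rho + 2 * sqrt(rho * ln(1/delta)).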
Related papers
- Providing Differential Privacy for Federated Learning Over Wireless: A Cross-layer Framework [19.381425127772054]
Federated Learning (FL) is a distributed machine learning framework that inherently allows edge devices to maintain their local training data.
We propose a wireless physical layer (PHY) design for OTA-FL which improves differential privacy (DP) through a decentralized, dynamic power control.
This adaptation showcases the flexibility and effectiveness of our design across different learning algorithms while maintaining a strong emphasis on privacy.
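The paper's cross-layer PHY design is not reproduced here, but the general idea of trading transmit power against privacy in over-the-air FL can be sketched generically; power, clip, and noise_std below are made-up illustrative parameters.

```python
# Generic illustration (not the paper's design): in over-the-air FL the
# receiver's channel noise acts as a Gaussian DP mechanism, and each device's
# transmit power controls how strong that mechanism is after rescaling.
import numpy as np

def ota_round(client_grads, power: float = 1.0, clip: float = 1.0,
              noise_std: float = 0.1) -> np.ndarray:
    tx = []
    for g in client_grads:
        g = g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))  # bound sensitivity
        tx.append(np.sqrt(power) * g)                         # analog transmission
    y = sum(tx) + np.random.normal(0.0, noise_std, tx[0].shape)
    # After rescaling, the effective noise std is noise_std / sqrt(power):
    # lowering power strengthens privacy at the cost of aggregation accuracy.
    return y / np.sqrt(power)
```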
arXiv Detail & Related papers (2024-12-05T18:27:09Z)
- Federated PCA and Estimation for Spiked Covariance Matrices: Optimal Rates and Efficient Algorithm [19.673557166734977]
Federated Learning (FL) has gained significant recent attention in machine learning for its enhanced privacy and data security.
This paper investigates federated PCA and estimation for spiked covariance matrices under distributed differential privacy constraints.
We establish minimax rates of convergence, with a key finding that the central server's optimal rate is the harmonic mean of the local clients' minimax rates.
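A quick numeric reading of that finding, with invented per-client rates:

```python
# Toy illustration of the key finding: the central server's optimal rate is
# the harmonic mean of the local clients' minimax rates (rates here are made up).
local_rates = [0.04, 0.02, 0.08]
central_rate = len(local_rates) / sum(1.0 / r for r in local_rates)
print(central_rate)  # ~0.0343, pulled toward the best (smallest) local rate
```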
arXiv Detail & Related papers (2024-11-23T21:57:50Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over the clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems, together with a differential privacy guarantee.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named as PS-FedGAN, this new framework enhances the GAN releasing and training mechanism to address heterogeneous data distributions.
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- DReS-FL: Dropout-Resilient Secure Federated Learning for Non-IID Clients via Secret Data Sharing [7.573516684862637]
Federated learning (FL) strives to enable collaborative training of machine learning models without centrally collecting clients' private data.
This paper proposes a Dropout-Resilient Secure Federated Learning framework based on Lagrange computing.
We show that DReS-FL is resilient to client dropouts and provides privacy protection for the local datasets.
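The Lagrange-coded core can be sketched as threshold secret sharing: any t of n shares reconstruct the secret, so up to n - t clients may drop out. The field size and parameters below are illustrative, and DReS-FL's actual encoding of datasets is richer than this primitive.

```python
# Minimal sketch of Lagrange-interpolation (Shamir-style) secret sharing, the
# primitive behind Lagrange coded computing's dropout resilience.
import random

P = 2**61 - 1  # a prime; all arithmetic is over the field GF(P)

def make_shares(secret: int, t: int, n: int):
    """Hide `secret` as f(0) of a random degree-(t-1) polynomial; share f(1..n)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    f = lambda x: sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from any t surviving shares."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = make_shares(secret=42, t=3, n=5)
assert reconstruct(shares[:3]) == 42  # any 3 of 5 shares suffice; 2 may drop out
```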
arXiv Detail & Related papers (2022-10-06T05:04:38Z)
- FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks from adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates.
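The shuffling half of the idea is easy to sketch; the PIR-based cryptographic aggregation that makes per-client permutations workable is omitted. Function names and the seed handling are illustrative assumptions.

```python
# Illustrative sketch of intra-model parameter shuffling: a client permutes its
# flattened update under a secret seed before upload; only a holder of the seed
# can invert it. FedPerm pairs this with PIR-based aggregation, not modeled here.
import numpy as np

def shuffle_update(update: np.ndarray, seed: int) -> np.ndarray:
    perm = np.random.default_rng(seed).permutation(update.size)
    return update.ravel()[perm]

def unshuffle_update(shuffled: np.ndarray, seed: int, shape) -> np.ndarray:
    perm = np.random.default_rng(seed).permutation(shuffled.size)
    out = np.empty_like(shuffled)
    out[perm] = shuffled          # place values back in their original slots
    return out.reshape(shape)

u = np.arange(6, dtype=float).reshape(2, 3)
assert (unshuffle_update(shuffle_update(u, seed=7), seed=7, shape=u.shape) == u).all()
```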
arXiv Detail & Related papers (2022-08-16T19:40:28Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a stochastic coded federated learning (SCFL) framework to mitigate the straggler issue.
We characterize the privacy guarantee by the mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
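A schematic version of the round just described, with consensus reduced to picking a random winner; the actual block-generation competition and lazy-client dynamics are not modeled.

```python
# Schematic sketch of one BLADE-FL round as summarized above: models are
# broadcast, one client wins the block-generation competition, and everyone
# aggregates the models recorded in that block before the next local round.
import random
import numpy as np

def blade_fl_round(received: dict) -> np.ndarray:
    """`received[c]` is the list of models client c collected via broadcast."""
    winner = random.choice(list(received))  # stand-in for block generation
    block = received[winner]                # winner packs the models it received
    return np.mean(block, axis=0)           # aggregate from the block (FedAvg)
```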
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL has a good performance in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
It gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noises to conceal their cheating behaviors.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)