Communication and Energy Efficient Wireless Federated Learning with
Intrinsic Privacy
- URL: http://arxiv.org/abs/2304.07460v2
- Date: Sat, 9 Dec 2023 02:16:13 GMT
- Title: Communication and Energy Efficient Wireless Federated Learning with
Intrinsic Privacy
- Authors: Zhenxiao Zhang and Yuanxiong Guo and Yuguang Fang and Yanmin Gong
- Abstract summary: Federated Learning (FL) is a collaborative learning framework that enables edge devices to collaboratively learn a global model while keeping raw data locally.
We propose a novel wireless FL scheme called private federated edge learning with sparsification (PFELS) to provide a client-level DP guarantee with intrinsic channel noise.
- Score: 16.305837225117603
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a collaborative learning framework that enables
edge devices to collaboratively learn a global model while keeping raw data
locally. Although FL avoids leaking direct information from local datasets,
sensitive information can still be inferred from the shared models. To address
the privacy issue in FL, differential privacy (DP) mechanisms are leveraged to
provide formal privacy guarantee. However, when deploying FL at the wireless
edge with over-the-air computation, ensuring client-level DP faces significant
challenges. In this paper, we propose a novel wireless FL scheme called private
federated edge learning with sparsification (PFELS) to provide client-level DP
guarantee with intrinsic channel noise while reducing communication and energy
overhead and improving model accuracy. The key idea of PFELS is for each device
to first compress its model update and then adaptively design the transmit
power of the compressed model update according to the wireless channel status
without any artificial noise addition. We provide a privacy analysis for PFELS
and prove the convergence of PFELS under general non-convex and non-IID
settings. Experimental results show that compared with prior work, PFELS can
improve the accuracy with the same DP guarantee and save communication and
energy costs simultaneously.
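The two ingredients described in the abstract lend themselves to a short sketch: top-k compression of the local update, then channel-aware transmit-power scaling, with the receiver's intrinsic channel noise (not artificial noise) perturbing the aggregate. The function names and the simple power rule below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def top_k_sparsify(update, k):
    """Keep the k largest-magnitude entries of the update; zero the rest."""
    sparse = np.zeros_like(update)
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse[idx] = update[idx]
    return sparse

def scale_transmit_power(update, channel_gain, p_max):
    """Illustrative power rule (assumption): scale the update so transmit
    power adapts to the channel gain without exceeding the budget p_max."""
    norm = np.linalg.norm(update) + 1e-12
    alpha = min(np.sqrt(p_max) / norm, 1.0 / channel_gain)
    return alpha * update

rng = np.random.default_rng(0)
updates = [rng.normal(size=1000) for _ in range(10)]
gains = rng.uniform(0.5, 2.0, size=10)
tx = [scale_transmit_power(top_k_sparsify(u, k=100), g, p_max=1.0)
      for u, g in zip(updates, gains)]
# Over-the-air aggregation: transmissions superpose, and the intrinsic
# receiver noise perturbs the sum -- the noise source PFELS uses for DP.
received = sum(g * x for g, x in zip(gains, tx)) + rng.normal(scale=0.1, size=1000)
```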
Related papers
- Privacy-Preserving Federated Learning with Differentially Private Hyperdimensional Computing [5.667290129954206]
Federated Learning (FL) is essential for efficient data exchange in Internet of Things (IoT) environments.
We introduce Federated HyperDimensional computing with Privacy-preserving (FedHDPrivacy).
FedHDPrivacy carefully manages the balance between privacy and performance by theoretically tracking cumulative noise from previous rounds.
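Because independent Gaussian noise variances add across rounds, tracking the accumulated variance lets a mechanism inject only the increment still needed. A tiny sketch of that bookkeeping (the actual FedHDPrivacy accounting is more involved; `incremental_noise_std` is a hypothetical helper):

```python
import numpy as np

def incremental_noise_std(target_std, accumulated_var):
    """Return the std of fresh noise needed so total accumulated variance
    reaches target_std**2, using that independent Gaussian variances add."""
    return np.sqrt(max(target_std**2 - accumulated_var, 0.0))

accumulated_var = 0.0
for rnd, target_std in enumerate([1.0, 1.2, 1.3], start=1):
    std = incremental_noise_std(target_std, accumulated_var)
    accumulated_var += std**2
    print(f"round {rnd}: inject std {std:.3f}, cumulative var {accumulated_var:.3f}")
```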
arXiv Detail & Related papers (2024-11-02T05:00:44Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
The algorithm's key properties and theoretical analyses are also presented, as sketched below.
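As a rough illustration of pairing an ADMM-style primal-dual local step with model sparsification: a generic sketch under the assumption of a consensus formulation, not the paper's actual algorithm.

```python
import numpy as np

def sparsified_primal_dual_step(x, dual, grad_fn, global_x, rho=1.0, lr=0.1, k=50):
    """One generic local primal-dual step with a top-k sparsified upload."""
    # Primal update: descend on the local loss plus augmented-Lagrangian terms.
    x = x - lr * (grad_fn(x) + dual + rho * (x - global_x))
    # Dual update pushes the local model toward consensus with the global one.
    dual = dual + rho * (x - global_x)
    # Only the k largest-magnitude coordinates of the difference are uploaded.
    diff = x - global_x
    msg = np.zeros_like(diff)
    idx = np.argpartition(np.abs(diff), -k)[-k:]
    msg[idx] = diff[idx]
    return x, dual, msg

x, dual, msg = sparsified_primal_dual_step(
    np.zeros(100), np.zeros(100), lambda w: 2 * w, np.zeros(100))
```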
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- Differentially Private Over-the-Air Federated Learning Over MIMO Fading Channels [24.534729104570417]
Federated learning (FL) enables edge devices to collaboratively train machine learning models.
While over-the-air model aggregation improves communication efficiency, uploading models to an edge server over wireless networks can pose privacy risks.
We show that FL model communication with a multiple-antenna server amplifies privacy leakage.
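The intuition behind the multi-antenna leakage result fits in a few lines: each receive antenna yields another independent noisy mixture of the same client updates, so the server (or an eavesdropper) accumulates more information about any single update. A toy numpy model, with all dimensions illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n_clients, dim, n_antennas = 5, 200, 4

updates = rng.normal(size=(n_clients, dim))
# Each receive antenna observes a different linear mixture of the same
# client updates (rows of H are per-antenna channel coefficients).
H = rng.normal(size=(n_antennas, n_clients))
received = H @ updates + rng.normal(scale=0.1, size=(n_antennas, dim))

# With several independent noisy mixtures, more can be inferred about any
# single client's update than from the one mixture a single antenna sees.
```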
arXiv Detail & Related papers (2023-06-19T14:44:34Z)
- Differentially Private Wireless Federated Learning Using Orthogonal Sequences [56.52483669820023]
We propose a privacy-preserving uplink over-the-air computation (AirComp) method, termed FLORAS.
We prove that FLORAS offers both item-level and client-level differential privacy guarantees.
A new FL convergence bound is derived which, combined with the privacy guarantees, allows for a smooth tradeoff between the achieved convergence rate and differential privacy levels.
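The orthogonal-sequence idea can be sketched with Hadamard spreading codes: transmissions superpose chip-by-chip, and despreading recovers the aggregate while the residual receiver noise supplies the privacy noise. This is only a schematic of the spreading mechanism, not FLORAS's actual construction.

```python
import numpy as np

def sylvester_hadamard(n):
    """Orthogonal +/-1 spreading sequences (n must be a power of 2)."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n_clients, dim = 4, 100
S = sylvester_hadamard(n_clients)            # rows: orthogonal sequences
rng = np.random.default_rng(2)
updates = rng.normal(size=(n_clients, dim))

# Each client spreads every coordinate over its own sequence; the chips of
# all clients superpose over the air, plus receiver noise.
received = updates.T @ S + rng.normal(scale=0.1, size=(dim, n_clients))

# Despreading: correlating with each sequence and summing recovers the
# aggregate update; the residual receiver noise acts as the DP noise.
aggregate = (received @ S.T).sum(axis=1) / n_clients
```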
arXiv Detail & Related papers (2023-06-14T06:35:10Z)
- PS-FedGAN: An Efficient Federated Learning Framework Based on Partially Shared Generative Adversarial Networks For Data Privacy [56.347786940414935]
Federated Learning (FL) has emerged as an effective learning paradigm for distributed computation.
This work proposes a novel FL framework that requires only partial GAN model sharing.
Named PS-FedGAN, this framework enhances the GAN release and training mechanism to address heterogeneous data distributions.
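Structurally, partial sharing reduces to uploading only one sub-network's weights and averaging those. A minimal sketch; which sub-network PS-FedGAN actually shares and how it trains are details of the paper, so the generator/discriminator split below is an assumption:

```python
import numpy as np

class LocalGAN:
    """Toy stand-in for a client's GAN; only one sub-network is shared."""
    def __init__(self, dim=64, seed=0):
        rng = np.random.default_rng(seed)
        self.generator = {"w": rng.normal(size=(dim, dim))}
        self.discriminator = {"w": rng.normal(size=(dim, 1))}  # stays local

    def shared_state(self):
        # Only the generator's weights leave the device (illustrative split).
        return {k: v.copy() for k, v in self.generator.items()}

    def load_shared_state(self, state):
        self.generator = {k: v.copy() for k, v in state.items()}

def aggregate(states):
    """FedAvg over the shared sub-network only."""
    return {k: np.mean([s[k] for s in states], axis=0) for k in states[0]}

clients = [LocalGAN(seed=i) for i in range(3)]
global_state = aggregate([c.shared_state() for c in clients])
for c in clients:
    c.load_shared_state(global_state)
```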
arXiv Detail & Related papers (2023-05-19T05:39:40Z)
- Federated Learning with Sparsified Model Perturbation: Improving Accuracy under Client-Level Differential Privacy [27.243322019117144]
Federated learning (FL) enables distributed clients to collaboratively learn a shared statistical model.
However, sensitive information about the training data can still be inferred from the model updates shared in FL.
Differential privacy (DP) is the state-of-the-art technique to defend against those attacks.
This paper develops a novel FL scheme named Fed-SMP that provides a client-level DP guarantee while maintaining high model accuracy.
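A compact sketch of sparsified model perturbation: keep the top-k coordinates, clip the sparse update, and add Gaussian noise only on the kept coordinates so the upload stays sparse. The constants and exact noise calibration here are illustrative, not the paper's mechanism.

```python
import numpy as np

def fed_smp_update(update, k, clip, noise_mult, rng):
    """Top-k sparsify, clip in L2 norm, then perturb the kept coordinates."""
    idx = np.argpartition(np.abs(update), -k)[-k:]
    sparse = np.zeros_like(update)
    sparse[idx] = update[idx]
    sparse *= min(1.0, clip / (np.linalg.norm(sparse) + 1e-12))
    sparse[idx] += rng.normal(scale=noise_mult * clip, size=k)
    return sparse

rng = np.random.default_rng(5)
noisy_sparse = fed_smp_update(rng.normal(size=1000), k=100, clip=1.0,
                              noise_mult=0.5, rng=rng)
```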
arXiv Detail & Related papers (2022-02-15T04:05:42Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide a convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
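The server-side step being analyzed is essentially clipped FedAvg with Gaussian noise; a minimal version (the clip/n noise scaling shown is the standard recipe, simplified from the paper's analysis):

```python
import numpy as np

def dp_fedavg_round(updates, clip, noise_mult, rng):
    """Clip each client update in L2 norm, average, then add Gaussian noise
    scaled to clip / n for a client-level DP guarantee."""
    n = len(updates)
    clipped = [u * min(1.0, clip / (np.linalg.norm(u) + 1e-12)) for u in updates]
    return np.mean(clipped, axis=0) + rng.normal(scale=noise_mult * clip / n,
                                                 size=updates[0].shape)

rng = np.random.default_rng(6)
new_delta = dp_fedavg_round([rng.normal(size=500) for _ in range(20)],
                            clip=1.0, noise_mult=1.0, rng=rng)
```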
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Federated Learning with Sparsification-Amplified Privacy and Adaptive Optimization [27.243322019117144]
Federated learning (FL) enables distributed agents to collaboratively learn a centralized model without sharing their raw data with each other.
We propose a new FL framework with sparsification-amplified privacy.
Our approach integrates random sparsification with gradient perturbation on each agent to amplify privacy guarantee.
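The amplification idea: randomly dropping coordinates before perturbation lowers the update's effective sensitivity, so less noise buys the same guarantee. A hedged sketch with illustrative constants (the paper's calibration and analysis are more precise):

```python
import numpy as np

def sparsify_then_perturb(grad, keep_prob, clip, sigma, rng):
    """Randomly drop coordinates, clip what remains, then add Gaussian
    noise on the kept coordinates; the random mask is what amplifies
    the privacy guarantee."""
    mask = rng.random(grad.shape) < keep_prob
    sparse = np.where(mask, grad, 0.0)
    sparse *= min(1.0, clip / (np.linalg.norm(sparse) + 1e-12))
    return sparse + mask * rng.normal(scale=sigma * clip, size=grad.shape)

rng = np.random.default_rng(7)
private_grad = sparsify_then_perturb(rng.normal(size=1000), keep_prob=0.1,
                                     clip=1.0, sigma=1.0, rng=rng)
```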
arXiv Detail & Related papers (2020-08-01T20:22:57Z)
- Harnessing Wireless Channels for Scalable and Privacy-Preserving Federated Learning [56.94644428312295]
Wireless connectivity is instrumental in enabling federated learning (FL).
Channel randomness perturbs each worker's model update, while the simultaneous updates of multiple workers incur significant interference under limited bandwidth.
In A-FADMM, all workers upload their model updates to the parameter server using a single channel via analog transmissions.
This not only saves communication bandwidth, but also hides each worker's exact model update trajectory from any eavesdropper.
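The privacy intuition of the analog uplink is easy to simulate: with all workers transmitting at once on one channel, an eavesdropper observes only the faded superposition, never an individual trajectory. A toy model (the fading distribution and scales are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n_workers, dim = 8, 100
updates = rng.normal(size=(n_workers, dim))

# All workers transmit analog signals on one channel simultaneously; the
# signals fade and superpose, so an eavesdropper sees only the noisy faded
# sum below, never any single worker's update trajectory.
fading = rng.rayleigh(scale=1.0, size=n_workers)
observed = (fading[:, None] * updates).sum(axis=0) + rng.normal(scale=0.1, size=dim)
```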
arXiv Detail & Related papers (2020-07-03T16:31:15Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
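Laplacian smoothing mitigates that accuracy loss by denoising a DP-perturbed gradient: it solves (I - sigma * Delta) x = g, which is diagonal in the Fourier basis for a periodic 1-D Laplacian. A small FFT-based sketch (the 1-D periodic stencil is an illustrative simplification):

```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    """Solve (I - sigma * Delta) x = grad via FFT, where Delta is the 1-D
    periodic discrete Laplacian; this damps high-frequency components of
    the injected DP noise while preserving the gradient's bulk signal."""
    n = grad.shape[0]
    kernel = np.zeros(n)
    kernel[0] = 1.0 + 2.0 * sigma
    kernel[1] = kernel[-1] = -sigma
    return np.real(np.fft.ifft(np.fft.fft(grad) / np.fft.fft(kernel)))

rng = np.random.default_rng(4)
noisy_grad = np.ones(128) + rng.normal(scale=0.5, size=128)  # DP-noised gradient
smoothed = laplacian_smooth(noisy_grad, sigma=2.0)
```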
arXiv Detail & Related papers (2020-05-01T04:28:38Z)