Joint Privacy Enhancement and Quantization in Federated Learning
- URL: http://arxiv.org/abs/2208.10888v1
- Date: Tue, 23 Aug 2022 11:42:58 GMT
- Title: Joint Privacy Enhancement and Quantization in Federated Learning
- Authors: Natalie Lang, Elad Sofer, Tomer Shaked, and Nir Shlezinger
- Abstract summary: Federated learning (FL) is an emerging paradigm for training machine learning models using possibly private data available at edge devices.
We propose a method coined joint privacy enhancement and quantization (JoPEQ).
We show that JoPEQ simultaneously quantizes data according to a required bit-rate while holding a desired privacy level.
- Score: 23.36363480217293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging paradigm for training machine learning
models using possibly private data available at edge devices. The distributed
operation of FL gives rise to challenges that are not encountered in
centralized machine learning, including the need to preserve the privacy of the
local datasets, and the communication load due to the repeated exchange of
updated models. These challenges are often tackled individually via techniques
that induce some distortion on the updated models, e.g., local differential
privacy (LDP) mechanisms and lossy compression. In this work we propose a
method coined joint privacy enhancement and quantization (JoPEQ), which jointly
implements lossy compression and privacy enhancement in FL settings. In
particular, JoPEQ utilizes vector quantization based on a random lattice, a
universal compression technique whose byproduct distortion is statistically
equivalent to additive noise. This distortion is leveraged to enhance privacy
by augmenting the model updates with dedicated multivariate privacy preserving
noise. We show that JoPEQ simultaneously quantizes data according to a required
bit-rate while holding a desired privacy level, without notably affecting the
utility of the learned model. This is shown via analytical LDP guarantees,
distortion and convergence bounds derivation, and numerical studies. Finally,
we empirically assert that JoPEQ demolishes common attacks known to exploit
privacy leakage.
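The core idea above, that the byproduct distortion of dithered lattice quantization behaves like additive noise independent of the input, can be illustrated in one dimension. Below is a minimal sketch, using scalar subtractive dithered quantization and Laplace noise as stand-ins for the paper's multivariate lattice and its dedicated privacy-preserving noise; the step size and noise scale are arbitrary illustrative values, not those derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def dithered_quantize(x, step, rng):
    """Subtractive dithered quantization: the encoder adds a shared uniform
    dither before rounding, and the decoder subtracts it. The resulting error
    is uniform on [-step/2, step/2] and statistically independent of x."""
    u = rng.uniform(-step / 2, step / 2, size=x.shape)  # dither shared with decoder
    q = step * np.round((x + u) / step)                 # quantized representation
    return q - u                                        # decoder subtracts the dither

x = rng.normal(size=1000)   # toy model update
step = 0.25                 # quantization step (determined by the bit-rate)
x_hat = dithered_quantize(x, step, rng)
err = x_hat - x             # behaves like additive uniform noise

# JoPEQ-style idea (illustrative only): add privacy noise so that the combined
# quantization + privacy distortion meets a desired local-DP level.
privacy_noise = rng.laplace(scale=0.1, size=x.shape)
x_private = dithered_quantize(x + privacy_noise, step, rng)
```

The point of the sketch is that the quantization error is bounded by half the step size and has variance step^2/12 regardless of the input distribution, which is why it can be folded into a privacy-noise budget rather than treated as a separate nuisance.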
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep learning enabled anomaly detection has emerged as a critical direction.
The data collected in edge devices contain user privacy.
arXiv Detail & Related papers (2024-11-06T15:38:31Z)
- Privacy-Preserving Federated Learning with Differentially Private Hyperdimensional Computing [5.667290129954206]
Federated Learning (FL) is essential for efficient data exchange in Internet of Things (IoT) environments.
We introduce Federated HyperDimensional computing with Privacy-preserving (FedHDPrivacy)
FedHDPrivacy carefully manages the balance between privacy and performance by theoretically tracking cumulative noise from previous rounds.
arXiv Detail & Related papers (2024-11-02T05:00:44Z)
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
arXiv Detail & Related papers (2024-09-20T00:23:44Z)
- The Effect of Quantization in Federated Learning: A Rényi Differential Privacy Perspective [15.349042342071439]
Federated Learning (FL) is an emerging paradigm that holds great promise for privacy-preserving machine learning using distributed data.
To enhance privacy, FL can be combined with Differential Privacy (DP), which involves adding Gaussian noise to the model weights.
This research paper investigates the impact of quantization on privacy in FL systems.
arXiv Detail & Related papers (2024-05-16T13:50:46Z)
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length coding is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
- QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution [1.565361244756411]
Federated learning (FL) is a framework which allows multiple users to jointly train a global machine learning (ML) model.
One key motivation of such distributed frameworks is to provide privacy guarantees to the users.
We present a novel quantization method, utilizing a mixed geometric distribution to introduce the randomness needed to provide DP.
arXiv Detail & Related papers (2023-12-10T04:44:53Z)
- Amplitude-Varying Perturbation for Balancing Privacy and Utility in Federated Learning [86.08285033925597]
This paper presents a new DP perturbation mechanism with a time-varying noise amplitude to protect the privacy of federated learning.
We derive an online refinement of the series to prevent FL from premature convergence resulting from excessive perturbation noise.
The contribution of the new DP mechanism to the convergence and accuracy of privacy-preserving FL is corroborated, compared to the state-of-the-art Gaussian noise mechanism with a persistent noise amplitude.
arXiv Detail & Related papers (2023-03-07T22:52:40Z)
- Over-the-Air Federated Learning with Privacy Protection via Correlated Additive Perturbations [57.20885629270732]
We consider privacy aspects of wireless federated learning with Over-the-Air (OtA) transmission of gradient updates from multiple users/agents to an edge server.
Traditional perturbation-based methods provide privacy protection while sacrificing the training accuracy.
In this work, we aim at minimizing privacy leakage to the adversary and the degradation of model accuracy at the edge server.
arXiv Detail & Related papers (2022-10-05T13:13:35Z)
- Compression Boosts Differentially Private Federated Learning [0.7742297876120562]
Federated learning allows distributed entities to train a common model collaboratively without sharing their own data.
It remains vulnerable to various inference and reconstruction attacks where a malicious entity can learn private information about the participants' training data from the captured gradients.
We show experimentally, using two datasets, that our privacy-preserving proposal can reduce the communication costs by up to 95% with only a negligible performance penalty compared to traditional non-private federated learning schemes.
arXiv Detail & Related papers (2020-11-10T13:11:03Z)
- UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to training learning models without requiring users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only a minimum distortion.
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
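Several entries in the list above combine quantization or compression with differentially private noise addition on model updates. A minimal sketch of the standard clip-and-noise Gaussian mechanism they build on; `clip_norm` and `noise_multiplier` are illustrative parameters, not values taken from any of the papers.

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_mechanism(update, clip_norm, noise_multiplier, rng):
    """Clip the client update to bound its L2 sensitivity, then add
    Gaussian noise calibrated to that bound (DP-SGD-style mechanism)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))  # bound sensitivity
    sigma = noise_multiplier * clip_norm                       # noise scale
    return clipped + rng.normal(0.0, sigma, size=update.shape)

update = rng.normal(size=100)  # toy client model update
private_update = gaussian_mechanism(update, clip_norm=1.0, noise_multiplier=1.0, rng=rng)
```

Clipping is what makes the privacy analysis possible: with the update's L2 norm bounded by `clip_norm`, the Gaussian noise scale translates into a concrete (epsilon, delta) guarantee, which papers such as the Rényi DP entry then track through quantization.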
This list is automatically generated from the titles and abstracts of the papers in this site.