Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise
- URL: http://arxiv.org/abs/2211.15893v1
- Date: Tue, 29 Nov 2022 03:20:40 GMT
- Title: Adap DP-FL: Differentially Private Federated Learning with Adaptive Noise
- Authors: Jie Fu, Zhili Chen and Xiao Han
- Abstract summary: Federated learning seeks to address the issue of isolated data islands by making clients disclose only their local training models.
Recently, differential privacy has been applied to federated learning to protect data privacy, but the added noise may significantly degrade learning performance.
We propose a differentially private scheme for federated learning with adaptive noise (Adap DP-FL).
- Score: 30.005017338416327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning seeks to address the issue of isolated data islands by
making clients disclose only their local training models. However, it was
demonstrated that private information could still be inferred by analyzing
local model parameters, such as deep neural network model weights. Recently,
differential privacy has been applied to federated learning to protect data
privacy, but the added noise may significantly degrade learning performance.
In previous work, training parameters were typically clipped equally and
noise was added uniformly, without accounting for the heterogeneity or
convergence of the training parameters. In this paper, we propose a
differentially private scheme for federated learning with adaptive noise (Adap
DP-FL). Specifically, to handle gradient heterogeneity, we conduct adaptive
gradient clipping for different clients and different rounds; to exploit
gradient convergence, we add correspondingly decreasing noise. Extensive
experiments on real-world datasets demonstrate that our Adap DP-FL outperforms
previous methods significantly.
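For intuition, the two mechanisms described in the abstract (per-client, per-round adaptive clipping and round-decaying noise) can be summarized in a short sketch. The snippet below is a minimal illustration written for this digest, not the authors' implementation: the moving-average clipping estimate, the exponential decay schedule, and all names (`adaptive_dp_step`, `base_sigma`, `decay`, `beta`) are assumptions introduced here.

```python
import numpy as np

def adaptive_dp_step(grad, clip_state, round_t,
                     base_sigma=1.0, decay=0.99, beta=0.9):
    """One client-side step: adaptively clip the gradient, then add
    Gaussian noise whose scale shrinks with the round index.
    Illustrative sketch only; all hyperparameters are assumed."""
    # Adapt the clipping bound to this client's recent gradient norms
    # (exponential moving average), so heterogeneous clients receive
    # different bounds in different rounds.
    grad_norm = np.linalg.norm(grad)
    clip_bound = beta * clip_state + (1.0 - beta) * grad_norm
    # Scale the gradient down if its norm exceeds the bound.
    clipped = grad * min(1.0, clip_bound / (grad_norm + 1e-12))
    # Decreasing noise: as gradients converge in later rounds, the
    # injected Gaussian noise is reduced accordingly.
    sigma_t = base_sigma * (decay ** round_t)
    noisy = clipped + np.random.normal(0.0, sigma_t * clip_bound,
                                       size=grad.shape)
    return noisy, clip_bound  # clip_bound becomes next round's clip_state
```

A client would carry the returned bound forward as its `clip_state` across rounds; the privacy accounting required for such a decaying noise schedule must be tracked per round, which this sketch omits.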
Related papers
- Rethinking Improved Privacy-Utility Trade-off with Pre-existing Knowledge for DP Training [31.559864332056648]
We propose a generic differential privacy framework with heterogeneous noise (DP-Hero).
Atop DP-Hero, we instantiate a heterogeneous version of DP-SGD, where the noise injected into gradient updates is heterogeneous and guided by prior-established model parameters.
We conduct comprehensive experiments to verify and explain the effectiveness of the proposed DP-Hero, showing improved training accuracy compared with state-of-the-art works.
arXiv Detail & Related papers (2024-09-05T08:40:54Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Balancing Privacy Protection and Interpretability in Federated Learning [8.759803233734624]
Federated learning (FL) aims to collaboratively train the global model in a distributed manner by sharing the model parameters from local clients to a central server.
Recent studies have illustrated that FL still suffers from information leakage as adversaries try to recover the training data by analyzing shared parameters from local clients.
We propose a simple yet effective adaptive differential privacy (ADP) mechanism that selectively adds noisy perturbations to the gradients of client models in FL.
arXiv Detail & Related papers (2023-02-16T02:58:22Z)
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients (a generic sketch of this recipe appears after this list).
While this protection is appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than that of non-private training.
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- On the Convergence and Calibration of Deep Learning with Differential Privacy [12.297499996547925]
Differentially private (DP) training preserves data privacy, usually at the cost of slower convergence.
We show that noise addition only affects the privacy risk but not the convergence or calibration.
In sharp contrast, DP models trained with large clipping norm enjoy the same privacy guarantee and similar accuracy, but are significantly more calibrated.
arXiv Detail & Related papers (2021-06-15T01:32:29Z)
- Differentially Private Deep Learning with Direct Feedback Alignment [15.410557873153833]
We propose the first differentially private method for training deep neural networks with direct feedback alignment (DFA).
DFA achieves significant gains in accuracy (often by 10-20%) compared to backprop-based differentially private training on a variety of architectures.
arXiv Detail & Related papers (2020-10-08T00:25:22Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
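As background for the DP-SGD recipe referenced in the Large Scale Transfer Learning entry above, the standard per-example procedure is: clip each example's gradient, average, and add Gaussian noise calibrated to the clipping bound. The sketch below is a generic rendering of that textbook recipe (per-example DP-SGD), not code from any paper listed here; all names and default values are illustrative.

```python
import numpy as np

def dp_sgd_update(params, per_example_grads,
                  lr=0.1, clip_norm=1.0, noise_multiplier=1.0):
    """One DP-SGD step: per-example clipping, averaging, and Gaussian
    noise calibrated to the clipping bound. Generic sketch only."""
    batch_size = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        # Rescale so each example's gradient has L2 norm <= clip_norm.
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    # Noise std for the averaged gradient: noise_multiplier * C / B.
    noise = np.random.normal(0.0, noise_multiplier * clip_norm / batch_size,
                             size=mean_grad.shape)
    return params - lr * (mean_grad + noise)
```

The per-example loop is exactly why DP-SGD is costly at scale: gradients must be materialized and clipped individually before aggregation, which is the overhead that entry highlights.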
This list is automatically generated from the titles and abstracts of the papers on this site.