Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy
- URL: http://arxiv.org/abs/2106.13673v1
- Date: Fri, 25 Jun 2021 14:47:19 GMT
- Title: Understanding Clipping for Federated Learning: Convergence and
Client-Level Differential Privacy
- Authors: Xinwei Zhang, Xiangyi Chen, Mingyi Hong, Zhiwei Steven Wu and Jinfeng
Yi
- Abstract summary: This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
- Score: 67.4471689755097
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing privacy protection has been one of the primary motivations of
Federated Learning (FL). Recently, there has been a line of work on
incorporating the formal privacy notion of differential privacy with FL. To
guarantee client-level differential privacy in FL algorithms, the clients'
transmitted model updates have to be clipped before adding privacy noise. This
clipping operation is substantially different from its counterpart, gradient
clipping in centralized differentially private SGD, and has not been
well-understood. In this paper, we first empirically demonstrate that the
clipped FedAvg can perform surprisingly well even with substantial data
heterogeneity when training neural networks, which is partly because the
clients' updates become similar for several popular deep architectures. Based
on this key observation, we provide the convergence analysis of a differentially
private (DP) FedAvg algorithm and highlight the relationship between clipping
bias and the distribution of the clients' updates. To the best of our
knowledge, this is the first work that rigorously investigates theoretical and
empirical issues regarding the clipping operation in FL algorithms.
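As a concrete illustration of the operation the abstract analyzes (clipping each client's transmitted update before adding privacy noise), here is a minimal sketch of one round of clipped FedAvg with client-level Gaussian noise. The flat NumPy parameter vectors and the names clip_norm and noise_multiplier are illustrative assumptions, not details from the paper, and the noise calibration is a sketch rather than a full DP accounting.

```python
import numpy as np

def clip_update(update, clip_norm):
    """Scale a client's model update so its L2 norm is at most clip_norm."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / (norm + 1e-12))

def dp_fedavg_round(global_params, client_updates, clip_norm,
                    noise_multiplier, rng):
    """One server round: clip each client's update, average, add Gaussian noise.

    Clipping bounds every client's contribution, so the average of m clipped
    updates has per-client L2 sensitivity on the order of clip_norm / m; the
    exact (epsilon, delta) guarantee depends on the privacy accountant used.
    """
    m = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / m, size=avg.shape)
    return global_params + avg + noise

# Toy usage: three clients and a 4-dimensional model.
rng = np.random.default_rng(0)
w = np.zeros(4)
updates = [rng.normal(size=4) for _ in range(3)]
w = dp_fedavg_round(w, updates, clip_norm=1.0, noise_multiplier=0.5, rng=rng)
```

Note that, unlike per-example gradient clipping in centralized DP-SGD, the clipping here acts on whole client updates, which is exactly the distinction the paper studies.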
Related papers
- Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning [4.152322723065285]
Federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private.
Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs).
arXiv Detail & Related papers (2024-07-26T22:44:41Z)
- Provable Privacy Advantages of Decentralized Federated Learning via Distributed Optimization [16.418338197742287]
Federated learning (FL) emerged as a paradigm designed to improve data privacy by enabling data to reside at its source.
Recent findings suggest that decentralized FL does not offer any additional empirical privacy or security benefits over centralized models.
We demonstrate that decentralized FL, when deploying distributed optimization, provides enhanced privacy protection.
arXiv Detail & Related papers (2024-07-12T15:01:09Z)
- Initialization Matters: Privacy-Utility Analysis of Overparameterized Neural Networks [72.51255282371805]
We prove a privacy bound for the KL divergence between model distributions on worst-case neighboring datasets.
We find that this KL privacy bound is largely determined by the expected squared gradient norm relative to model parameters during training.
arXiv Detail & Related papers (2023-10-31T16:13:22Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where a model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- FedPerm: Private and Robust Federated Learning by Parameter Permutation [2.406359246841227]
Federated Learning (FL) is a distributed learning paradigm that enables mutually untrusting clients to collaboratively train a common machine learning model.
Client data privacy is paramount in FL. At the same time, the model must be protected from poisoning attacks from adversarial clients.
We present FedPerm, a new FL algorithm that addresses both these problems by combining a novel intra-model parameter shuffling technique that amplifies data privacy, with Private Information Retrieval (PIR) based techniques that permit cryptographic aggregation of clients' model updates (a toy sketch of the shuffling idea follows this entry).
arXiv Detail & Related papers (2022-08-16T19:40:28Z)
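FedPerm's full protocol couples the shuffling with PIR-based cryptographic aggregation, which is beyond a short sketch. The toy code below, assuming a client-held permutation seed, only illustrates the intra-model parameter shuffling idea and is not FedPerm's actual construction.

```python
import numpy as np

def shuffle_params(flat_params, seed):
    """Permute a flattened parameter vector with a client-held seed (assumed)."""
    perm = np.random.default_rng(seed).permutation(flat_params.size)
    return flat_params[perm], perm

def unshuffle_params(shuffled, perm):
    """Invert the permutation to restore the original parameter order."""
    restored = np.empty_like(shuffled)
    restored[perm] = shuffled
    return restored

# Round trip: shuffling then unshuffling recovers the original parameters.
params = np.arange(6, dtype=float)
shuffled, perm = shuffle_params(params, seed=42)
assert np.allclose(unshuffle_params(shuffled, perm), params)
```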
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that the attacks presented in the literature are impractical in real FL use cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- Stochastic Coded Federated Learning with Convergence and Privacy Guarantees [8.2189389638822]
Federated learning (FL) has attracted much attention as a privacy-preserving distributed machine learning framework.
This paper proposes a coded federated learning framework, namely stochastic coded federated learning (SCFL), to mitigate the straggler issue.
We characterize the privacy guarantee via mutual information differential privacy (MI-DP) and analyze the convergence performance in federated learning.
arXiv Detail & Related papers (2022-01-25T04:43:29Z)
- Federated Deep Learning with Bayesian Privacy [28.99404058773532]
Federated learning (FL) aims to protect data privacy by cooperatively learning a model without sharing private data among users.
Homomorphic encryption (HE) based methods provide secure privacy protections but suffer from extremely high computational and communication overheads.
Deep learning with differential privacy (DP) has been implemented as a practical learning algorithm at a manageable cost in complexity.
arXiv Detail & Related papers (2021-09-27T12:48:40Z)
- Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism (a toy sketch of one such mechanism follows this entry).
It is shown that FedGRU achieves a prediction accuracy of 90.96%, competitive with advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
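The FedGRU entry above mentions a "secure parameter aggregation mechanism" without detailing it. The toy sketch below shows one common style of such a mechanism, pairwise additive masking in the spirit of secure aggregation protocols; it is a generic illustration, not necessarily what FedGRU uses, and a real protocol would derive the masks from pairwise key agreement rather than a shared seed.

```python
import numpy as np

def mask_updates(updates, seed=0):
    """Blind each client's update with pairwise masks that cancel in the sum.

    Client i adds mask(i, j) for j > i and subtracts mask(j, i) for j < i,
    so the server learns only the sum of the true updates.
    """
    rng = np.random.default_rng(seed)  # stand-in for pairwise key agreement
    n, dim = len(updates), updates[0].size
    pair_masks = {(i, j): rng.normal(size=dim)
                  for i in range(n) for j in range(i + 1, n)}
    masked = []
    for i, u in enumerate(updates):
        blinded = u.copy()
        for j in range(n):
            if i < j:
                blinded += pair_masks[(i, j)]
            elif j < i:
                blinded -= pair_masks[(j, i)]
        masked.append(blinded)
    return masked

updates = [np.ones(3) * k for k in range(1, 4)]
masked = mask_updates(updates)
# Individual masked updates look random, but the masks cancel in the sum.
assert np.allclose(np.sum(masked, axis=0), np.sum(updates, axis=0))
```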