Understanding the Interplay between Privacy and Robustness in Federated Learning
- URL: http://arxiv.org/abs/2106.07033v1
- Date: Sun, 13 Jun 2021 16:01:35 GMT
- Title: Understanding the Interplay between Privacy and Robustness in Federated Learning
- Authors: Yaowei Han, Yang Cao, Masatoshi Yoshikawa
- Abstract summary: Federated Learning (FL) is emerging as a promising paradigm of privacy-preserving machine learning.
Recent works highlighted several privacy and robustness weaknesses in FL.
It is still not clear how LDP affects adversarial robustness in FL.
- Score: 15.673448030003788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) is emerging as a promising paradigm of
privacy-preserving machine learning, which trains an algorithm across multiple
clients without exchanging their data samples. Recent works highlighted several
privacy and robustness weaknesses in FL and addressed these concerns using
local differential privacy (LDP) and some well-studied methods used in
conventional ML, separately. However, it is still not clear how LDP affects
adversarial robustness in FL. To fill this gap, this work attempts to develop a
comprehensive understanding of the effects of LDP on adversarial robustness in
FL. Clarifying the interplay is significant since this is the first step
towards a principled design of private and robust FL systems. We certify that
local differential privacy has both positive and negative effects on
adversarial robustness using theoretical analysis and empirical verification.
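The mechanism under study, local differential privacy applied to client updates, can be illustrated with a minimal sketch. This is our own simplified rendering under assumed parameters (`clip_norm`, `epsilon`, `delta`), not the authors' exact construction: each client clips its model update and adds Gaussian noise on-device, so the server only ever sees perturbed updates.

```python
import numpy as np

def ldp_perturb_update(update, clip_norm=1.0, epsilon=2.0, delta=1e-5, rng=None):
    """Clip a client's update and add Gaussian noise locally, so the raw
    update never leaves the client (Gaussian-mechanism sketch)."""
    rng = rng if rng is not None else np.random.default_rng()
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / max(norm, 1e-12))
    # Standard Gaussian-mechanism calibration for sensitivity clip_norm.
    sigma = clip_norm * np.sqrt(2.0 * np.log(1.25 / delta)) / epsilon
    return clipped + rng.normal(0.0, sigma, size=update.shape)

def federated_average(perturbed_updates):
    # The server aggregates only the noisy updates it receives.
    return np.mean(perturbed_updates, axis=0)
```

The interplay the paper investigates arises because the same noise that hides individual contributions also changes the loss landscape an adversary attacks.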
Related papers
- Privacy Attack in Federated Learning is Not Easy: An Experimental Study [5.065947993017158]
Federated learning (FL) is an emerging distributed machine learning paradigm proposed for privacy preservation.
Recent studies have indicated that FL cannot entirely guarantee privacy protection.
It remains uncertain whether privacy attacks on FL algorithms are effective in realistic federated environments.
arXiv Detail & Related papers (2024-09-28T10:06:34Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach to capturing and ensuring the reliability of privacy protections.
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Accuracy-Privacy Trade-off in the Mitigation of Membership Inference Attack in Federated Learning [4.152322723065285]
Federated learning (FL) has emerged as a prominent method in machine learning, emphasizing privacy preservation by allowing multiple clients to collaboratively build a model while keeping their training data private.
Despite this focus on privacy, FL models are susceptible to various attacks, including membership inference attacks (MIAs).
arXiv Detail & Related papers (2024-07-26T22:44:41Z)
- Analysis of Privacy Leakage in Federated Large Language Models [18.332535398635027]
We study the privacy analysis of Federated Learning (FL) when used for training Large Language Models (LLMs).
In particular, we design two active membership inference attacks with guaranteed theoretical success rates to assess the privacy leakages of various adapted FL configurations.
Our theoretical findings are translated into practical attacks, revealing substantial privacy vulnerabilities in popular LLMs, including BERT, RoBERTa, DistilBERT, and OpenAI's GPTs.
arXiv Detail & Related papers (2024-03-02T20:25:38Z)
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing area, where the model is trained over clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- FedLAP-DP: Federated Learning by Sharing Differentially Private Loss Approximations [53.268801169075836]
We propose FedLAP-DP, a novel privacy-preserving approach for federated learning.
A formal privacy analysis demonstrates that FedLAP-DP incurs the same privacy costs as typical gradient-sharing schemes.
Our approach presents a faster convergence speed compared to typical gradient-sharing methods.
arXiv Detail & Related papers (2023-02-02T12:56:46Z)
- Federated Learning with Privacy-Preserving Ensemble Attention Distillation [63.39442596910485]
Federated Learning (FL) is a machine learning paradigm where many local nodes collaboratively train a central model while keeping the training data decentralized.
We propose a privacy-preserving FL framework leveraging unlabeled public data for one-way offline knowledge distillation.
Our technique uses decentralized and heterogeneous local data like existing FL approaches, but more importantly, it significantly reduces the risk of privacy leakage.
arXiv Detail & Related papers (2022-10-16T06:44:46Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
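The clipped DP-FedAvg idea this paper analyzes can be sketched as follows. This is a simplified rendering under assumed parameters (`clip_norm`, `noise_multiplier`), not the paper's exact algorithm: each client update is norm-clipped so that any single client's influence on the average is bounded, and the server adds Gaussian noise calibrated to that bound.

```python
import numpy as np

def clip_update(update, clip_norm):
    # Scale the update down so its L2 norm is at most clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_fedavg_round(client_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """One round of clipped FedAvg with client-level DP: clip each
    client's update, average, then add calibrated Gaussian noise."""
    rng = rng if rng is not None else np.random.default_rng()
    n = len(client_updates)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    # Noise std scales with clip_norm / n: clipping caps each client's
    # contribution, which is what makes the noise calibration possible.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / n, size=avg.shape)
    return avg + noise
```

The clipping bias the abstract mentions shows up here whenever client updates are heterogeneous: clipping shrinks large updates more than small ones, so the clipped mean can differ from the true mean.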
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Local and Central Differential Privacy for Robustness and Privacy in Federated Learning [13.115388879531967]
Federated Learning (FL) allows multiple participants to train machine learning models collaboratively by keeping their datasets local while only exchanging model updates.
This paper investigates whether and to what extent one can use differential privacy (DP) to protect both privacy and robustness in FL.
arXiv Detail & Related papers (2020-09-08T07:37:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.