FeO2: Federated Learning with Opt-Out Differential Privacy
- URL: http://arxiv.org/abs/2110.15252v1
- Date: Thu, 28 Oct 2021 16:08:18 GMT
- Title: FeO2: Federated Learning with Opt-Out Differential Privacy
- Authors: Nasser Aldaghri, Hessam Mahdavifar, Ahmad Beirami
- Abstract summary: Federated learning (FL) is an emerging privacy-preserving paradigm, where a global model is trained at a central server while keeping client data local.
Differential privacy (DP) can be employed to provide privacy guarantees within FL, typically at the cost of a degraded final trained model.
We propose a new algorithm for federated learning with opt-out DP, referred to as \emph{FeO2}, along with a discussion on its advantages compared to the baselines of private and personalized FL algorithms.
- Score: 34.08435990347253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an emerging privacy-preserving paradigm, where a
global model is trained at a central server while keeping client data local.
However, FL can still indirectly leak private client information through model
updates during training. Differential privacy (DP) can be employed to provide
privacy guarantees within FL, typically at the cost of a degraded final
trained model. In this work, we consider a heterogeneous DP setup where clients are
considered private by default, but some might choose to opt out of DP. We
propose a new algorithm for federated learning with opt-out DP, referred to as
\emph{FeO2}, along with a discussion on its advantages compared to the
baselines of private and personalized FL algorithms. We prove that the
server-side and client-side procedures in \emph{FeO2} are optimal for a
simplified linear problem. We also analyze the incentive for opting out of DP
in terms of performance gain. Through numerical experiments, we show that
\emph{FeO2} provides up to $9.27\%$ performance gain in the global model
compared to the baseline DP FL for the considered datasets. Additionally, we
show a gap in the average performance of personalized models between
non-private and private clients of up to $3.49\%$, empirically illustrating an
incentive for clients to opt out.
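As a rough illustration of this heterogeneous setup, the sketch below shows one way a server could aggregate updates when opted-in (private) clients are clipped and noised while opt-out clients are not. The function, its parameters, and the uniform noise multiplier are illustrative assumptions; this is not the actual FeO2 server procedure, which the paper proves optimal for a simplified linear problem.

```python
import numpy as np

def aggregate_opt_out(updates, opted_out, clip_norm=1.0, noise_mult=1.0, rng=None):
    """Average client updates when some clients opt out of DP.

    updates:   list of 1-D numpy arrays, one model update per client
    opted_out: list of bools, True if the client opted out of DP
    Private (opted-in) clients are clipped to `clip_norm` and noised;
    opt-out clients contribute their raw updates.  Illustrative only.
    """
    rng = rng or np.random.default_rng()
    processed = []
    for u, out in zip(updates, opted_out):
        if out:
            processed.append(u)  # opt-out client: no clipping, no noise
        else:
            clipped = u * min(1.0, clip_norm / (np.linalg.norm(u) + 1e-12))
            noise = rng.normal(0.0, noise_mult * clip_norm, size=u.shape)
            processed.append(clipped + noise)
    return np.mean(processed, axis=0)
```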
Related papers
- The Power of Bias: Optimizing Client Selection in Federated Learning with Heterogeneous Differential Privacy [38.55420329607416]
Both data quality and the influence of DP noise should be taken into account when selecting clients.
Experimental results are reported on real datasets under both convex and non-convex loss functions.
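Purely as an illustration of that trade-off (the paper's actual selection rule is not given in this summary), a biased selection step might score clients on estimated data quality minus a penalty for their DP noise level:

```python
def select_clients(clients, k, noise_penalty=1.0):
    """Toy biased client selection: rank clients by estimated data
    quality minus a penalty proportional to their DP noise scale.
    The linear score is an assumption for illustration only."""
    score = lambda c: c["quality"] - noise_penalty * c["noise_scale"]
    return sorted(clients, key=score, reverse=True)[:k]
```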
arXiv Detail & Related papers (2024-08-16T10:19:27Z)
- DP-DyLoRA: Fine-Tuning Transformer-Based Models On-Device under Differentially Private Federated Learning using Dynamic Low-Rank Adaptation [15.023077875990614]
Federated learning (FL) allows clients to collaboratively train a global model without sharing their local data with a server.
Differential privacy (DP) addresses such leakage by providing formal privacy guarantees, with mechanisms that add randomness to the clients' contributions.
We propose an adaptation method that can be combined with differential privacy and call it DP-DyLoRA.
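As a minimal sketch of the general pattern behind this idea (DP-SGD-style clipping and noising applied only to low-rank adapter parameters), assuming toy NumPy factors rather than the paper's transformer setup or its dynamic-rank mechanism:

```python
import numpy as np

def private_adapter_step(A, B, grad_A, grad_B, lr=0.1, clip=1.0,
                         noise_mult=1.0, rng=None):
    """One DP update of LoRA-style factors A (d x r) and B (r x k).
    Only the adapter gradients are trained: they are jointly clipped
    to L2 norm `clip`, then Gaussian noise is added before the step.
    Illustrative sketch, not the paper's exact algorithm."""
    rng = rng or np.random.default_rng()
    g = np.concatenate([grad_A.ravel(), grad_B.ravel()])
    g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))      # clip
    g += rng.normal(0.0, noise_mult * clip, size=g.shape)  # noise
    gA = g[:grad_A.size].reshape(grad_A.shape)
    gB = g[grad_A.size:].reshape(grad_B.shape)
    return A - lr * gA, B - lr * gB
```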
arXiv Detail & Related papers (2024-05-10T10:10:37Z)
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In many real-world applications, data samples are distributed across local devices.
In this paper, we focus on a special kind of Non-I.I.D. scenario where clients own incomplete classes.
Our proposed algorithm, named MAP, simultaneously achieves the aggregation and personalization goals in FL.
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Federated Learning with Differential Privacy for End-to-End Speech Recognition [41.53948098243563]
Federated learning (FL) has emerged as a promising approach to train machine learning models.
We apply differential privacy (DP) to FL for automatic speech recognition (ASR).
We achieve user-level $(7.2, 10^{-9})$-DP (resp. $(4.5, 10^{-9})$-DP) with a 1.3% (resp. 4.6%) absolute drop in the word error rate when extrapolating to high (resp. low) population scale for FL with DP in ASR.
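For reference, the user-level guarantee quoted above instantiates the standard $(\epsilon, \delta)$-DP definition, where neighboring datasets $D, D'$ differ in one user's entire data:

```latex
% A randomized mechanism M is (epsilon, delta)-differentially private if,
% for all neighboring datasets D, D' and all measurable output sets S:
\Pr[M(D) \in S] \;\le\; e^{\epsilon}\,\Pr[M(D') \in S] + \delta
```

Here the reported values are $\epsilon = 7.2$ (resp. $4.5$) with $\delta = 10^{-9}$.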
arXiv Detail & Related papers (2023-09-29T19:11:49Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically, for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
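A minimal sketch of the combination step described above, assuming flattened weight vectors; the helper name and shapes are illustrative, not FedBasis's actual implementation:

```python
import numpy as np

def personalize(bases, coeffs):
    """Form a personalized model as a linear combination of shared
    basis models.  Only `coeffs` is client-specific, matching the
    point above that a new client learns a few coefficients, not
    full model weights."""
    return sum(c * w for c, w in zip(coeffs, bases))

# e.g., a new client with 3 shared bases learns just 3 numbers
bases = [np.random.randn(10) for _ in range(3)]
model = personalize(bases, coeffs=[0.5, 0.3, 0.2])
```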
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
- Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
Personalized federated learning (FL) enables clients to learn personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
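A toy version of that adopt-the-better-model rule, assuming a held-out-validation criterion (the paper's actual incentive mechanism is more involved):

```python
def local_choice(global_model, personal_model, val_data, evaluate):
    """Each client keeps its personalized model only when it beats
    the global model on held-out local data; otherwise it adopts
    the global model.  Illustrative decision rule only."""
    g = evaluate(global_model, val_data)
    p = evaluate(personal_model, val_data)
    return personal_model if p > g else global_model
```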
arXiv Detail & Related papers (2022-08-12T09:51:20Z)
- Large Scale Transfer Learning for Differentially Private Image Classification [51.10365553035979]
Differential Privacy (DP) provides a formal framework for training machine learning models with individual example level privacy.
Private training using DP-SGD protects against leakage by injecting noise into individual example gradients.
While this result is quite appealing, the computational cost of training large-scale models with DP-SGD is substantially higher than non-private training.
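For concreteness, a minimal NumPy sketch of the DP-SGD step described above; per-example clipping is precisely what makes it costlier than non-private training (privacy accounting and batching details omitted):

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD step: clip each example's gradient to L2 norm
    `clip`, sum, add Gaussian noise with std noise_mult * clip,
    then average and take a gradient step."""
    rng = rng or np.random.default_rng()
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    noisy = np.sum(clipped, axis=0) + rng.normal(0.0, noise_mult * clip,
                                                 size=params.shape)
    return params - lr * noisy / len(per_example_grads)
```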
arXiv Detail & Related papers (2022-05-06T01:22:20Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- PFA: Privacy-preserving Federated Adaptation for Effective Model Personalization [6.66389628571674]
Federated learning (FL) has become a prevalent distributed machine learning paradigm with improved privacy.
This paper introduces a new concept called federated adaptation, which adapts the trained model in a federated manner to achieve better personalization results.
We propose PFA, a framework to accomplish Privacy-preserving Federated Adaptation.
arXiv Detail & Related papers (2021-03-02T08:07:34Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework, BLADE-FL, that integrates blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.