Trading Off Privacy, Utility and Efficiency in Federated Learning
- URL: http://arxiv.org/abs/2209.00230v1
- Date: Thu, 1 Sep 2022 05:20:04 GMT
- Title: Trading Off Privacy, Utility and Efficiency in Federated Learning
- Authors: Xiaojin Zhang, Yan Kang, Kai Chen, Lixin Fan, Qiang Yang
- Abstract summary: We formulate and quantify the trade-offs between privacy leakage, utility loss, and efficiency reduction.
We analyze the lower bounds for the privacy leakage, utility loss and efficiency reduction for several widely-adopted protection mechanisms.
- Score: 22.53326117450263
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) enables participating parties to collaboratively
build a global model with boosted utility without disclosing private data
information. Appropriate protection mechanisms have to be adopted to fulfill
the opposing requirements in preserving \textit{privacy} and maintaining high
model \textit{utility}. In addition, it is a mandate for a federated learning
system to achieve high \textit{efficiency} in order to enable large-scale model
training and deployment. We propose a unified federated learning framework that
reconciles horizontal and vertical federated learning. Based on this framework,
we formulate and quantify the trade-offs between privacy leakage, utility loss,
and efficiency reduction, which leads us to the No-Free-Lunch (NFL) theorem for
the federated learning system. NFL indicates that it is unrealistic to expect
an FL algorithm to simultaneously provide excellent privacy, utility, and
efficiency in certain scenarios. We then analyze the lower bounds for the
privacy leakage, utility loss and efficiency reduction for several
widely-adopted protection mechanisms including \textit{Randomization},
\textit{Homomorphic Encryption}, \textit{Secret Sharing} and
\textit{Compression}. Our analysis could serve as a guide for selecting
protection parameters to meet particular requirements.
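The privacy-utility tension for the \textit{Randomization} mechanism named above can be illustrated with a toy sketch (our own illustrative code, not the paper's formulation; all names are ours): adding Gaussian noise to a model update lowers privacy leakage, but the expected distortion of the update, a common proxy for utility loss, grows with the noise scale.

```python
import numpy as np

def randomize_update(update, noise_scale, rng=None):
    """Gaussian randomization: add i.i.d. N(0, noise_scale^2) noise
    to a model update before sharing it with the server."""
    rng = rng if rng is not None else np.random.default_rng(0)
    return update + rng.normal(0.0, noise_scale, size=update.shape)

# Mean squared distortion of the protected update grows with the
# noise scale: stronger protection, larger utility loss.
update = np.ones(10_000)
for scale in (0.0, 0.1, 1.0):
    protected = randomize_update(update, scale)
    distortion = float(np.mean((protected - update) ** 2))
    print(f"noise_scale={scale}: distortion ~= {distortion:.4f}")
```

With zero noise the distortion is zero (no protection, no utility loss); as the scale increases, distortion rises roughly as the square of the noise scale, which is the kind of trade-off the paper's lower bounds quantify.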
Related papers
- Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its distinctive properties and the corresponding analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z)
- UFed-GAN: A Secure Federated Learning Framework with Constrained Computation and Unlabeled Data [50.13595312140533]
We propose a novel framework of UFed-GAN: Unsupervised Federated Generative Adversarial Network, which can capture user-side data distribution without local classification training.
Our experimental results demonstrate the strong potential of UFed-GAN in addressing limited computational resources and unlabeled data while preserving privacy.
arXiv Detail & Related papers (2023-08-10T22:52:13Z)
- Binary Federated Learning with Client-Level Differential Privacy [7.854806519515342]
Federated learning (FL) is a privacy-preserving collaborative learning framework.
Existing FL systems typically adopt Federated Averaging (FedAvg) as the training algorithm.
We propose a communication-efficient FL training algorithm with differential privacy guarantee.
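The client-level differential privacy recipe behind FedAvg-style systems like the one above can be sketched as clip-then-noise aggregation (a minimal illustration of the general technique, not this paper's algorithm; all names and parameters are ours):

```python
import numpy as np

def clip_update(update, clip_norm):
    # Bound each client's influence by projecting the update into
    # an L2 ball of radius clip_norm.
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / max(norm, 1e-12))

def dp_fedavg_round(client_updates, clip_norm, noise_multiplier, rng=None):
    """One aggregation round: clip every client update, average them,
    then add Gaussian noise calibrated to the clipping norm so the
    privacy guarantee holds at the client level."""
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

updates = [np.full(4, 5.0), np.full(4, -5.0), np.full(4, 1.0)]
noisy_avg = dp_fedavg_round(updates, clip_norm=1.0, noise_multiplier=1.0)
```

Clipping bounds any single client's contribution, which is what lets the added noise be calibrated independently of the raw update magnitudes.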
arXiv Detail & Related papers (2023-08-07T06:07:04Z)
- A Meta-learning Framework for Tuning Parameters of Protection Mechanisms in Trustworthy Federated Learning [27.909662318838873]
Trustworthy Federated Learning (TFL) typically leverages protection mechanisms to guarantee privacy.
We propose a framework that formulates TFL as a problem of finding a protection mechanism to optimize the tradeoff between privacy leakage, utility loss, and efficiency reduction.
arXiv Detail & Related papers (2023-05-28T15:01:18Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for the protection mechanisms that protects privacy via distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Towards Achieving Near-optimal Utility for Privacy-Preserving Federated Learning via Data Generation and Parameter Distortion [19.691227962303515]
Federated learning (FL) enables participating parties to collaboratively build a global model with boosted utility without disclosing private data information.
Various protection mechanisms have to be adopted to fulfill the requirements in preserving \textit{privacy} and maintaining high model \textit{utility}.
arXiv Detail & Related papers (2023-05-07T14:34:15Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability, which is inseparably scalable.
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- No free lunch theorem for security and utility in federated learning [20.481170500480395]
In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.
This article illustrates a general framework that formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view.
arXiv Detail & Related papers (2022-03-11T09:48:29Z)
- Secure Bilevel Asynchronous Vertical Federated Learning with Backward Updating [159.48259714642447]
Vertical federated learning (VFL) attracts increasing attention due to the demands of multi-party collaborative modeling and concerns of privacy leakage.
We propose a novel bilevel parallel architecture (VF${\bf B}^2$), under which three new algorithms, including VF$B^2$, are proposed.
arXiv Detail & Related papers (2021-03-01T12:34:53Z)
- Large-Scale Secure XGB for Vertical Federated Learning [15.864654742542246]
In this paper, we aim to build large-scale secure XGB under vertically federated learning setting.
We employ secure multi-party computation techniques to avoid leaking intermediate information during training.
By proposing secure permutation protocols, we improve the training efficiency and make the framework scale to large datasets.
arXiv Detail & Related papers (2020-05-18T06:31:10Z)
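The secure multi-party computation ingredient used by several of the papers above can be illustrated with additive secret sharing (a toy sketch under our own choices of modulus and API, not any paper's protocol): each party splits its private value into random shares, and only the sum across all parties is ever reconstructed.

```python
import secrets

P = 2**61 - 1  # public modulus; any sufficiently large prime works

def share(x, n_parties):
    """Split x into n additive shares mod P; any n-1 of the shares
    are uniformly random and reveal nothing about x."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Secure aggregation: each party locally sums the shares it holds
# from every other party; reconstructing those partial sums yields
# only the aggregate, never any individual value.
values = [7, 11, 23]
all_shares = [share(v, 3) for v in values]
partial_sums = [sum(col) % P for col in zip(*all_shares)]
assert reconstruct(partial_sums) == sum(values) % P
```

The same share-and-sum pattern underlies the secure aggregation and secret-sharing mechanisms whose privacy-utility-efficiency bounds the main paper analyzes.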
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.