No free lunch theorem for security and utility in federated learning
- URL: http://arxiv.org/abs/2203.05816v1
- Date: Fri, 11 Mar 2022 09:48:29 GMT
- Title: No free lunch theorem for security and utility in federated learning
- Authors: Xiaojin Zhang, Hanlin Gu, Lixin Fan, Kai Chen, Qiang Yang
- Abstract summary: In a federated learning scenario where multiple parties jointly learn a model from their respective data, there exist two conflicting goals for the choice of appropriate algorithms.
This article illustrates a general framework that formulates the trade-off between privacy loss and utility loss from a unified information-theoretic point of view.
- Score: 20.481170500480395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In a federated learning scenario where multiple parties jointly learn a model
from their respective data, there exist two conflicting goals for the choice of
appropriate algorithms. On one hand, private and sensitive training data must
be kept as secure as possible in the presence of *semi-honest*
partners, while on the other hand, a certain amount of information has to be
exchanged among different parties for the sake of learning utility. Such a
challenge calls for a privacy-preserving federated learning solution, which
maximizes the utility of the learned model and maintains a provable privacy
guarantee of participating parties' private data.
This article illustrates a general framework that a) formulates the trade-off
between privacy loss and utility loss from a unified information-theoretic
point of view, and b) delineates quantitative bounds of privacy-utility
trade-off when different protection mechanisms including Randomization,
Sparsity, and Homomorphic Encryption are used. It is shown that, in general,
*there is no free lunch for the privacy-utility trade-off*, and one has
to trade the preservation of privacy for a certain degree of degraded utility.
The quantitative analysis illustrated in this article may serve as guidance
for the design of practical federated learning algorithms.
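As a toy illustration of the Randomization mechanism's side of this trade-off (a minimal sketch in plain Python with hypothetical function names, not the paper's formal framework), one can perturb a model update with Gaussian noise and measure the resulting distortion as a crude utility-loss proxy:

```python
import random
import statistics

def randomize_update(update, noise_scale, rng):
    """Randomization mechanism: perturb each coordinate of a model update
    with Gaussian noise before it leaves the party. A larger noise_scale
    gives stronger protection against semi-honest partners."""
    return [w + rng.gauss(0.0, noise_scale) for w in update]

def utility_loss(update, protected):
    """Mean squared distortion between the true and protected update,
    a simple proxy for the utility-loss side of the trade-off."""
    return statistics.fmean((a - b) ** 2 for a, b in zip(update, protected))

rng = random.Random(0)
update = [0.5, -1.2, 0.3, 2.0]
for scale in (0.01, 0.1, 1.0):
    protected = randomize_update(update, scale, rng)
    print(f"noise_scale={scale}: utility loss ~ {utility_loss(update, protected):.4f}")
```

Raising `noise_scale` strengthens the privacy guarantee but inflates the distortion, which is exactly the "no free lunch" tension the paper quantifies.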
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data, allowing non-sensitive temporal regions to be defined without DP, or differential privacy to be combined with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Towards Split Learning-based Privacy-Preserving Record Linkage [49.1574468325115]
Split Learning has been introduced to facilitate applications where user data privacy is a requirement.
In this paper, we investigate the potential of Split Learning for Privacy-Preserving Record Matching.
arXiv Detail & Related papers (2024-09-02T09:17:05Z)
- Federated Transfer Learning with Differential Privacy [21.50525027559563]
We formulate the notion of *federated differential privacy*, which offers privacy guarantees for each data set without assuming a trusted central server.
We show that federated differential privacy is an intermediate privacy model between the well-established local and central models of differential privacy.
arXiv Detail & Related papers (2024-03-17T21:04:48Z)
- PrivacyMind: Large Language Models Can Be Contextual Privacy Protection Learners [81.571305826793]
We introduce Contextual Privacy Protection Language Models (PrivacyMind)
Our work offers a theoretical analysis for model design and benchmarks various techniques.
In particular, instruction tuning with both positive and negative examples stands out as a promising method.
arXiv Detail & Related papers (2023-10-03T22:37:01Z)
- Independent Distribution Regularization for Private Graph Embedding [55.24441467292359]
Graph embeddings are susceptible to attribute inference attacks, which allow attackers to infer private node attributes from the learned graph embeddings.
To address these concerns, privacy-preserving graph embedding methods have emerged.
We propose a novel approach called Private Variational Graph AutoEncoders (PVGAE), which uses an independent distribution penalty as a regularization term.
arXiv Detail & Related papers (2023-08-16T13:32:43Z)
- Protecting Data from all Parties: Combining FHE and DP in Federated Learning [0.09176056742068812]
We propose a secure framework addressing an extended threat model with respect to privacy of the training data.
The proposed framework protects the privacy of the training data from all participants, namely the training data owners and an aggregating server.
By means of a novel quantization operator, we prove differential privacy guarantees in a context where the noise is quantified and bounded due to the use of homomorphic encryption.
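The quantization idea can be roughly sketched as follows (a toy fixed-point encoding with hypothetical names and parameters; real schemes calibrate the discrete noise distribution and perform the arithmetic under encryption): updates are mapped to the integer domain that homomorphic encryption schemes operate over, and bounded integer noise is added before encryption so the perturbation remains quantified and analyzable.

```python
import random

SCALE = 1000  # fixed-point scaling factor (an assumption, not from the paper)

def quantize(x, scale=SCALE):
    """Encode a real-valued update as an integer, the domain most
    homomorphic encryption schemes operate over."""
    return round(x * scale)

def noisy_quantize(x, rng, scale=SCALE, noise_bound=3):
    """Quantize, then add bounded integer noise. Because the noise is
    quantized and bounded, its privacy effect can be reasoned about
    even though the value is later encrypted."""
    return quantize(x, scale) + rng.randint(-noise_bound, noise_bound)

def dequantize(q, scale=SCALE):
    """Map an aggregated integer back to the real-valued domain."""
    return q / scale
```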
arXiv Detail & Related papers (2022-05-09T14:33:44Z)
- Distributed Machine Learning and the Semblance of Trust [66.1227776348216]
Federated Learning (FL) allows the data owner to maintain data governance and perform model training locally without having to share their data.
FL and related techniques are often described as privacy-preserving.
We explain why this term is not appropriate and outline the risks associated with over-reliance on protocols that were not designed with formal definitions of privacy in mind.
arXiv Detail & Related papers (2021-12-21T08:44:05Z)
- Communication-Computation Efficient Secure Aggregation for Federated Learning [23.924656276456503]
Federated learning is a way to train neural networks using data distributed over multiple nodes without the need for the nodes to share data.
A recent solution based on the secure aggregation primitive enabled privacy-preserving federated learning, but at the expense of significant extra communication/computational resources.
We propose a communication-computation efficient secure aggregation scheme which substantially reduces the communication and computational resources required.
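The core trick behind the secure aggregation primitive can be sketched in a few lines (a toy version with pairwise additive masks; real protocols add key agreement, a large field, and dropout handling): each pair of parties shares a random mask that one adds and the other subtracts, so individual updates look random but the masks cancel in the sum.

```python
import random

MOD = 2 ** 16  # toy modulus; real protocols use a much larger field

def pairwise_masks(n_parties, seed=0):
    """Each pair (i, j) shares a random mask: party i adds it and
    party j subtracts it, so all masks cancel in the aggregate sum."""
    rng = random.Random(seed)
    masks = [0] * n_parties
    for i in range(n_parties):
        for j in range(i + 1, n_parties):
            m = rng.randrange(MOD)
            masks[i] = (masks[i] + m) % MOD
            masks[j] = (masks[j] - m) % MOD
    return masks

updates = [3, 7, 5]  # each party's (scalar) model update
masked = [(u + m) % MOD for u, m in zip(updates, pairwise_masks(len(updates)))]
aggregate = sum(masked) % MOD
# each masked value looks random, yet aggregate == sum(updates)
```

The extra communication comes from agreeing on one mask per pair of parties, which is the overhead the paper sets out to reduce.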
arXiv Detail & Related papers (2020-12-10T03:17:50Z)
- Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also allowing it to learn non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
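The Lagrangian-dual pattern can be illustrated on a scalar stand-in for the constrained training problem (a hypothetical toy objective, not the paper's network or fairness constraint): alternate a primal descent step on the Lagrangian with a projected dual ascent step on the multiplier.

```python
def primal_dual(lr=0.05, steps=2000):
    """Minimize (x - 2)^2 subject to x <= 1 via Lagrangian duality.
    The primal variable descends on loss + lam * (x - 1); the dual
    variable lam rises while the constraint is violated."""
    x, lam = 0.0, 0.0
    for _ in range(steps):
        grad_x = 2 * (x - 2) + lam          # gradient of the Lagrangian in x
        x -= lr * grad_x
        lam = max(0.0, lam + lr * (x - 1))  # projected dual ascent
    return x, lam

x_opt, lam_opt = primal_dual()
# converges near the KKT point x = 1, lam = 2
```

In the paper's setting the primal step updates network weights under DP noise and the dual step tightens the fairness constraints; the toy keeps only the primal-dual skeleton.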
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
- SPEED: Secure, PrivatE, and Efficient Deep learning [2.283665431721732]
We introduce a deep learning framework able to deal with strong privacy constraints.
Based on collaborative learning, differential privacy and homomorphic encryption, the proposed approach advances the state of the art.
arXiv Detail & Related papers (2020-06-16T19:31:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.