Differentially private partitioned variational inference
- URL: http://arxiv.org/abs/2209.11595v2
- Date: Tue, 18 Apr 2023 18:06:24 GMT
- Title: Differentially private partitioned variational inference
- Authors: Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E.
Turner and Antti Honkela
- Abstract summary: Learning a privacy-preserving model from sensitive data which are distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in the federated learning setting under differential privacy.
- Score: 28.96767727430277
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning a privacy-preserving model from sensitive data which are distributed
across multiple devices is an increasingly important problem. The problem is
often formulated in the federated learning context, with the aim of learning a
single global model while keeping the data distributed. Moreover, Bayesian
learning is a popular approach for modelling, since it naturally supports
reliable uncertainty estimates. However, Bayesian learning is generally
intractable even with centralised non-private data and so approximation
techniques such as variational inference are a necessity. Variational inference
has recently been extended to the non-private federated learning setting via
the partitioned variational inference algorithm. For privacy protection, the
current gold standard is differential privacy, which guarantees privacy in a
strong, mathematically well-defined sense.
In this paper, we present differentially private partitioned variational
inference, the first general framework for learning a variational approximation
to a Bayesian posterior distribution in the federated learning setting while
minimising the number of communication rounds and providing differential
privacy guarantees for data subjects.
We propose three alternative implementations within the general framework: one
based on perturbing the local optimisation runs done by individual parties, and
two based on perturbing the updates to the global model (one using a version of
federated averaging, the other adding virtual parties to the protocol). We
compare their properties both theoretically and empirically.
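As a rough illustration of the first variant (perturbing local optimisation runs), the sketch below applies the Gaussian mechanism to clipped local updates of a global variational parameter before server-side aggregation. All names, the quadratic stand-in objective, and the noise scales are illustrative assumptions, not the paper's exact algorithm.

```python
# Minimal sketch: parties clip their local updates and add Gaussian noise
# (the Gaussian mechanism) before the server aggregates them.
import numpy as np

rng = np.random.default_rng(0)

def dp_perturb(update, clip_norm, noise_std):
    """Clip an update to L2 norm `clip_norm`, then add Gaussian noise."""
    norm = max(np.linalg.norm(update), 1e-12)
    clipped = update * min(1.0, clip_norm / norm)
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def local_update(theta_global, data, lr=0.1):
    # Hypothetical local step: gradient of a quadratic stand-in for the
    # party's local contribution to the variational objective.
    return -lr * (theta_global - data.mean(axis=0))

d, n_parties, rounds = 5, 10, 20
theta = np.zeros(d)                      # global variational mean
datasets = [np.random.default_rng(i).normal(3.0, 1.0, (50, d))
            for i in range(n_parties)]

clip_norm, noise_std = 1.0, 0.1          # illustrative DP parameters

for _ in range(rounds):
    noisy = [dp_perturb(local_update(theta, X), clip_norm, noise_std)
             for X in datasets]
    theta = theta + np.mean(noisy, axis=0)   # server aggregates updates

print("global variational mean after training:", theta)
```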
Related papers
- Differentially Private Federated Learning: Servers Trustworthiness, Estimation, and Statistical Inference [18.97060758177909]
This paper investigates the challenges of high-dimensional estimation and inference under the constraints of differential privacy.
We introduce a novel federated estimation algorithm tailored for linear regression models.
We also propose methods for statistical inference, including coordinate-wise confidence intervals for individual parameters.
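The summary does not specify the estimator, but a common pattern for DP federated linear regression is for each silo to release Gaussian-noised sufficient statistics. The sketch below is a generic illustration of that pattern (assuming bounded data, with a hypothetical noise scale), not a reconstruction of the paper's method.

```python
# Each silo releases noised sufficient statistics (X^T X, X^T y); the
# server aggregates them and solves the normal equations.
import numpy as np

rng = np.random.default_rng(1)
d, n_silos = 3, 5
beta_true = np.array([1.0, -2.0, 0.5])

def noisy_stats(X, y, sigma):
    N = rng.normal(0, sigma, size=(d, d))
    XtX = X.T @ X + (N + N.T) / 2        # symmetrized Gaussian noise
    Xty = X.T @ y + rng.normal(0, sigma, size=d)
    return XtX, Xty                      # real DP would also clip rows

A, b = np.zeros((d, d)), np.zeros(d)
for _ in range(n_silos):
    X = rng.normal(size=(200, d))
    y = X @ beta_true + rng.normal(0, 0.1, size=200)
    XtX, Xty = noisy_stats(X, y, sigma=1.0)
    A += XtX
    b += Xty

print("private federated estimate:", np.linalg.solve(A, b))
```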
arXiv Detail & Related papers (2024-04-25T02:14:07Z)
- Generalizable Heterogeneous Federated Cross-Correlation and Instance Similarity Learning [60.058083574671834]
This paper presents FCCL+, a novel federated correlation and similarity learning method with non-target distillation.
To handle heterogeneity, it leverages irrelevant unlabelled public data for communication.
To mitigate catastrophic forgetting in the local updating stage, FCCL+ introduces Federated Non-Target Distillation.
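As a rough sketch of what non-target distillation can look like (illustrative only, not taken from the FCCL+ codebase), the example below computes a KL distillation loss after masking out each sample's target class:

```python
# Non-target distillation sketch: the student matches the teacher only on
# the probabilities of the *non-target* classes.
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def non_target_kl(student_logits, teacher_logits, targets):
    """KL(teacher || student) over the non-target classes only."""
    n = student_logits.shape[0]
    mask = np.ones_like(student_logits, dtype=bool)
    mask[np.arange(n), targets] = False          # drop each target class
    s = softmax(student_logits[mask].reshape(n, -1))
    t = softmax(teacher_logits[mask].reshape(n, -1))
    return np.mean(np.sum(t * (np.log(t) - np.log(s)), axis=1))

rng = np.random.default_rng(2)
logits_s = rng.normal(size=(4, 10))
logits_t = rng.normal(size=(4, 10))
labels = rng.integers(0, 10, size=4)
print("non-target distillation loss:", non_target_kl(logits_s, logits_t, labels))
```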
arXiv Detail & Related papers (2023-09-28T09:32:27Z)
- Data Analytics with Differential Privacy [0.0]
We develop differentially private algorithms to analyze distributed and streaming data.
In the distributed model, we consider the particular problem of learning -- in a distributed fashion -- a global model of the data.
We offer one of the strongest privacy guarantees for the streaming model, user-level pan-privacy.
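A pan-private algorithm keeps even its internal state differentially private, so an adversary who inspects memory mid-stream learns little. A minimal illustration, assuming a simple noisy counter rather than the paper's actual mechanism:

```python
# Pan-private counter sketch: noise the internal state at initialisation
# and again at output time. Noise scales are illustrative.
import numpy as np

rng = np.random.default_rng(3)

class PanPrivateCounter:
    def __init__(self, eps=1.0):
        self.eps = eps
        self.state = rng.laplace(0.0, 1.0 / eps)   # noisy initial state

    def update(self, x):            # x in {0, 1}: one stream element
        self.state += x

    def output(self):
        return self.state + rng.laplace(0.0, 1.0 / self.eps)

c = PanPrivateCounter(eps=0.5)
stream = rng.integers(0, 2, size=1000)
for x in stream:
    c.update(x)
print("true count:", stream.sum(), " pan-private estimate:", c.output())
```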
arXiv Detail & Related papers (2023-07-20T17:43:29Z)
- Personalized Graph Federated Learning with Differential Privacy [6.282767337715445]
This paper presents a personalized graph federated learning (PGFL) framework in which distributedly connected servers and their respective edge devices collaboratively learn device or cluster-specific models.
We study a variant of the PGFL implementation that utilizes differential privacy, specifically zero-concentrated differential privacy, where a noise sequence perturbs model exchanges.
Our analysis shows that the algorithm ensures local differential privacy for all clients in terms of zero-concentrated differential privacy.
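For reference, the Gaussian mechanism with L2 sensitivity Δ and noise scale σ satisfies ρ-zCDP with ρ = Δ²/(2σ²). A minimal sketch of perturbing a model exchange this way (parameter values illustrative):

```python
# zCDP perturbation of a shared model: calibrate Gaussian noise so that
# rho = delta**2 / (2 * sigma**2) for L2 sensitivity `delta`.
import numpy as np

rng = np.random.default_rng(4)

def zcdp_perturb(model_params, delta, rho):
    sigma = delta / np.sqrt(2.0 * rho)
    return model_params + rng.normal(0.0, sigma, size=model_params.shape)

theta_local = rng.normal(size=20)      # parameters a device would share
shared = zcdp_perturb(theta_local, delta=1.0, rho=0.1)
print("noise std used:", 1.0 / np.sqrt(2 * 0.1))
```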
arXiv Detail & Related papers (2023-06-10T09:52:01Z)
- Personalization Improves Privacy-Accuracy Tradeoffs in Federated Optimization [57.98426940386627]
We show that coordinating local learning with private centralized learning yields a generically useful and improved tradeoff between accuracy and privacy.
We illustrate our theoretical results with experiments on synthetic and real-world datasets.
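One simple way to realise such a tradeoff, sketched below with hypothetical parameters, is to interpolate between a non-private local model and a DP-trained global model; this illustrates the general idea rather than the paper's estimator.

```python
# Personalization sketch: convex combination of a local model and a DP
# global model. alpha controls the privacy-accuracy tradeoff per client.
import numpy as np

def personalize(theta_local, theta_dp_global, alpha):
    """alpha=1 -> purely local; alpha=0 -> purely (DP) global."""
    return alpha * theta_local + (1.0 - alpha) * theta_dp_global

rng = np.random.default_rng(5)
theta_local = rng.normal(size=4)        # fit on the client's own data
theta_global = rng.normal(size=4)       # learned centrally under DP
print(personalize(theta_local, theta_global, alpha=0.3))
```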
arXiv Detail & Related papers (2022-02-10T20:44:44Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
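The sketch below computes a plain entropic-OT (Sinkhorn) cost between real and generated samples, the kind of non-adversarial objective DP-Sinkhorn builds on; the DP part (clipping and noising gradients during training) is omitted, and all values are illustrative.

```python
# Entropic optimal-transport cost between two point clouds via Sinkhorn
# iterations; a non-adversarial distance between data distributions.
import numpy as np

def sinkhorn_cost(x, y, reg=0.5, iters=200):
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)   # squared distances
    K = np.exp(-C / reg)
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                      # transport plan
    return (P * C).sum()

rng = np.random.default_rng(6)
real = rng.normal(0.0, 1.0, size=(64, 2))
fake = rng.normal(0.5, 1.0, size=(64, 2))
print("sinkhorn cost:", sinkhorn_cost(real, fake))
```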
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- Understanding Clipping for Federated Learning: Convergence and Client-Level Differential Privacy [67.4471689755097]
This paper empirically demonstrates that the clipped FedAvg can perform surprisingly well even with substantial data heterogeneity.
We provide the convergence analysis of a differentially private (DP) FedAvg algorithm and highlight the relationship between clipping bias and the distribution of the clients' updates.
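A minimal sketch of a clipped DP-FedAvg round, assuming illustrative clipping and noise parameters rather than those analysed in the paper:

```python
# Client-level DP-FedAvg sketch: clip each client's model delta to norm C,
# average, and add Gaussian noise calibrated to C.
import numpy as np

rng = np.random.default_rng(7)

def clip(v, C):
    return v * min(1.0, C / max(np.linalg.norm(v), 1e-12))

def dp_fedavg_round(global_w, client_deltas, C=1.0, noise_mult=1.0):
    clipped = [clip(d, C) for d in client_deltas]
    avg = np.mean(clipped, axis=0)
    # Noise std C * noise_mult / n on the average (illustrative calibration).
    noise = rng.normal(0.0, C * noise_mult / len(client_deltas),
                       size=global_w.shape)
    return global_w + avg + noise

w = np.zeros(10)
deltas = [rng.normal(0, 0.5, size=10) for _ in range(8)]
print(dp_fedavg_round(w, deltas))
```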
arXiv Detail & Related papers (2021-06-25T14:47:19Z)
- Graph-Homomorphic Perturbations for Private Decentralized Learning [64.26238893241322]
Local exchange of estimates allows adversaries to infer the underlying private data.
Existing schemes add perturbations chosen independently at every agent, resulting in a significant performance loss.
We propose an alternative scheme that constructs perturbations according to a particular nullspace condition, allowing them to be invisible to the network centroid.
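A toy version of the nullspace idea: if the perturbations across agents sum to zero, each agent's shared estimate is masked while the network average (centroid) is untouched. The sketch below assumes simple averaging as the aggregation rule.

```python
# Zero-sum ("nullspace") perturbations: mask individual estimates without
# perturbing the network average.
import numpy as np

rng = np.random.default_rng(8)

def zero_sum_noise(n_agents, dim, scale=1.0):
    z = rng.normal(0.0, scale, size=(n_agents, dim))
    return z - z.mean(axis=0, keepdims=True)   # project onto the nullspace
                                               # of the averaging operation

estimates = rng.normal(size=(5, 3))            # true agent estimates
shared = estimates + zero_sum_noise(5, 3)      # what each agent reveals

# The centroid is exactly preserved:
print(np.allclose(shared.mean(axis=0), estimates.mean(axis=0)))  # True
```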
arXiv Detail & Related papers (2020-10-23T10:35:35Z)
- LDP-FL: Practical Private Aggregation in Federated Learning with Local Differential Privacy [20.95527613004989]
Federated learning is a popular approach for privacy protection that collects the local gradient information instead of real data.
Previous works do not give a practical solution due to three issues; in particular, the privacy budget explodes due to the high dimensionality of the weights in deep learning models.
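For a single bounded weight, an ε-LDP mechanism in the style LDP-FL builds on reports one of two fixed values with probabilities chosen so the report is unbiased; the sketch below is a generic version of that mechanism with illustrative parameters.

```python
# eps-LDP perturbation of a weight w in [c - r, c + r]: report c + B or
# c - B, with B = r * (e^eps + 1) / (e^eps - 1), so that E[report] = w.
import numpy as np

rng = np.random.default_rng(9)

def ldp_perturb(w, c, r, eps):
    B = r * (np.exp(eps) + 1.0) / (np.exp(eps) - 1.0)
    p_high = (w - c + B) / (2.0 * B)    # prob of reporting the high value
    return c + B if rng.random() < p_high else c - B

# Unbiasedness check: the mean of many reports approaches the true weight.
w, c, r, eps = 0.3, 0.0, 1.0, 1.0
reports = [ldp_perturb(w, c, r, eps) for _ in range(20000)]
print("true:", w, " mean of LDP reports:", np.mean(reports))
```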
arXiv Detail & Related papers (2020-07-31T01:08:57Z)
- Differentially private cross-silo federated learning [16.38610531397378]
Strict privacy is of paramount importance in distributed machine learning.
In this paper we combine additively homomorphic secure summation protocols with differential privacy in the so-called cross-silo federated learning setting.
We demonstrate that our proposed solutions give prediction accuracy that is comparable to the non-distributed setting.
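A toy stand-in for the combination (pairwise additive masks in place of a real homomorphic scheme): the masks cancel in the sum and each party contributes a share of the DP noise, so only the noisy sum is ever visible.

```python
# Secure summation + DP sketch: pairwise masks cancel in the aggregate;
# each of n parties adds N(0, sigma^2/n) noise so the sum carries
# N(0, sigma^2) noise in total.
import numpy as np

rng = np.random.default_rng(10)
n, d = 4, 3
values = rng.normal(size=(n, d))               # each party's private vector
sigma_total = 1.0                              # target noise std on the sum

masks = np.zeros((n, d))
for i in range(n):
    for j in range(i + 1, n):
        m = rng.normal(size=d)
        masks[i] += m                          # party i adds +m
        masks[j] -= m                          # party j adds -m

shares = values + masks + rng.normal(0.0, sigma_total / np.sqrt(n), size=(n, d))
print("noisy secure sum:", shares.sum(axis=0))
print("true sum:        ", values.sum(axis=0))
```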
arXiv Detail & Related papers (2020-07-10T18:15:10Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
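Laplacian smoothing itself is a standard denoising operator: solve (I - sigma * Delta) g_s = g, which for a periodic 1-D Laplacian can be done via FFT. A minimal sketch (noise and smoothing parameters illustrative):

```python
# Laplacian smoothing of a DP-noised gradient via FFT: the circulant 1-D
# Laplacian makes (I - sigma * Delta) diagonal in the Fourier basis.
import numpy as np

def laplacian_smooth(g, sigma=1.0):
    d = len(g)
    lap = np.zeros(d)
    lap[0], lap[1], lap[-1] = -2.0, 1.0, 1.0    # periodic 1-D Laplacian stencil
    denom = 1.0 - sigma * np.fft.fft(lap)       # eigenvalues of I - sigma*Delta
    return np.real(np.fft.ifft(np.fft.fft(g) / denom))

rng = np.random.default_rng(11)
clean = np.sin(np.linspace(0, 4 * np.pi, 128))  # stand-in "true" gradient
noisy = clean + rng.normal(0, 0.3, size=128)    # after DP noise
smoothed = laplacian_smooth(noisy, sigma=3.0)
print("noisy  MSE:", np.mean((noisy - clean) ** 2))
print("smooth MSE:", np.mean((smoothed - clean) ** 2))
```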
arXiv Detail & Related papers (2020-05-01T04:28:38Z)