Federated Model Distillation with Noise-Free Differential Privacy
- URL: http://arxiv.org/abs/2009.05537v2
- Date: Fri, 21 May 2021 11:16:47 GMT
- Title: Federated Model Distillation with Noise-Free Differential Privacy
- Authors: Lichao Sun, Lingjuan Lyu
- Abstract summary: We propose a novel framework called FEDMD-NFDP, which integrates a Noise-Free Differential Privacy (NFDP) mechanism into a federated model distillation framework.
Our extensive experimental results on various datasets validate that FEDMD-NFDP can deliver comparable utility and communication efficiency.
- Score: 35.72801867380072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Conventional federated learning directly averages model weights, which is
only possible for collaboration between models with homogeneous architectures.
Sharing predictions instead of weights removes this obstacle and eliminates the
risk of white-box inference attacks in conventional federated learning.
However, the predictions from local models are sensitive and would leak
training-data privacy to the public. To address this issue, one naive approach
is to add differentially private random noise to the predictions, which,
however, incurs a substantial trade-off between privacy budget and model
performance. In this paper, we propose a novel framework called FEDMD-NFDP,
which integrates a Noise-Free Differential Privacy (NFDP) mechanism into a
federated model distillation framework. Our extensive experimental results on
various datasets validate that FEDMD-NFDP delivers not only comparable
utility and communication efficiency but also a noise-free differential
privacy guarantee. We also demonstrate the feasibility of FEDMD-NFDP by
considering both IID and non-IID settings, heterogeneous model architectures,
and unlabelled public datasets from a different distribution.
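To make the protocol concrete, here is a minimal Python sketch (not the authors' released code) of one FEDMD-NFDP-style round: each party trains its own model, of whatever architecture, on a random subsample of its private data (the sampling itself is what yields the noise-free DP guarantee), shares only its predictions on a shared public dataset, and then distils from the averaged consensus. The party/model interface (`fit`, `predict_proba`, `distill`), the subsample size `k`, and plain averaging as the aggregator are illustrative assumptions.

```python
# Minimal sketch of one FEDMD-NFDP-style round (illustrative, not the paper's code).
import numpy as np

def nfdp_subsample(x, y, k, rng):
    """Draw k examples without replacement; per the NFDP analysis, training only
    on this random subsample already yields an (eps, delta)-DP guarantee without
    adding explicit noise (the exact eps, delta depend on k and the dataset size)."""
    idx = rng.choice(len(x), size=k, replace=False)
    return x[idx], y[idx]

def fedmd_nfdp_round(parties, public_x, k, rng):
    # 1) Local training: each party fits its own (possibly heterogeneous) model
    #    on a random subsample of its private data.
    for p in parties:
        xs, ys = nfdp_subsample(p.private_x, p.private_y, k, rng)
        p.model.fit(xs, ys)

    # 2) Knowledge sharing: each party releases only soft predictions on the
    #    public dataset -- never weights, never raw private data.
    preds = [p.model.predict_proba(public_x) for p in parties]

    # 3) Aggregation: the server averages the predictions into a consensus.
    consensus = np.mean(preds, axis=0)

    # 4) Distillation: every party fits its model toward the consensus labels.
    for p in parties:
        p.model.distill(public_x, consensus)
    return consensus
```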
Related papers
- Noise-Aware Differentially Private Variational Inference [5.4619385369457225]
Differential privacy (DP) provides robust privacy guarantees for statistical inference, but the noise it requires can lead to unreliable results and biases in downstream applications.
We propose a novel method for noise-aware approximate Bayesian inference based on gradient variational inference.
We also propose a more accurate evaluation method for noise-aware posteriors.
arXiv Detail & Related papers (2024-10-25T08:18:49Z)
- CorBin-FL: A Differentially Private Federated Learning Mechanism using Common Randomness [6.881974834597426]
Federated learning (FL) has emerged as a promising framework for distributed machine learning.
We introduce CorBin-FL, a privacy mechanism that uses correlated binary quantization to achieve differential privacy.
We also propose AugCorBin-FL, an extension that, in addition to PLDP, provides user-level and sample-level central differential privacy guarantees.
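As a rough illustration of differentially private binary quantization, here is a simplified, uncorrelated Python sketch; the actual CorBin-FL mechanism relies on correlated randomness shared across users, which is not reproduced here, and the clipping bound, epsilon handling, and debiasing below are assumptions for illustration only.

```python
# Simplified sketch: randomized binary quantization of a model-update vector with
# a per-coordinate eps-local-DP guarantee (NOT CorBin-FL's correlated scheme).
import numpy as np

def binary_dp_quantize(update, clip, eps, rng):
    """Clip each coordinate to [-clip, clip] and quantize it to {-clip, +clip}
    with a randomized-response bias so each coordinate satisfies eps-local DP."""
    u = np.clip(update, -clip, clip)
    p_plus = 0.5 + u / (2.0 * clip)            # unbiased stochastic-sign probability
    q = np.exp(eps) / (np.exp(eps) + 1.0)      # randomized-response weight
    p_priv = q * p_plus + (1.0 - q) * (1.0 - p_plus)
    bits = rng.random(u.shape) < p_priv
    return np.where(bits, clip, -clip)

def debias_mean(quantized_updates, eps):
    """Server-side aggregation: average the quantized updates and rescale so the
    estimate of the true mean update is unbiased."""
    q = np.exp(eps) / (np.exp(eps) + 1.0)
    return np.mean(quantized_updates, axis=0) / (2.0 * q - 1.0)
```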
arXiv Detail & Related papers (2024-09-20T00:23:44Z)
- Federated Causal Discovery from Heterogeneous Data [70.31070224690399]
We propose a novel FCD method attempting to accommodate arbitrary causal models and heterogeneous data.
Our method constructs summary statistics as a proxy for the raw data to protect data privacy.
We conduct extensive experiments on synthetic and real datasets to show the efficacy of our method.
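A very rough sketch of the summary-statistics idea, as an assumption about one possible instantiation rather than the paper's actual FCD procedure (which also handles heterogeneous data and arbitrary causal models): each client shares only low-dimensional statistics such as counts, means, and covariances, and the server pools them before running a standard causal discovery routine on the pooled statistics.

```python
# Illustrative sketch: clients share summary statistics instead of raw data.
import numpy as np

def client_summary(x):
    """Per-client summary: sample size, mean, and (biased) sample covariance."""
    return x.shape[0], x.mean(axis=0), np.cov(x, rowvar=False, bias=True)

def pooled_covariance(summaries):
    """Pool client summaries into one covariance matrix (law of total variance);
    a causal discovery routine could then operate on this proxy for the raw data."""
    n_tot = sum(n for n, _, _ in summaries)
    mu_tot = sum(n * mu for n, mu, _ in summaries) / n_tot
    return sum(
        n * (cov + np.outer(mu - mu_tot, mu - mu_tot)) for n, mu, cov in summaries
    ) / n_tot
```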
arXiv Detail & Related papers (2024-02-20T18:53:53Z)
- Prototype-based Aleatoric Uncertainty Quantification for Cross-modal Retrieval [139.21955930418815]
Cross-modal Retrieval methods build similarity relations between vision and language modalities by jointly learning a common representation space.
However, the predictions are often unreliable due to aleatoric uncertainty, which is induced by low-quality data, e.g., corrupt images, fast-paced videos, and non-detailed texts.
We propose a novel Prototype-based Aleatoric Uncertainty Quantification (PAU) framework to provide trustworthy predictions by quantifying the uncertainty arising from the inherent data ambiguity.
arXiv Detail & Related papers (2023-09-29T09:41:19Z)
- Differentially private partitioned variational inference [28.96767727430277]
Learning a privacy-preserving model from sensitive data which are distributed across multiple devices is an increasingly important problem.
We present differentially private partitioned variational inference, the first general framework for learning a variational approximation to a Bayesian posterior distribution in this setting.
arXiv Detail & Related papers (2022-09-23T13:58:40Z)
- Don't Generate Me: Training Differentially Private Generative Models with Sinkhorn Divergence [73.14373832423156]
We propose DP-Sinkhorn, a novel optimal transport-based generative method for learning data distributions from private data with differential privacy.
Unlike existing approaches for training differentially private generative models, we do not rely on adversarial objectives.
arXiv Detail & Related papers (2021-11-01T18:10:21Z)
- PEARL: Data Synthesis via Private Embeddings and Adversarial Reconstruction Learning [1.8692254863855962]
We propose a new framework for data synthesis using deep generative models in a differentially private manner.
Within our framework, sensitive data are sanitized with rigorous privacy guarantees in a one-shot fashion.
Our proposal has theoretical guarantees of performance, and empirical evaluations on multiple datasets show that our approach outperforms other methods at reasonable levels of privacy.
arXiv Detail & Related papers (2021-06-08T18:00:01Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
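For reference, a minimal sketch of the two ingredients named above, per-example gradient clipping and Gaussian noise addition, as they appear in DP-SGD-style training; the clipping norm and noise multiplier are illustrative hyperparameters, not the paper's experimental settings.

```python
# Minimal sketch of a DP-SGD-style update: clip per-example gradients, then add noise.
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """per_example_grads: list of flat gradient vectors, one per example in the batch."""
    clipped = [
        g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))   # L2 clipping
        for g in per_example_grads
    ]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm / len(clipped),
                       size=mean_grad.shape)                     # Gaussian mechanism
    return mean_grad + noise
```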
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)