Locally Differentially Private Bayesian Inference
- URL: http://arxiv.org/abs/2110.14426v1
- Date: Wed, 27 Oct 2021 13:36:43 GMT
- Title: Locally Differentially Private Bayesian Inference
- Authors: Tejas Kulkarni, Joonas Jälkö, Samuel Kaski, Antti Honkela
- Abstract summary: Local differential privacy (LDP) has emerged as a technique of choice for privacy-preserving data collection in several scenarios where the aggregator is not trustworthy.
We provide a noise-aware probabilistic modeling framework, which allows Bayesian inference to take into account the noise added for privacy under LDP.
- Score: 23.882144188177275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, local differential privacy (LDP) has emerged as a technique
of choice for privacy-preserving data collection in several scenarios where the
aggregator is not trustworthy. LDP provides client-side privacy by adding noise
at the user's end. Thus, clients need not rely on the trustworthiness of the
aggregator.
In this work, we provide a noise-aware probabilistic modeling framework,
which allows Bayesian inference to take into account the noise added for
privacy under LDP, conditioned on locally perturbed observations. Stronger
privacy protection (compared to the central model) provided by LDP protocols
comes at a much harsher privacy-utility trade-off. Our framework tackles
several computational and statistical challenges posed by LDP for accurate
uncertainty quantification under Bayesian settings. We demonstrate the efficacy
of our framework in parameter estimation for univariate and multivariate
distributions as well as logistic and linear regression.
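To make the noise-aware idea concrete, here is a minimal sketch (not the paper's implementation) for the simplest LDP setting: each client privatizes one bit with randomized response, and the server computes a grid posterior over the Bernoulli parameter that conditions on the perturbed reports by marginalizing over the randomizer.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Client side: randomized response under epsilon-LDP ---
eps = 1.0
p_truth = np.exp(eps) / (np.exp(eps) + 1)    # probability of reporting the true bit

x = rng.binomial(1, 0.3, size=2000)          # private bits, true theta = 0.3
keep = rng.random(x.size) < p_truth
z = np.where(keep, x, 1 - x)                 # locally perturbed reports

# --- Server side: noise-aware posterior over theta ---
# The likelihood marginalizes over the local randomizer:
#   P(z = 1 | theta) = theta * p_truth + (1 - theta) * (1 - p_truth)
theta = np.linspace(1e-4, 1 - 1e-4, 1000)
q = theta * p_truth + (1 - theta) * (1 - p_truth)

n1 = z.sum()
log_post = n1 * np.log(q) + (z.size - n1) * np.log(1 - q)   # flat prior on theta
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, theta)

print("posterior mean:", np.trapz(theta * post, theta))     # close to 0.3
```

Conditioning on q rather than on theta directly is what "noise-aware" buys: a naive posterior that ignores the randomizer would concentrate on the wrong value.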
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
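The summary does not spell out BCDP's mechanism, so the sketch below is only a generic illustration of feature-specific budgets (hypothetical, not BCDP itself): each feature is perturbed with Laplace noise calibrated to its own epsilon.

```python
import numpy as np

rng = np.random.default_rng(0)

def per_feature_laplace(x, eps_per_feature, sensitivity=1.0):
    # Hypothetical feature-specific LDP: each coordinate gets its own budget.
    scales = sensitivity / np.asarray(eps_per_feature)
    return x + rng.laplace(0.0, scales)

record = np.array([0.7, 0.2, 0.9])
eps = [0.5, 2.0, 8.0]   # more sensitive features get smaller epsilon (more noise)
print(per_feature_laplace(record, eps))
```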
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- Convergent Differential Privacy Analysis for General Federated Learning: the $f$-DP Perspective [57.35402286842029]
Federated learning (FL) is an efficient collaborative training paradigm with a focus on local privacy.
Differential privacy (DP) is a classical approach for capturing and ensuring the reliability of privacy protections.
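The $f$-DP perspective in the title refers to trade-off functions between an attacker's type-I and type-II errors; for reference, the sketch below evaluates the trade-off curve of mu-Gaussian DP (a standard f-DP identity, independent of this paper's results).

```python
from scipy.stats import norm

def gaussian_dp_tradeoff(alpha, mu):
    # G_mu(alpha): smallest achievable type-II error at type-I error alpha
    # under mu-Gaussian DP.
    return norm.cdf(norm.ppf(1 - alpha) - mu)

print(gaussian_dp_tradeoff(0.05, mu=1.0))   # larger mu => weaker privacy
```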
arXiv Detail & Related papers (2024-08-28T08:22:21Z)
- Enhanced Privacy Bound for Shuffle Model with Personalized Privacy [32.08637708405314]
The shuffle model of Differential Privacy (DP) is an enhanced privacy protocol which introduces an intermediate trusted server between local users and a central data curator.
It significantly amplifies the central DP guarantee by anonymizing and shuffling the local randomized data.
This work focuses on deriving the central privacy bound for a more practical setting where personalized local privacy is required by each user.
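A minimal sketch of that pipeline: clients randomize locally (here with randomized response under personalized budgets eps_i, which is this paper's setting; the concrete values are made up), and a trusted shuffler applies a uniform permutation before the curator sees anything.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_randomize(bit, eps):
    # epsilon-LDP randomized response on one bit.
    p = np.exp(eps) / (np.exp(eps) + 1)
    return bit if rng.random() < p else 1 - bit

bits = rng.binomial(1, 0.4, size=1000)
eps_i = rng.choice([0.5, 1.0, 4.0], size=bits.size)   # personalized budgets

reports = np.array([local_randomize(b, e) for b, e in zip(bits, eps_i)])

# The shuffler's uniform permutation breaks the user-report linkage,
# which is what amplifies the central DP guarantee.
shuffled = rng.permutation(reports)
```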
arXiv Detail & Related papers (2024-07-25T16:11:56Z)
- Noise-Aware Algorithm for Heterogeneous Differentially Private Federated Learning [21.27813247914949]
We propose Robust-HDP, which efficiently estimates the true noise level in clients' model updates.
It improves utility and convergence speed, while remaining robust to clients that may maliciously send falsified privacy parameters to the server.
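The summary does not give Robust-HDP's estimator, so the sketch below is only a hypothetical stand-in for the underlying idea: estimate each update's noise scale from the update itself (never from the client-reported privacy parameter) and weight aggregation by inverse estimated variance.

```python
import numpy as np

def noise_weighted_mean(updates):
    # updates: (n_clients, n_params). Hypothetical stand-in, not Robust-HDP.
    updates = np.asarray(updates)
    # Median absolute deviation as a robust per-client noise-scale estimate.
    med = np.median(updates, axis=1, keepdims=True)
    sigma = 1.4826 * np.median(np.abs(updates - med), axis=1)
    w = 1.0 / np.maximum(sigma, 1e-8) ** 2      # inverse-variance weights
    return (w[:, None] * updates).sum(axis=0) / w.sum()

rng = np.random.default_rng(0)
true_update = np.ones(100)
noisy = [true_update + rng.normal(0, s, 100) for s in (0.1, 0.2, 5.0)]
print(np.abs(noise_weighted_mean(noisy) - true_update).mean())
```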
arXiv Detail & Related papers (2024-06-05T17:41:42Z)
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) can measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset.
DP has been prominent in safeguarding machine-learning datasets at industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
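The game-theoretic optimization itself is not reproduced here, but the quantity it constrains is easy to illustrate: for the Laplace mechanism with scale b releasing a mean, the pDP loss of record i is |f(D) - f(D \ {i})| / b, so outliers pay more privacy (a standard pDP fact; the data below are made up).

```python
import numpy as np

def per_instance_eps(data, b):
    # pDP loss of each record under the Laplace mechanism with scale b
    # releasing the mean: |mean(D) - mean(D without record i)| / b.
    data = np.asarray(data, dtype=float)
    n = data.size
    loo_means = (data.sum() - data) / (n - 1)   # leave-one-out means
    return np.abs(data.mean() - loo_means) / b

print(per_instance_eps([0.1, 0.2, 0.25, 0.3, 0.95], b=0.05))  # outlier pays most
```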
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
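A sketch of the control flow described above; every name is illustrative rather than the paper's API, and the estimator and verifier internals (e.g. a Monte Carlo accountant) are left abstract.

```python
def estimate_verify_release(mechanism, data, eps_estimator, verifier,
                            failure_prob=1e-3):
    # 1. Estimate the mechanism's privacy parameter.
    eps_hat = eps_estimator(mechanism)
    # 2. Verify the estimate with a (randomized) privacy verifier.
    if not verifier(mechanism, eps_hat, failure_prob):
        raise RuntimeError("estimate failed verification; output withheld")
    # 3. Release the query output together with the verified guarantee.
    return mechanism(data), eps_hat
```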
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between JDP and LDP by leveraging the shuffle model of privacy while preserving local privacy.
arXiv Detail & Related papers (2021-12-11T15:23:28Z)
- Private and Utility Enhanced Recommendations with Local Differential Privacy and Gaussian Mixture Model [14.213973630742666]
Local differential privacy (LDP) based perturbation mechanisms add noise to users' data on the user's side before sending it to the Service Provider (SP).
Although LDP protects the privacy of users from the SP, it causes a substantial decline in predictive accuracy.
Our proposed LDP-based recommendation system improves the recommendation accuracy without violating LDP principles.
arXiv Detail & Related papers (2021-02-26T13:15:23Z)
- RDP-GAN: A Rényi-Differential Privacy based Generative Adversarial Network [75.81653258081435]
Generative adversarial network (GAN) has attracted increasing attention recently owing to its impressive ability to generate realistic samples with high privacy protection.
However, when GANs are applied to sensitive or private training examples, such as medical or financial records, they may still divulge individuals' sensitive and private information.
We propose a Rényi-differentially private GAN (RDP-GAN), which achieves differential privacy (DP) in a GAN by carefully adding random noise to the value of the loss function during training.
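A minimal sketch of the loss-perturbation idea (the clipping bound and noise scale are illustrative; the paper calibrates the noise through Rényi DP accounting, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)

def perturbed_loss(per_example_losses, clip=1.0, sigma=0.5):
    # Clip each example's loss so replacing one example moves the mean
    # by at most 2 * clip / n, then add Gaussian noise at that scale.
    clipped = np.clip(per_example_losses, -clip, clip)
    noise = rng.normal(0.0, sigma * 2.0 * clip / clipped.size)
    return clipped.mean() + noise
```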
arXiv Detail & Related papers (2020-07-04T09:51:02Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
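The Laplacian smoothing in the title is a gradient-denoising step; a minimal sketch of the standard operator (with periodic boundary, and sigma chosen arbitrarily) solves (I - sigma * Delta) g_s = g in the Fourier domain.

```python
import numpy as np

def laplacian_smooth(grad, sigma=1.0):
    # Solve (I - sigma * Delta) g_s = g for the 1-D discrete Laplacian
    # with periodic boundary; the matrix is circulant, so the FFT
    # diagonalizes it and the solve costs O(n log n).
    n = grad.size
    c = np.zeros(n)                  # first column of I - sigma * Delta
    c[0] = 1.0 + 2.0 * sigma
    c[1] = c[-1] = -sigma
    return np.real(np.fft.ifft(np.fft.fft(grad) / np.fft.fft(c)))
```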
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.