Gaussian Processes with Differential Privacy
- URL: http://arxiv.org/abs/2106.00474v1
- Date: Tue, 1 Jun 2021 13:23:16 GMT
- Title: Gaussian Processes with Differential Privacy
- Authors: Antti Honkela
- Abstract summary: We add strong privacy protection to Gaussian processes (GPs) via differential privacy (DP).
We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points.
Our experiments demonstrate that given a sufficient amount of data, the method can produce accurate models under strong privacy protection.
- Score: 3.934224774675743
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gaussian processes (GPs) are non-parametric Bayesian models that are widely
used for diverse prediction tasks. Previous work in adding strong privacy
protection to GPs via differential privacy (DP) has been limited to protecting
only the privacy of the prediction targets (model outputs) but not inputs. We
break this limitation by introducing GPs with DP protection for both model
inputs and outputs. We achieve this by using sparse GP methodology and
publishing a private variational approximation on known inducing points. The
approximation covariance is adjusted to approximately account for the added
uncertainty from DP noise. The approximation can be used to compute arbitrary
predictions using standard sparse GP techniques. We propose a method for
hyperparameter learning using a private selection protocol applied to
validation set log-likelihood. Our experiments demonstrate that given a
sufficient amount of data, the method can produce accurate models under strong
privacy protection.
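To make the approach described above more concrete, here is a minimal sketch in plain NumPy. It is not the paper's implementation: it only illustrates the general idea of releasing a noisy variational approximation q(u) = N(m, S) at known, public inducing points, inflating its covariance to acknowledge the injected noise, and predicting with standard sparse-GP formulas. Every name and parameter below (rbf, dp_sparse_gp_release, sigma_dp, clip, the covariance-inflation term) is an illustrative assumption; in particular, the noise calibration and the covariance adjustment are placeholders, not the calibrations derived in the paper.

```python
# A minimal sketch (NOT the paper's implementation) of releasing a DP-noised
# sparse-GP variational approximation at known inducing points and predicting
# from it.  The noise scale `sigma_dp`, the clipping bound `clip`, and the
# covariance-inflation term are illustrative placeholders only.
import numpy as np


def rbf(a, b, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel; its values are bounded by `variance`,
    which is what keeps per-example contributions to the statistics bounded."""
    d2 = np.sum(a**2, axis=1)[:, None] + np.sum(b**2, axis=1)[None, :] - 2.0 * a @ b.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)


def dp_sparse_gp_release(X, y, Z, noise_var=0.1, clip=1.0, sigma_dp=1.0, rng=None):
    """Perturb the data-dependent statistics A = Kzx Kxz and b = Kzx y with
    Gaussian noise (a stand-in for a properly calibrated Gaussian mechanism),
    then form a Titsias-style variational posterior q(u) = N(m, S) at the
    public inducing points Z."""
    rng = np.random.default_rng() if rng is None else rng
    y = np.clip(y, -clip, clip)                      # bound each target's influence
    Kzz = rbf(Z, Z) + 1e-6 * np.eye(len(Z))
    Kzx = rbf(Z, X)
    A = Kzx @ Kzx.T + sigma_dp * rng.standard_normal((len(Z), len(Z)))
    A = 0.5 * (A + A.T)                              # keep the perturbed matrix symmetric
    b = Kzx @ y + sigma_dp * rng.standard_normal(len(Z))
    Sigma = np.linalg.inv(Kzz + A / noise_var)
    m = Kzz @ Sigma @ b / noise_var                  # variational mean
    S = Kzz @ Sigma @ Kzz                            # variational covariance
    # Crude stand-in for the paper's covariance adjustment: inflate S so the
    # released approximation reflects the extra uncertainty from DP noise.
    S = S + (sigma_dp / noise_var) * np.eye(len(Z))
    return m, S, Kzz


def predict(Xstar, Z, m, S, Kzz):
    """Standard sparse-GP prediction; only the released (Z, m, S) are needed."""
    Kzs = rbf(Z, Xstar)
    Kzz_inv = np.linalg.inv(Kzz)
    mean = Kzs.T @ Kzz_inv @ m
    cov = (rbf(Xstar, Xstar)
           - Kzs.T @ Kzz_inv @ Kzs
           + Kzs.T @ Kzz_inv @ S @ Kzz_inv @ Kzs)
    return mean, np.diag(cov)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(-3.0, 3.0, size=(500, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(500)
    Z = np.linspace(-3.0, 3.0, 20)[:, None]          # known, public inducing points
    m, S, Kzz = dp_sparse_gp_release(X, y, Z, rng=rng)
    mu, var = predict(np.array([[0.0], [1.5]]), Z, m, S, Kzz)
    print(mu, var)
```

The paper's hyperparameter learning step, a private selection protocol scored by validation-set log-likelihood, is not sketched here; an exponential-mechanism-style choice among candidate hyperparameters would be the natural analogue.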
Related papers
- Calibrating Noise for Group Privacy in Subsampled Mechanisms [24.518597984169734]
Group privacy (GP) is capable of protecting sensitive aggregate information of a group of up to m individuals.
GP is often handled as an afterthought, with most existing approaches treating it as a black box.
We propose a novel analysis framework that provides tight privacy accounting for subsampled GP mechanisms.
arXiv Detail & Related papers (2024-08-19T12:32:50Z) - Noise-Aware Differentially Private Regression via Meta-Learning [25.14514068630219]
Differential Privacy (DP) is the gold standard for protecting user privacy, but standard DP mechanisms significantly impair performance.
One approach to mitigating this issue is pre-training models on simulated data before DP learning on the private data.
In this work we go a step further, using simulated data to train a meta-learning model that combines the Convolutional Conditional Neural Process (ConvCNP) with an improved functional DP mechanism.
arXiv Detail & Related papers (2024-06-12T18:11:24Z) - Uncertainty quantification by block bootstrap for differentially private stochastic gradient descent [1.0742675209112622]
Stochastic Gradient Descent (SGD) is a widely used tool in machine learning.
Uncertainty quantification (UQ) for SGD by bootstrap has been addressed by several authors.
We propose a novel block bootstrap for SGD under local differential privacy.
arXiv Detail & Related papers (2024-05-21T07:47:21Z) - Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) can measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset.
DP has been prominent in safeguarding machine-learning datasets at industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
arXiv Detail & Related papers (2024-04-24T06:51:16Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analyses obtained under the two common types of batch sampling (shuffling versus Poisson subsampling).
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - A Generalized Shuffle Framework for Privacy Amplification: Strengthening Privacy Guarantees and Enhancing Utility [4.7712438974100255]
We show how to shuffle in the $(\epsilon_i,\delta_i)$-PLDP setting with personalized privacy parameters.
We prove that the shuffled $(\epsilon_i,\delta_i)$-PLDP process approximately preserves $\mu$-Gaussian Differential Privacy with $\mu = \sqrt{\frac{2}{\sum_{i=1}^{n}\frac{1-\delta_i}{1+e^{\epsilon_i}}-\max_i\frac{1-\delta_i}{1+e^{\epsilon_i}}}}$ (a short numeric check appears after this list).
arXiv Detail & Related papers (2023-12-22T02:31:46Z) - Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z) - Uncertainty quantification using martingales for misspecified Gaussian processes [52.22233158357913]
We address uncertainty quantification for Gaussian processes (GPs) under misspecified priors.
We construct a confidence sequence (CS) for the unknown function using martingale techniques.
Our CS is statistically valid and empirically outperforms standard GP methods.
arXiv Detail & Related papers (2020-06-12T17:58:59Z)
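Following up on the shuffled-PLDP entry in the list above: assuming the garbled formula has been reconstructed correctly, a few lines of Python are enough to evaluate the resulting $\mu$ for made-up per-user privacy parameters.

```python
# Numeric check of the shuffled-PLDP -> mu-GDP formula quoted above:
#   mu = sqrt( 2 / ( sum_i (1 - delta_i) / (1 + e^{eps_i})
#                    - max_i (1 - delta_i) / (1 + e^{eps_i}) ) ).
# The per-user (eps_i, delta_i) values below are arbitrary examples.
import math

def shuffled_gdp_mu(eps, delta):
    terms = [(1.0 - d) / (1.0 + math.exp(e)) for e, d in zip(eps, delta)]
    return math.sqrt(2.0 / (sum(terms) - max(terms)))

# 1000 users, each with local (eps=1, delta=1e-6): mu comes out small,
# i.e. a strong central guarantee after shuffling.
print(shuffled_gdp_mu(eps=[1.0] * 1000, delta=[1e-6] * 1000))
```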
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.