Renyi Differential Privacy of Propose-Test-Release and Applications to
Private and Robust Machine Learning
- URL: http://arxiv.org/abs/2209.07716v1
- Date: Fri, 16 Sep 2022 04:48:22 GMT
- Title: Renyi Differential Privacy of Propose-Test-Release and Applications to
Private and Robust Machine Learning
- Authors: Jiachen T. Wang, Saeed Mahloujifar, Shouda Wang, Ruoxi Jia, Prateek
Mittal
- Abstract summary: Propose-Test-Release (PTR) is a differential privacy framework that works with local sensitivity of functions, instead of their global sensitivity.
We show that PTR and our theoretical results can be used to design differentially private variants of Byzantine-robust training algorithms.
- Score: 34.091112544190125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Propose-Test-Release (PTR) is a differential privacy framework that works
with local sensitivity of functions, instead of their global sensitivity. This
framework is typically used for releasing robust statistics such as median or
trimmed mean in a differentially private manner. While PTR is a common
framework introduced over a decade ago, using it in applications such as robust
SGD where we need many adaptive robust queries is challenging. This is mainly
due to the lack of Renyi Differential Privacy (RDP) analysis, an essential
ingredient underlying the moments accountant approach for differentially
private deep learning. In this work, we generalize the standard PTR and derive
the first RDP bound for it when the target function has bounded global
sensitivity. We show that our RDP bound for PTR yields tighter DP guarantees
than the directly analyzed $(\epsilon, \delta)$-DP. We also derive the
algorithm-specific privacy amplification bound of PTR under subsampling. We
show that our bound is much tighter than the general upper bound and close to
the lower bound. Our RDP bounds enable tighter privacy loss calculation for the
composition of many adaptive runs of PTR. As an application of our analysis, we
show that PTR and our theoretical results can be used to design differentially
private variants of Byzantine-robust training algorithms that use robust
statistics for gradient aggregation. We conduct experiments on the settings of
label, feature, and gradient corruption across different datasets and
architectures. We show that the PTR-based private and robust training algorithm
significantly improves utility compared with the baseline.
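For intuition, here is a minimal Python sketch of the classic PTR recipe: propose a local-sensitivity bound, privately test how far the dataset is from one where that bound fails, and only then release the statistic with noise calibrated to the proposed bound. This is illustrative only and is not the generalized PTR or the RDP-analyzed mechanism of this paper; the helper `dist_to_unstable` and the use of the Laplace mechanism for both steps are our assumptions, and the test and release steps each consume part of the privacy budget.
```python
import numpy as np

def propose_test_release(data, f, dist_to_unstable, proposed_bound, eps, delta, rng=None):
    """Sketch of the classic Propose-Test-Release recipe (Dwork & Lei style).

    Hypothetical parameters:
      f                -- target statistic, e.g. np.median
      dist_to_unstable -- how many records of `data` must change before the
                          local sensitivity of f exceeds `proposed_bound`
      proposed_bound   -- proposed bound on local sensitivity
      eps, delta       -- per-step privacy parameters
    """
    rng = np.random.default_rng() if rng is None else rng

    # Test: privately check that the dataset is far from any "unstable" one.
    noisy_dist = dist_to_unstable(data) + rng.laplace(scale=1.0 / eps)
    if noisy_dist <= np.log(1.0 / delta) / eps:
        return None  # abort: the proposed local-sensitivity bound may not hold

    # Release: noise calibrated to the proposed (local) bound rather than the
    # global sensitivity, which is what makes PTR attractive for robust stats.
    return f(data) + rng.laplace(scale=proposed_bound / eps)
```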
Related papers
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) can measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset.
DP has become prominent in safeguarding machine learning datasets at industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
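For reference, per-instance DP fixes both the dataset $D$ and the individual $z$ and requires, for every measurable output set $S$ (standard pDP form, stated here in our own notation):
$\Pr[M(D \cup \{z\}) \in S] \le e^{\epsilon} \Pr[M(D) \in S] + \delta$ and
$\Pr[M(D) \in S] \le e^{\epsilon} \Pr[M(D \cup \{z\}) \in S] + \delta$.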
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
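As a rough illustration of the bounded-support idea (our own sketch under assumed names, not the paper's exact mechanism), one simple variant rectifies the Gaussian output onto a fixed interval:
```python
import numpy as np

def rectified_gaussian_mechanism(value, sigma, lower, upper, rng=None):
    """Hypothetical sketch of a Gaussian mechanism with bounded support:
    a standard Gaussian perturbation whose output is projected onto
    [lower, upper]. The paper studies how such modifications amplify
    data-dependent (e.g. per-instance) privacy guarantees."""
    rng = np.random.default_rng() if rng is None else rng
    noisy = value + rng.normal(scale=sigma)
    return float(np.clip(noisy, lower, upper))
```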
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Privacy Profiles for Private Selection [21.162924003105484]
We work out an easy-to-use recipe that bounds privacy profiles of ReportNoisyMax and PrivateTuning using the privacy profiles of the base algorithms they corral.
Our approach improves over all regimes of interest and leads to substantial benefits in end-to-end private learning experiments.
arXiv Detail & Related papers (2024-02-09T08:31:46Z)
- Differentially Private SGD Without Clipping Bias: An Error-Feedback Approach [62.000948039914135]
Using Differentially Private Stochastic Gradient Descent with Gradient Clipping (DPSGD-GC) to ensure Differential Privacy (DP) comes at the cost of model performance degradation.
We propose a new error-feedback (EF) DP algorithm as an alternative to DPSGD-GC.
We establish an algorithm-specific DP analysis for our proposed algorithm, providing privacy guarantees based on Rényi DP.
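For context, here is a minimal sketch of the DPSGD-GC step that error feedback is meant to de-bias (illustrative only; parameter names are ours, and this is not the proposed EF algorithm):
```python
import numpy as np

def dpsgd_gc_step(per_example_grads, clip_norm, noise_multiplier, rng=None):
    """One DP-SGD step with per-example gradient clipping, sketched:
    clip each gradient to L2 norm <= clip_norm, average, and add Gaussian
    noise scaled to the clipping threshold. The clipping step introduces
    the bias that error-feedback methods aim to remove."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(per_example_grads)
    clipped = []
    for g in per_example_grads:
        g = np.asarray(g, dtype=float)
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(scale=noise_multiplier * clip_norm / n, size=mean_grad.shape)
    return mean_grad + noise
```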
arXiv Detail & Related papers (2023-11-24T17:56:44Z)
- Generalized PTR: User-Friendly Recipes for Data-Adaptive Algorithms with Differential Privacy [16.374676154278614]
"Propose-Test-Release" (PTR) is a classic recipe for designing differentially private (DP) algorithms.
We extend PTR to a more general setting by privately testing data-dependent privacy losses rather than local sensitivity.
arXiv Detail & Related papers (2022-12-31T22:22:53Z)
- Normalized/Clipped SGD with Perturbation for Differentially Private Non-Convex Optimization [94.06564567766475]
DP-SGD and DP-NSGD mitigate the risk of large models memorizing sensitive training data.
We show that these two algorithms achieve similar best accuracy while DP-NSGD is comparatively easier to tune than DP-SGD.
arXiv Detail & Related papers (2022-06-27T03:45:02Z)
- Selective Network Linearization for Efficient Private Inference [49.937470642033155]
We propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
The results demonstrate up to $4.25\%$ more accuracy (iso-ReLU count at 50K) or $2.2\times$ less latency (iso-accuracy at 70%) than the current state of the art.
arXiv Detail & Related papers (2022-02-04T19:00:24Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML).
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
- Optimal Accounting of Differential Privacy via Characteristic Function [25.78065563380023]
We propose a unification of recent advances (Rényi DP, privacy profiles, $f$-DP, and the PLD formalism) via the characteristic function ($\phi$-function) of a certain "worst-case" privacy loss random variable.
We show that our approach allows natural adaptive composition like Rényi DP, provides exactly tight privacy accounting like PLD, and can be (often losslessly) converted to privacy profile and $f$-DP.
arXiv Detail & Related papers (2021-06-16T06:13:23Z)
- Three Variants of Differential Privacy: Lossless Conversion and Applications [13.057076084452016]
We consider three different variants of differential privacy (DP), namely approximate DP, Rényi DP (RDP), and hypothesis-test DP.
In the first part, we develop a machinery for relating approximate DP to RDP based on the joint range of two $f$-divergences.
As an application, we apply our result to the moments framework for characterizing privacy guarantees of noisy gradient descent.
arXiv Detail & Related papers (2020-08-14T18:23:50Z)