Practical Privacy Filters and Odometers with Rényi Differential
Privacy and Applications to Differentially Private Deep Learning
- URL: http://arxiv.org/abs/2103.01379v1
- Date: Tue, 2 Mar 2021 00:37:11 GMT
- Title: Practical Privacy Filters and Odometers with Rényi Differential
Privacy and Applications to Differentially Private Deep Learning
- Authors: Mathias Lécuyer
- Abstract summary: We study DP composition under adaptive privacy budgets through the lens of Rényi Differential Privacy.
We prove a simpler composition theorem with smaller constants, making it practical enough to use in algorithm design.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential Privacy (DP) is the leading approach to privacy-preserving deep
learning. As such, there are multiple efforts to provide drop-in integration of
DP into popular frameworks. These efforts, which add noise to each gradient
computation to make it DP, rely on composition theorems to bound the total
privacy loss incurred over this sequence of DP computations.
However, existing composition theorems present a tension between efficiency
and flexibility. Most theorems require all computations in the sequence to have
a predefined DP parameter, called the privacy budget. This prevents the design
of training algorithms that adapt the privacy budget on the fly, or that
terminate early to reduce the total privacy loss. Alternatively, the few
existing composition results for adaptive privacy budgets provide complex
bounds on the privacy loss, with constants too large to be practical.
In this paper, we study DP composition under adaptive privacy budgets through
the lens of Rényi Differential Privacy, proving a simpler composition theorem
with smaller constants, making it practical enough to use in algorithm design.
We demonstrate two applications of this theorem for DP deep learning: adapting
the noise or batch size online to improve a model's accuracy within a fixed
total privacy loss, and stopping early when fine-tuning a model to reduce total
privacy loss.
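To make the filter/odometer mechanics concrete, here is a minimal sketch (an illustration, not the paper's implementation) of Rényi DP accounting for a sequence of Gaussian mechanisms whose noise scale may be chosen adaptively. It relies only on standard facts: a sensitivity-1 Gaussian mechanism with scale σ costs α/(2σ²) at Rényi order α, these costs add over composed steps (the paper's contribution is showing that this additive accounting remains valid, with small constants, when budgets are adaptive), and accumulated RDP converts to (ε, δ)-DP via ε(α) + log(1/δ)/(α − 1). The order grid, budget, and noise schedule are assumed for demonstration, and the subsampling amplification used by real DP-SGD accountants is omitted.

```python
import numpy as np

# Sketch of an RDP privacy filter/odometer for adaptively chosen Gaussian
# noise scales. Illustrative only: orders, budget, and schedule are assumptions.

ORDERS = np.array([1.5, 2.0, 4.0, 8.0, 16.0, 32.0, 64.0])  # Renyi orders tracked

def gaussian_rdp(sigma, sensitivity=1.0):
    """RDP cost of one Gaussian mechanism: eps(alpha) = alpha * s^2 / (2 sigma^2)."""
    return ORDERS * sensitivity**2 / (2.0 * sigma**2)

def to_eps_delta(rdp, delta):
    """Convert accumulated RDP to (eps, delta)-DP, taking the best order."""
    return float(np.min(rdp + np.log(1.0 / delta) / (ORDERS - 1.0)))

def train_with_filter(steps, eps_budget, delta, noise_schedule):
    rdp = np.zeros_like(ORDERS)            # running odometer, one entry per order
    for t in range(steps):
        sigma = noise_schedule(t)          # may depend on anything observed so far
        candidate = rdp + gaussian_rdp(sigma)
        if to_eps_delta(candidate, delta) > eps_budget:
            return t                       # filter: halt before overspending
        rdp = candidate
        # ... one noisy gradient (or other DP) step with scale sigma goes here ...
    return steps

# Example: a fixed (eps = 2, delta = 1e-5) budget; the filter decides when to stop.
allowed = train_with_filter(steps=1000, eps_budget=2.0, delta=1e-5,
                            noise_schedule=lambda t: 50.0)
print(f"filter permitted {allowed} steps")
```

Early stopping falls out for free: if training halts at step t < steps, only the RDP actually spent is charged, which is exactly the odometer view.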
Related papers
- Noise Variance Optimization in Differential Privacy: A Game-Theoretic Approach Through Per-Instance Differential Privacy [7.264378254137811]
Differential privacy (DP) can measure privacy loss by observing the changes in the distribution caused by the inclusion of individuals in the target dataset.
DP has been prominently used to safeguard machine learning datasets at industry giants like Apple and Google.
We propose per-instance DP (pDP) as a constraint, measuring privacy loss for each data instance and optimizing noise tailored to individual instances.
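As a loose illustration of the per-instance idea (the general pDP notion, not this paper's game-theoretic noise optimization), the Gaussian mechanism's guarantee for one individual scales with that individual's own leave-one-out sensitivity rather than the worst case over all datasets. The helper below, with assumed names, applies the classical Gaussian-mechanism bound ε = Δᵢ√(2 ln(1.25/δ))/σ to a noisy mean query.

```python
import numpy as np

# Hedged sketch of per-instance DP for a Gaussian-noised mean query.
# Uses the classical bound eps = Delta_i * sqrt(2 ln(1.25/delta)) / sigma
# (valid for eps <= 1); names and parameters are illustrative.

def per_instance_eps(data, i, sigma, delta=1e-5):
    """Per-instance eps: sensitivity is how much removing row i moves the mean."""
    full = np.mean(data, axis=0)
    loo = np.mean(np.delete(data, i, axis=0), axis=0)  # leave-one-out mean
    delta_i = np.linalg.norm(full - loo)               # per-instance sensitivity
    return delta_i * np.sqrt(2.0 * np.log(1.25 / delta)) / sigma

rng = np.random.default_rng(0)
data = rng.normal(size=(100, 5))
data[0] *= 10.0  # an outlier exerts more influence, so it pays a larger eps
print(per_instance_eps(data, 0, sigma=0.5), per_instance_eps(data, 1, sigma=0.5))
```

Tailoring the noise so that each instance's ε meets a target is the flavor of optimization the paper pursues.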
arXiv Detail & Related papers (2024-04-24T06:51:16Z)
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- Shifted Interpolation for Differential Privacy [6.1836947007564085]
Noisy gradient descent and its variants are the predominant algorithms for differentially private machine learning.
This paper establishes the "privacy amplification by iteration" phenomenon in the unifying framework of $f$-differential privacy.
Notably, this leads to the first exact privacy analysis in the foundational setting of strongly convex optimization.
arXiv Detail & Related papers (2024-03-01T04:50:04Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Fully Adaptive Composition in Differential Privacy [53.01656650117495]
Well-known advanced composition theorems allow one to query a private database quadratically more times than basic privacy composition would permit.
We introduce fully adaptive composition, wherein both algorithms and their privacy parameters can be selected adaptively.
We construct filters that match the rates of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters.
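For reference, the bound being matched is the standard advanced composition theorem (Dwork–Rothblum–Vadhan; stated here from the literature, not from this paper): composing k mechanisms, each (ε, δ)-DP, yields (ε', kδ + δ')-DP for any δ' > 0, where

```latex
\varepsilon' \,=\, \sqrt{2k\ln(1/\delta')}\;\varepsilon \,+\, k\,\varepsilon\,\bigl(e^{\varepsilon}-1\bigr).
```

The √k leading term, versus kε under basic composition, is the quadratic gain in query count mentioned above; the filters recover this rate even when each step's ε is picked on the fly.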
arXiv Detail & Related papers (2022-03-10T17:03:12Z)
- Differentially Private Federated Bayesian Optimization with Distributed Exploration [48.9049546219643]
We introduce differential privacy (DP) into federated Thompson sampling with distributed exploration, yielding DP-FTS-DE, through a general framework for adding DP to iterative algorithms.
We show that DP-FTS-DE achieves high utility (competitive performance) with a strong privacy guarantee.
We also use real-world experiments to show that DP-FTS-DE induces a trade-off between privacy and utility.
arXiv Detail & Related papers (2021-10-27T04:11:06Z)
- Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z)
- Optimal Accounting of Differential Privacy via Characteristic Function [25.78065563380023]
We propose a unification of recent advances (Rényi DP, privacy profiles, $f$-DP and the PLD formalism) via the characteristic function ($\phi$-function) of a certain "worst-case" privacy loss random variable.
We show that our approach allows natural adaptive composition like Rényi DP, provides exactly tight privacy accounting like PLD, and can be (often losslessly) converted to privacy profile and $f$-DP.
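A quick way to see why the $\phi$-function composes naturally (standard probability, not this paper's full treatment, which also handles adaptivity): for privacy loss random variables L = log(P(o)/Q(o)), non-adaptively composed mechanisms have independent losses that add, so their characteristic functions multiply:

```latex
\phi_{1 \circ 2}(t) \;=\; \mathbb{E}\bigl[e^{\,it(L_1 + L_2)}\bigr] \;=\; \phi_1(t)\,\phi_2(t).
```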
arXiv Detail & Related papers (2021-06-16T06:13:23Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g. exponential mechanisms) or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
arXiv Detail & Related papers (2020-04-15T17:33:10Z)