Individual Privacy Accounting via a Renyi Filter
- URL: http://arxiv.org/abs/2008.11193v4
- Date: Sat, 8 Jan 2022 21:35:43 GMT
- Title: Individual Privacy Accounting via a Renyi Filter
- Authors: Vitaly Feldman and Tijana Zrnic
- Abstract summary: We give a method for tighter privacy loss accounting based on the value of a personalized privacy loss estimate for each individual.
Our filter is simpler and tighter than the known filter for $(\epsilon,\delta)$-differential privacy by Rogers et al.
- Score: 33.65665839496798
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider a sequential setting in which a single dataset of individuals is
used to perform adaptively-chosen analyses, while ensuring that the
differential privacy loss of each participant does not exceed a pre-specified
privacy budget. The standard approach to this problem relies on bounding a
worst-case estimate of the privacy loss over all individuals and all possible
values of their data, for every single analysis. Yet, in many scenarios this
approach is overly conservative, especially for "typical" data points which
incur little privacy loss by participation in most of the analyses. In this
work, we give a method for tighter privacy loss accounting based on the value
of a personalized privacy loss estimate for each individual in each analysis.
To implement the accounting method we design a filter for R\'enyi differential
privacy. A filter is a tool that ensures that the privacy parameter of a
composed sequence of algorithms with adaptively-chosen privacy parameters does
not exceed a pre-specified budget. Our filter is simpler and tighter than the
known filter for $(\epsilon,\delta)$-differential privacy by Rogers et al. We
apply our results to the analysis of noisy gradient descent and show that
personalized accounting can be practical, easy to implement, and can only make
the privacy-utility tradeoff tighter.
Related papers
- Mean Estimation Under Heterogeneous Privacy Demands [5.755004576310333]
This work considers the problem of mean estimation, where each user can impose their own privacy level.
The algorithm we propose is shown to be minimax optimal and has a near-linear run-time.
Users with less stringent but differing privacy requirements are all granted more privacy than they require, in equal amounts.
arXiv Detail & Related papers (2023-10-19T20:29:19Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
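As a rough illustration of the EVR control flow described in this summary (the function names and the verifier interface are placeholders, not the paper's API):

```python
def estimate_verify_release(mechanism, data, estimated_eps, verify):
    """Hypothetical sketch of the estimate-verify-release (EVR) paradigm.

    `estimated_eps` is the output of the estimation step; `verify` stands
    in for a randomized privacy test that checks whether `mechanism`
    actually satisfies the estimated privacy parameter. Only a verified
    estimate permits releasing the query output.
    """
    if not verify(mechanism, estimated_eps):   # step 2: verify the estimate
        raise RuntimeError("privacy estimate not verified; output withheld")
    return mechanism(data)                     # step 3: release the output
```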
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Individual Privacy Accounting with Gaussian Differential Privacy [8.81666701090743]
Individual privacy accounting enables bounding differential privacy (DP) loss individually for each participant involved in the analysis.
In order to account for the individual privacy losses in a principled manner, we need a privacy accountant for adaptive compositions of randomised mechanisms.
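For context, Gaussian DP composes by a root-sum-of-squares rule: running a $\mu_1$-GDP mechanism followed by a $\mu_2$-GDP mechanism is $\sqrt{\mu_1^2+\mu_2^2}$-GDP (Dong, Roth, and Su, 2019). An individual accountant can therefore keep a running sum per participant; a minimal sketch with assumed variable names, not the paper's implementation:

```python
import math

def compose_gdp(mus):
    """Compose GDP guarantees: running mu_1-, ..., mu_k-GDP mechanisms
    in sequence yields sqrt(mu_1^2 + ... + mu_k^2)-GDP."""
    return math.sqrt(sum(mu * mu for mu in mus))

# Individual accounting sketch: each participant accumulates their own
# per-mechanism mu values; drop them once the composed mu hits the budget.
spent = {"alice": [0.4, 0.4], "bob": [0.1]}
budget = 0.5
active = [p for p, mus in spent.items() if compose_gdp(mus) <= budget]
print(active)  # ['bob']: alice's composed mu is sqrt(0.32) ~ 0.57
```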
arXiv Detail & Related papers (2022-09-30T17:19:40Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Smooth Anonymity for Sparse Graphs [69.1048938123063]
Differential privacy has emerged as the gold standard of privacy; however, it runs into difficulties when it comes to sharing sparse datasets.
In this work, we consider a variation of $k$-anonymity, which we call smooth-$k$-anonymity, and design simple large-scale algorithms that efficiently provide smooth-$k$-anonymity.
arXiv Detail & Related papers (2022-07-13T17:09:25Z)
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
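A hedged sketch of the quantity behind such per-example guarantees: for one full-batch noisy gradient step that clips each gradient to norm C and adds Gaussian noise of standard deviation σC, the standard Gaussian-mechanism RDP bound at order α charges example i about α‖g_i‖²/(2σ²C²), where g_i is its clipped gradient. Function and parameter names below are ours, and subsampling is ignored for simplicity:

```python
import numpy as np

def per_example_rdp(grad, clip_norm, noise_multiplier, alpha):
    """Per-example Renyi DP loss of one (non-subsampled) noisy gradient step.

    The step clips each gradient to `clip_norm` and adds Gaussian noise with
    std `noise_multiplier * clip_norm`. Removing this example changes the sum
    by its clipped gradient, so the Gaussian-mechanism RDP bound at order
    `alpha` is alpha * ||g||^2 / (2 * (noise_multiplier * clip_norm)^2).
    """
    g_norm = min(np.linalg.norm(grad), clip_norm)      # effective sensitivity
    sigma = noise_multiplier * clip_norm               # noise standard deviation
    return alpha * g_norm**2 / (2 * sigma**2)

# A "typical" example with a small gradient pays far less than the worst case:
print(per_example_rdp(np.array([0.1, 0.1]), clip_norm=1.0,
                      noise_multiplier=1.0, alpha=2.0))   # ~0.02
print(per_example_rdp(np.array([3.0, 4.0]), clip_norm=1.0,
                      noise_multiplier=1.0, alpha=2.0))   # 1.0 (worst case)
```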
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Fully Adaptive Composition in Differential Privacy [53.01656650117495]
Well-known advanced composition theorems allow one to query a private database quadratically more times than basic privacy composition would permit.
We introduce fully adaptive composition, wherein both algorithms and their privacy parameters can be selected adaptively.
We construct filters that match the rates of advanced composition, including constants, despite allowing for adaptively chosen privacy parameters.
arXiv Detail & Related papers (2022-03-10T17:03:12Z)
- Privately Publishable Per-instance Privacy [21.775752827149383]
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
arXiv Detail & Related papers (2021-11-03T15:17:29Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
- Bounding, Concentrating, and Truncating: Unifying Privacy Loss Composition for Data Analytics [2.614355818010333]
We provide strong privacy loss bounds when an analyst may select pure DP, bounded range (e.g., exponential mechanisms), or concentrated DP mechanisms in any order.
We also provide optimal privacy loss bounds that apply when an analyst can select pure DP and bounded range mechanisms in a batch.
arXiv Detail & Related papers (2020-04-15T17:33:10Z)