A Randomized Approach for Tight Privacy Accounting
- URL: http://arxiv.org/abs/2304.07927v2
- Date: Tue, 21 Nov 2023 02:15:33 GMT
- Title: A Randomized Approach for Tight Privacy Accounting
- Authors: Jiachen T. Wang, Saeed Mahloujifar, Tong Wu, Ruoxi Jia, Prateek Mittal
- Abstract summary: We propose a new differential privacy paradigm called estimate-verify-release (EVR)
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism meets the estimated guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
- Score: 63.67296945525791
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Bounding privacy leakage over compositions, i.e., privacy accounting, is a
key challenge in differential privacy (DP). The privacy parameter ($\varepsilon$ or
$\delta$) is often easy to estimate but hard to bound. In this paper, we
propose a new differential privacy paradigm called estimate-verify-release
(EVR), which addresses the challenge of providing a strict upper bound for
the privacy parameter in DP compositions by converting an estimate of the privacy
parameter into a formal guarantee. The EVR paradigm first estimates the privacy
parameter of a mechanism, then verifies whether the mechanism meets this guarantee, and
finally releases the query output based on the verification result. The core
component of the EVR paradigm is privacy verification. We develop a randomized privacy
verifier using Monte Carlo (MC) techniques. Furthermore, we propose an MC-based
DP accountant that outperforms existing DP accounting techniques in terms of
accuracy and efficiency. Our empirical evaluation shows that the newly proposed EVR
paradigm improves the utility-privacy tradeoff for privacy-preserving machine
learning.
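The privacy-loss view behind the MC accountant can be made concrete for the Gaussian mechanism, where the tight $\delta(\varepsilon)$ equals $\mathbb{E}[\max(0, 1 - e^{\varepsilon - L})]$ over the privacy loss random variable $L$. Below is a minimal Python sketch of this estimator together with an EVR-style release step. The function names, the Hoeffding-based verification slack, and the restriction to the non-subsampled Gaussian mechanism with sensitivity 1 are assumptions of this sketch, not the paper's actual verifier.

```python
import numpy as np

def mc_delta(eps, sigma, k, m=1_000_000, rng=None):
    """Monte Carlo estimate of delta(eps) for the k-fold composition of the
    Gaussian mechanism (sensitivity 1, noise std sigma).

    The privacy loss random variable of this composition is
    L ~ N(mu, 2*mu) with mu = k / (2 * sigma**2), and
    delta(eps) = E[max(0, 1 - exp(eps - L))].
    """
    rng = np.random.default_rng(rng)
    mu = k / (2.0 * sigma**2)
    loss = rng.normal(mu, np.sqrt(2.0 * mu), size=m)
    # -expm1(t) = 1 - e^t; clamping t at 0 zeroes out samples with loss <= eps
    return float(np.mean(-np.expm1(np.minimum(eps - loss, 0.0))))

def evr_release(output, eps, delta_target, sigma, k, m=1_000_000, beta=1e-3):
    """Estimate-verify-release: publish `output` only if an upper confidence
    bound on the MC estimate of delta(eps) stays below the target."""
    delta_hat = mc_delta(eps, sigma, k, m)
    slack = np.sqrt(np.log(1.0 / beta) / (2.0 * m))  # Hoeffding; terms lie in [0, 1]
    if delta_hat + slack <= delta_target:  # verification passed
        return output
    raise RuntimeError("privacy verification failed; output withheld")
```

For k = 1 the estimate can be sanity-checked against the closed-form Gaussian-mechanism delta of Balle and Wang (2018); the paper's verifier additionally handles settings such as subsampling, which this sketch ignores.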
Related papers
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
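The feature-specific budgeting described in the entry above can be illustrated with a plain per-feature Laplace randomizer under local DP, where each feature gets its own budget. This is a generic sketch under assumptions of my own (bounded features, Laplace noise, hypothetical names), not the paper's Bayesian BCDP mechanism.

```python
import numpy as np

def per_feature_laplace(x, eps, lo, hi, rng=None):
    """Locally privatize each feature of a record with its own budget eps[j].

    Feature j is clipped to [lo[j], hi[j]], so its sensitivity is hi[j] - lo[j],
    and Laplace noise is calibrated per coordinate: a smaller eps[j] means
    stronger protection for a more sensitive feature.
    """
    rng = np.random.default_rng(rng)
    lo, hi, eps = np.asarray(lo), np.asarray(hi), np.asarray(eps)
    x = np.clip(np.asarray(x, dtype=float), lo, hi)
    scale = (hi - lo) / eps  # per-coordinate sensitivity / budget
    return x + rng.laplace(0.0, scale)
```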
- Provable Privacy with Non-Private Pre-Processing [56.770023668379615]
We propose a general framework to evaluate the additional privacy cost incurred by non-private data-dependent pre-processing algorithms.
Our framework establishes upper bounds on the overall privacy guarantees by utilising two new technical notions.
arXiv Detail & Related papers (2024-03-19T17:54:49Z)
- A Learning-based Declarative Privacy-Preserving Framework for Federated Data Management [23.847568516724937]
We introduce a new privacy-preserving technique that uses a deep learning model trained with the Differentially Private Stochastic Gradient Descent (DP-SGD) algorithm.
We then demonstrate a novel declarative privacy-preserving workflow that allows users to specify "what private information to protect" rather than "how to protect".
arXiv Detail & Related papers (2024-01-22T22:50:59Z)
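The framework above builds on DP-SGD, whose core update (per-example gradient clipping followed by Gaussian noise) is standard and can be sketched as follows. The function and parameter names are hypothetical, and the subsampling-based privacy accounting that accompanies real DP-SGD training is omitted.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, lr, clip_norm, noise_multiplier, rng=None):
    """One DP-SGD update: clip each example's gradient to L2 norm `clip_norm`,
    sum, add Gaussian noise with std noise_multiplier * clip_norm, average."""
    rng = np.random.default_rng(rng)
    total = np.zeros_like(params)
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        total += g * min(1.0, clip_norm / max(norm, 1e-12))  # per-example clipping
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=params.shape)
    return params - lr * (total + noise) / len(per_example_grads)
```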
- Decentralized Matrix Factorization with Heterogeneous Differential Privacy [2.4743508801114444]
We propose a novel Heterogeneous Differentially Private Matrix Factorization algorithm (denoted HDPMF) for untrusted recommenders.
Our framework uses a modified stretching mechanism with an innovative rescaling scheme to achieve a better trade-off between privacy and accuracy.
arXiv Detail & Related papers (2022-12-01T06:48:18Z)
- A Unified Approach to Differentially Private Bayes Point Estimation [7.599399338954307]
Differential privacy (DP) has been proposed, which enforces confidentiality by introducing randomization in the estimates.
Standard algorithms for differentially private estimation are based on adding an appropriate amount of noise to the output of a traditional point estimation method.
We propose a new Unified Bayes Private Point (UBaPP) approach to Bayes point estimation of the unknown parameters of a data generating mechanism under a DP constraint.
arXiv Detail & Related papers (2022-11-18T16:42:49Z)
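The standard output-perturbation recipe described in the entry above (add calibrated noise to the output of a conventional estimator) is easy to make concrete. Below is a minimal sketch for a differentially private mean of bounded data; the names are of my choosing, and this illustrates the baseline approach, not the paper's UBaPP estimator.

```python
import numpy as np

def dp_mean(data, eps, lo, hi, rng=None):
    """Output perturbation: clamp data to [lo, hi], take the sample mean, and
    add Laplace noise calibrated to the mean's sensitivity (hi - lo) / n."""
    rng = np.random.default_rng(rng)
    x = np.clip(np.asarray(data, dtype=float), lo, hi)
    sensitivity = (hi - lo) / len(x)  # replacing one record moves the mean by at most this
    return float(x.mean() + rng.laplace(0.0, sensitivity / eps))
```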
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Privacy Amplification via Shuffled Check-Ins [2.3333090554192615]
We study a protocol for distributed computation called shuffled check-in.
It achieves strong privacy guarantees without requiring any further trust assumptions beyond a trusted shuffler.
We show that shuffled check-in achieves tight privacy guarantees through privacy amplification.
arXiv Detail & Related papers (2022-06-07T09:55:15Z)
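The shuffle-model skeleton behind the entry above can be sketched generically: each client independently decides whether to check in, applies a local randomizer, and the shuffler releases the reports in random order. The Bernoulli check-in, the binary randomized-response choice, and all names are assumptions of this sketch; the paper's protocol and its amplification analysis are more general.

```python
import numpy as np

def shuffled_check_in(bits, eps0, p_checkin, rng=None):
    """Each client checks in with probability p_checkin, applies
    eps0-randomized response to its private bit, and the shuffler outputs
    the reports in random order (client identities are dropped)."""
    rng = np.random.default_rng(rng)
    truthful = np.exp(eps0) / (np.exp(eps0) + 1.0)  # prob. of an honest report
    reports = [b if rng.random() < truthful else 1 - b
               for b in bits if rng.random() < p_checkin]
    rng.shuffle(reports)  # amplification comes from this anonymization step
    return reports
```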
- Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups that are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z)