Adaptive False Discovery Rate Control with Privacy Guarantee
- URL: http://arxiv.org/abs/2305.19482v1
- Date: Wed, 31 May 2023 01:22:15 GMT
- Title: Adaptive False Discovery Rate Control with Privacy Guarantee
- Authors: Xintao Xia and Zhanrui Cai
- Abstract summary: We propose a differentially private adaptive FDR control method that can control the classic FDR metric exactly at a user-specified level $\alpha$ with privacy guarantee.
Compared to the non-private AdaPT, it incurs a small accuracy loss but significantly reduces the computation cost.
- Score: 1.4213973379473654
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differentially private multiple testing procedures can protect the
information of individuals used in hypothesis tests while guaranteeing a small
fraction of false discoveries. In this paper, we propose a differentially
private adaptive FDR control method that can control the classic FDR metric
exactly at a user-specified level $\alpha$ with privacy guarantee, which is a
non-trivial improvement compared to the differentially private
Benjamini-Hochberg method proposed in Dwork et al. (2021). Our analysis is
based on two key insights: 1) a novel p-value transformation that preserves
both privacy and the mirror conservative property, and 2) a mirror peeling
algorithm that allows the construction of the filtration and application of the
optimal stopping technique. Numerical studies demonstrate that the proposed
DP-AdaPT performs better compared to the existing differentially private FDR
control methods. Compared to the non-private AdaPT, it incurs a small accuracy
loss but significantly reduces the computation cost.
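To make the multiple-testing setting concrete, below is a minimal sketch in Python of the classical Benjamini-Hochberg step-up rule applied to p-values computed from Laplace-noised test statistics. This is only an illustration of how differential-privacy noise enters a multiple-testing pipeline; it is not the paper's DP-AdaPT procedure, which instead relies on the mirror-conservative p-value transformation and the mirror peeling algorithm described above. The clipping bound, the normal approximation for the noisy statistic, and the helper names `private_pvalues` and `benjamini_hochberg` are assumptions made for this sketch.

```python
import numpy as np
from scipy import stats

# Illustrative sketch only (not the paper's DP-AdaPT): privatize per-hypothesis
# test statistics with the Laplace mechanism, convert them to p-values, and
# apply the Benjamini-Hochberg step-up rule at level alpha.

def private_pvalues(samples, epsilon, clip=1.0):
    """Two-sided z-type p-values from epsilon-DP noisy means.

    `samples` is a list of 1-D arrays (one per hypothesis); values are clipped
    to [-clip, clip], so the replace-one sensitivity of each mean is 2*clip/n.
    """
    pvals = []
    for x in samples:
        n = len(x)
        x = np.clip(x, -clip, clip)
        sensitivity = 2.0 * clip / n
        noisy_mean = x.mean() + np.random.laplace(scale=sensitivity / epsilon)
        # Treat the noisy mean as approximately N(0, 1/n) under the null
        # (a simplifying assumption; the clipping and the added Laplace noise
        # make the resulting p-values only approximately valid).
        z = noisy_mean * np.sqrt(n)
        pvals.append(2 * stats.norm.sf(abs(z)))
    return np.array(pvals)

def benjamini_hochberg(pvals, alpha=0.1):
    """Return indices of hypotheses rejected by the BH step-up rule."""
    m = len(pvals)
    order = np.argsort(pvals)
    thresholds = alpha * np.arange(1, m + 1) / m
    below = np.nonzero(pvals[order] <= thresholds)[0]
    if below.size == 0:
        return np.array([], dtype=int)
    k = below[-1] + 1          # largest k with p_(k) <= k * alpha / m
    return order[:k]

rng = np.random.default_rng(0)
# 80 null hypotheses (mean 0) and 20 alternatives (mean 0.5), 200 samples each.
data = [rng.normal(0.0 if i < 80 else 0.5, 1.0, size=200) for i in range(100)]
rejected = benjamini_hochberg(private_pvalues(data, epsilon=1.0), alpha=0.1)
print(f"{len(rejected)} rejections")
```

With privatized statistics the p-values above are only approximately valid, which is one reason why achieving exact FDR control under a privacy guarantee, as DP-AdaPT claims, is a non-trivial improvement.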
Related papers
- General-Purpose $f$-DP Estimation and Auditing in a Black-Box Setting [7.052531099272798]
We propose new methods to statistically assess $f$-Differential Privacy ($f$-DP).
A challenge when deploying differentially private mechanisms is that DP is hard to validate.
We introduce new black-box methods for $f$-DP that, unlike existing approaches for this privacy notion, do not require prior knowledge of the investigated algorithm.
arXiv Detail & Related papers (2025-02-10T21:58:17Z)
- Differentially Private Random Feature Model [52.468511541184895]
We produce a differentially private random feature model for privacy-preserving kernel machines.
We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
- Minimax Optimal Two-Sample Testing under Local Differential Privacy [3.3317825075368908]
We explore the trade-off between privacy and statistical utility in private two-sample testing under local differential privacy (LDP).
We introduce private permutation tests using practical privacy mechanisms such as Laplace, discrete Laplace, and Google's RAPPOR.
We study continuous data via binning and analyze the uniform separation rates under LDP over Hölder and Besov smoothness classes.
arXiv Detail & Related papers (2024-11-13T22:44:25Z)
- Practical Privacy-Preserving Gaussian Process Regression via Secret Sharing [23.80837224347696]
This paper proposes a privacy-preserving GPR method based on secret sharing (SS).
We derive a new SS-based exponentiation operation through the idea of 'confusion-correction' and construct an SS-based matrix inversion algorithm based on Cholesky decomposition.
Empirical results show that our proposed method can achieve reasonable accuracy and efficiency under the premise of preserving data privacy.
arXiv Detail & Related papers (2023-06-26T08:17:51Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output (a minimal control-flow sketch appears after this list).
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Analyzing Privacy Leakage in Machine Learning via Multiple Hypothesis Testing: A Lesson From Fano [83.5933307263932]
We study data reconstruction attacks for discrete data and analyze them under the framework of hypothesis testing.
We show that if the underlying private data takes values from a set of size $M$, then the target privacy parameter $\epsilon$ can be $O(\log M)$ before the adversary gains significant inferential power.
arXiv Detail & Related papers (2022-10-24T23:50:12Z)
- No Free Lunch in "Privacy for Free: How does Dataset Condensation Help Privacy" [75.98836424725437]
New methods designed to preserve data privacy require careful scrutiny.
Failure to preserve privacy is hard to detect, and yet can lead to catastrophic results when a system implementing a "privacy-preserving" method is attacked.
arXiv Detail & Related papers (2022-09-29T17:50:23Z)
- Debugging Differential Privacy: A Case Study for Privacy Auditing [60.87570714269048]
We show that auditing can also be used to find flaws in (purportedly) differentially private schemes.
In this case study, we audit a recent open source implementation of a differentially private deep learning algorithm and find, with 99.99999999% confidence, that the implementation does not satisfy the claimed differential privacy guarantee.
arXiv Detail & Related papers (2022-02-24T17:31:08Z)
- Gaussian Processes with Differential Privacy [3.934224774675743]
We add strong privacy protection to Gaussian processes (GPs) via differential privacy (DP).
We achieve this by using sparse GP methodology and publishing a private variational approximation on known inducing points.
Our experiments demonstrate that, given a sufficient amount of data, the method can produce accurate models under strong privacy protection.
arXiv Detail & Related papers (2021-06-01T13:23:16Z)
- Learning with User-Level Privacy [61.62978104304273]
We analyze algorithms to solve a range of learning tasks under user-level differential privacy constraints.
Rather than guaranteeing only the privacy of individual samples, user-level DP protects a user's entire contribution.
We derive an algorithm that privately answers a sequence of $K$ adaptively chosen queries with privacy cost proportional to $\tau$, and apply it to solve the learning tasks we consider.
arXiv Detail & Related papers (2021-02-23T18:25:13Z)
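As referenced in the estimate-verify-release (EVR) entry above, here is a minimal control-flow sketch of an EVR-style check, loosely following the three steps described there. The toy `run_mechanism` and the crude histogram-based `estimate_epsilon` are hypothetical placeholders for illustration only; they are not the estimator or verifier from the cited paper.

```python
import numpy as np

def run_mechanism(data, epsilon):
    """Toy epsilon-DP mechanism: Laplace-noised mean of values clipped to [0, 1]."""
    x = np.clip(np.asarray(data, dtype=float), 0.0, 1.0)
    sensitivity = 1.0 / len(x)          # replace-one sensitivity of the clipped mean
    return x.mean() + np.random.laplace(scale=sensitivity / epsilon)

def estimate_epsilon(mechanism, data, epsilon, trials=2000):
    """Crude empirical estimate of the privacy loss from output histograms on a
    pair of neighboring datasets (illustrative only, not a rigorous audit)."""
    neighbor = data[:-1]                # neighboring dataset: one record removed
    a = np.array([mechanism(data, epsilon) for _ in range(trials)])
    b = np.array([mechanism(neighbor, epsilon) for _ in range(trials)])
    edges = np.histogram_bin_edges(np.concatenate([a, b]), bins=20)
    pa, _ = np.histogram(a, bins=edges)
    pb, _ = np.histogram(b, bins=edges)
    ratios = (pa + 1.0) / (pb + 1.0)    # smoothed per-bin frequency ratios
    return float(np.log(ratios.max()))

def estimate_verify_release(data, claimed_epsilon):
    # Step 1: estimate the privacy parameter the mechanism actually achieves.
    estimate = estimate_epsilon(run_mechanism, data, claimed_epsilon)
    # Step 2: verify the estimate against the claimed guarantee.
    if estimate > claimed_epsilon:
        print(f"verification failed: estimated eps {estimate:.2f} > {claimed_epsilon}")
        return None
    # Step 3: release the query output only when verification passes.
    return run_mechanism(data, claimed_epsilon)

data = list(np.random.default_rng(1).random(500))
print("released:", estimate_verify_release(data, claimed_epsilon=1.0))
```

The point of the sketch is only the ordering of the three steps; the cited paper replaces the toy estimator and the simple threshold check with a principled estimation and verification procedure.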
This list is automatically generated from the titles and abstracts of the papers on this site.