Improving Statistical Privacy by Subsampling
- URL: http://arxiv.org/abs/2504.11429v1
- Date: Tue, 15 Apr 2025 17:40:45 GMT
- Title: Improving Statistical Privacy by Subsampling
- Authors: Dennis Breutigam, Rüdiger Reischuk
- Abstract summary: A privacy mechanism often used is to take samples of the data for answering a query. This paper proves precise bounds on how much different methods of sampling increase privacy in the statistical setting. For the DP setting, tradeoff functions have been proposed as a finer measure for privacy compared to (epsilon, delta)-pairs.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential privacy (DP) considers a scenario where an adversary has almost complete information about the entries of a database. This worst-case assumption is likely to overestimate the privacy threat for an individual in real life. Statistical privacy (SP) denotes a setting where only the distribution of the database entries is known to an adversary, but not their exact values. In this case one has to analyze the interaction between noiseless privacy based on the entropy of distributions and privacy mechanisms that distort the answers of queries, which can be quite complex. A privacy mechanism often used is to take samples of the data for answering a query. This paper proves precise bounds on how much different methods of sampling increase privacy in the statistical setting with respect to database size and sampling rate. They allow us to deduce when and how much sampling provides an improvement and how strongly this depends on the privacy parameter {\epsilon}. To perform these investigations we develop a framework to model sampling techniques. For the DP setting, tradeoff functions have been proposed as a finer measure for privacy compared to ({\epsilon},{\delta})-pairs. We apply these tools to statistical privacy with subsampling to get a comparable characterization.
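For orientation, the classical DP counterpart of the subsampling effect studied here is privacy amplification by Poisson subsampling: if a mechanism is (epsilon, delta)-DP, running it on a subsample in which each record is included independently with probability q yields (log(1 + q(e^epsilon - 1)), q*delta)-DP. A minimal sketch of this standard bound (function name and example values are illustrative, not taken from the paper):

```python
import math

def amplified_epsilon(epsilon: float, q: float) -> float:
    """Standard DP amplification by Poisson subsampling with rate q:
    an (epsilon, delta)-DP mechanism applied to the subsample satisfies
    (log(1 + q*(exp(epsilon) - 1)), q*delta)-DP."""
    return math.log1p(q * math.expm1(epsilon))

# For small epsilon the amplified value is roughly q*epsilon,
# while for large epsilon the gain shrinks to an additive log(1/q).
for q in (0.01, 0.1, 0.5):
    print(f"q={q}: epsilon=1.0 -> {amplified_epsilon(1.0, q):.4f}")
```

The paper asks how much of this kind of improvement carries over to the statistical-privacy setting and how the answer depends on the privacy parameter {\epsilon}.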
Related papers
- Benchmarking Fraud Detectors on Private Graph Data [70.4654745317714]
Currently, many types of fraud are managed in part by automated detection algorithms that operate over graphs. We consider the scenario where a data holder wishes to outsource development of fraud detectors to third parties. Third parties submit their fraud detectors to the data holder, who evaluates these algorithms on a private dataset and then publicly communicates the results. We propose a realistic privacy attack on this system that allows an adversary to de-anonymize individuals' data based only on the evaluation results.
arXiv Detail & Related papers (2025-07-30T03:20:15Z) - Statistical Privacy [0.0]
This paper considers a situation where an adversary knows the distribution by which the database is generated, but no exact data of its entries. We analyze in detail how the entropy of the distribution guarantees privacy for a large class of queries called property queries.
arXiv Detail & Related papers (2025-01-22T14:13:44Z) - How Private are DP-SGD Implementations? [61.19794019914523]
We show that there can be a substantial gap between the privacy analysis when using the two types of batch sampling.
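The two batch-sampling schemes at issue can be sketched as follows (an illustrative sketch, not the paper's implementation): Poisson subsampling forms each batch by including every example independently with probability q, whereas shuffle-based batching permutes the dataset once per epoch and cuts it into fixed-size batches.

```python
import random

def poisson_batches(n: int, q: float, steps: int, seed: int = 0):
    """Poisson subsampling: at each step, every index joins the batch
    independently with probability q, so batch sizes vary."""
    rng = random.Random(seed)
    for _ in range(steps):
        yield [i for i in range(n) if rng.random() < q]

def shuffle_batches(n: int, batch_size: int, seed: int = 0):
    """Shuffle-based batching: one random permutation per epoch,
    cut into fixed-size batches; each index appears exactly once."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    for start in range(0, n, batch_size):
        yield order[start:start + batch_size]
```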
arXiv Detail & Related papers (2024-03-26T13:02:43Z) - Benchmarking Private Population Data Release Mechanisms: Synthetic Data vs. TopDown [50.40020716418472]
This study conducts a comparison between the TopDown algorithm and private synthetic data generation to determine how accuracy is affected by query complexity.
Our results show that for in-distribution queries, the TopDown algorithm achieves significantly better privacy-fidelity tradeoffs than any of the synthetic data methods we evaluated.
arXiv Detail & Related papers (2024-01-31T17:38:34Z) - Personalized Privacy Amplification via Importance Sampling [3.0636509793595548]
In this paper, we examine the privacy properties of importance sampling, focusing on an individualized privacy analysis. We find that, in importance sampling, privacy is well aligned with utility but at odds with sample size. We propose two approaches for constructing sampling distributions: one that optimizes the privacy-efficiency trade-off, and one based on a utility guarantee in the form of coresets.
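The basic mechanics of importance subsampling can be sketched as follows (a hypothetical illustration of the general technique, not the authors' construction): record i is kept independently with probability q_i and, if kept, reweighted by 1/q_i so that sums and means computed on the subsample remain unbiased.

```python
import random

def importance_subsample(data, inclusion_probs, seed: int = 0):
    """Keep record i with probability q_i; reweight kept records by 1/q_i
    (Horvitz-Thompson style) so that weighted sums stay unbiased."""
    rng = random.Random(seed)
    return [(x, 1.0 / q) for x, q in zip(data, inclusion_probs) if rng.random() < q]

# Unbiased estimate of a population sum from the weighted subsample
data = [3.0, 1.0, 4.0, 1.0, 5.0]
probs = [0.9, 0.2, 0.9, 0.2, 0.9]   # hypothetical per-record inclusion probabilities
estimate = sum(x * w for x, w in importance_subsample(data, probs))
```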
arXiv Detail & Related papers (2023-07-05T17:09:10Z) - Probing the Transition to Dataset-Level Privacy in ML Models Using an Output-Specific and Data-Resolved Privacy Profile [23.05994842923702]
We study a privacy metric that quantifies the extent to which a model trained on a dataset using a Differential Privacy mechanism is "covered" by each of the distributions resulting from training on neighboring datasets.
We show that the privacy profile can be used to probe an observed transition to indistinguishability that takes place in the neighboring distributions as $\epsilon$ decreases.
arXiv Detail & Related papers (2023-06-27T20:39:07Z) - Membership Inference Attacks against Synthetic Data through Overfitting Detection [84.02632160692995]
We argue for a realistic MIA setting that assumes the attacker has some knowledge of the underlying data distribution.
We propose DOMIAS, a density-based MIA model that aims to infer membership by targeting local overfitting of the generative model.
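One way to read "density-based" here (an illustrative reconstruction under the assumption that the membership score is a density ratio, not the authors' code) is to estimate the density of the synthetic data and of a reference sample from the underlying distribution, and flag candidate records that fall where the synthetic density is unusually high relative to the reference:

```python
import numpy as np
from scipy.stats import gaussian_kde

def density_ratio_scores(synthetic, reference, candidates):
    """Score each candidate by p_synthetic(x) / p_reference(x), with both
    densities estimated by Gaussian KDE; large scores point to regions
    where the generative model may have locally overfit."""
    p_syn = gaussian_kde(synthetic.T)   # gaussian_kde expects shape (dims, n)
    p_ref = gaussian_kde(reference.T)
    return p_syn(candidates.T) / (p_ref(candidates.T) + 1e-12)

rng = np.random.default_rng(0)
synthetic = rng.normal(size=(500, 2))
reference = rng.normal(size=(500, 2))
candidates = rng.normal(size=(10, 2))
membership_guess = density_ratio_scores(synthetic, reference, candidates) > 1.0
```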
arXiv Detail & Related papers (2023-02-24T11:27:39Z) - On the Statistical Complexity of Estimation and Testing under Privacy Constraints [17.04261371990489]
We show how to characterize the power of a statistical test under differential privacy in a plug-and-play fashion.
We show that maintaining privacy results in a noticeable reduction in performance only when the level of privacy protection is very high.
Finally, we demonstrate that the DP-SGLD algorithm, a private convex solver, can be employed for maximum likelihood estimation with a high degree of confidence.
arXiv Detail & Related papers (2022-10-05T12:55:53Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Individual Privacy Accounting for Differentially Private Stochastic Gradient Descent [69.14164921515949]
We characterize privacy guarantees for individual examples when releasing models trained by DP-SGD.
We find that most examples enjoy stronger privacy guarantees than the worst-case bound.
This implies that groups which are underserved in terms of model utility simultaneously experience weaker privacy guarantees.
arXiv Detail & Related papers (2022-06-06T13:49:37Z) - Smoothed Differential Privacy [55.415581832037084]
Differential privacy (DP) is a widely-accepted and widely-applied notion of privacy based on worst-case analysis.
In this paper, we propose a natural extension of DP following the worst average-case idea behind the celebrated smoothed analysis.
We prove that any discrete mechanism with sampling procedures is more private than what DP predicts, while many continuous mechanisms with sampling procedures are still non-private under smoothed DP.
arXiv Detail & Related papers (2021-07-04T06:55:45Z) - Oblivious Sampling Algorithms for Private Data Analysis [10.990447273771592]
We study secure and privacy-preserving data analysis based on queries executed on samples from a dataset.
Trusted execution environments (TEEs) can be used to protect the content of the data during query computation.
Supporting differential-private (DP) queries in TEEs provides record privacy when query output is revealed.
arXiv Detail & Related papers (2020-09-28T23:45:30Z) - Controlling Privacy Loss in Sampling Schemes: an Analysis of Stratified and Cluster Sampling [23.256638764430516]
In this work, we extend the study of privacy amplification results to more complex, data-dependent sampling schemes.
We find that not only do these sampling schemes often fail to amplify privacy, they can actually result in privacy degradation.
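For intuition, a data-dependent scheme such as stratified sampling first derives strata from the records themselves and then draws from each stratum, so a single individual can influence both which strata exist and the within-stratum sampling probabilities; a record sitting alone in its stratum is drawn with probability one, which is one way amplification can fail. A minimal sketch (the function name and age-based strata are illustrative, not the paper's construction):

```python
import random
from collections import defaultdict

def stratified_sample(records, stratum_key, per_stratum: int, seed: int = 0):
    """Data-dependent stratified sampling: strata are derived from the
    records via stratum_key, then per_stratum records are drawn from each."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for r in records:
        strata[stratum_key(r)].append(r)
    sample = []
    for members in strata.values():
        sample.extend(rng.sample(members, min(per_stratum, len(members))))
    return sample

# Example: strata defined by an attribute of the data (age decade);
# a record that is alone in its decade is included with probability 1.
records = [{"age": 23}, {"age": 27}, {"age": 35}, {"age": 61}]
sample = stratified_sample(records, lambda r: r["age"] // 10, per_stratum=1)
```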
arXiv Detail & Related papers (2020-07-24T17:43:08Z)