Setting $\varepsilon$ is not the Issue in Differential Privacy
- URL: http://arxiv.org/abs/2511.06305v1
- Date: Sun, 09 Nov 2025 10:03:45 GMT
- Title: Setting $\varepsilon$ is not the Issue in Differential Privacy
- Authors: Edwige Cyffers,
- Abstract summary: The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy. We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself. We claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework.
- Score: 7.347270525437453
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This position paper argues that setting the privacy budget in differential privacy should not be viewed as an important limitation of differential privacy compared to alternative methods for privacy-preserving machine learning. The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy in real-world deployments and is sometimes used to promote alternative mitigation techniques for data protection. We believe this misleads decision-makers into choosing unsafe methods. We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself, but from the intrinsic difficulty of estimating privacy risks in context, a challenge that any rigorous method for privacy risk assessment faces. Moreover, we claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework or justify why it cannot.
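For reference, the privacy budget under discussion is the $\varepsilon$ (and optionally $\delta$) in the standard definition of differential privacy, which the paper takes as given:

```latex
% A randomized mechanism M is (\varepsilon, \delta)-differentially private if,
% for all neighboring datasets D, D' (differing in a single record) and all
% measurable output sets S,
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S] \;+\; \delta .
\]
```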
Related papers
- Your Privacy Depends on Others: Collusion Vulnerabilities in Individual Differential Privacy [50.66105844449181]
Individual Differential Privacy (iDP) promises users control over their privacy, but this promise can be broken in practice. We reveal a previously overlooked vulnerability in sampling-based iDP mechanisms. We propose $(\varepsilon_i, \delta_i, \overline{\varepsilon})$-iDP, a privacy contract that uses divergences to provide users with a hard upper bound on their excess vulnerability.
arXiv Detail & Related papers (2026-01-19T10:26:12Z) - How to Get Actual Privacy and Utility from Privacy Models: the k-Anonymity and Differential Privacy Families [3.9894389299295514]
Privacy models were introduced in privacy-preserving data publishing and statistical disclosure control. We find they may fail to provide adequate protection guarantees because of problems in their definition. We argue that a semantic reformulation of k-anonymity can offer more robust privacy without losing utility.
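As background for the comparison drawn in this abstract (standard textbook form, not taken from the paper), k-anonymity requires that every record be indistinguishable from at least $k-1$ others on the quasi-identifier attributes:

```latex
% A released table T satisfies k-anonymity with respect to quasi-identifier
% attributes QI if every combination of QI values appearing in T appears in
% at least k records:
\[
  \forall t \in T:\quad
  \bigl|\{\, t' \in T \;:\; t'[QI] = t[QI] \,\}\bigr| \;\ge\; k .
\]
```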
arXiv Detail & Related papers (2025-10-13T11:41:12Z) - Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - A Statistical Viewpoint on Differential Privacy: Hypothesis Testing, Representation and Blackwell's Theorem [30.365274034429508]
We argue that differential privacy can be considered a pure statistical concept.
$f$-differential privacy is a unified framework for analyzing privacy bounds in data analysis and machine learning.
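A minimal sketch of the $f$-DP framework referenced here, in its hypothesis-testing form (privacy is measured by how hard it is to tell which of two neighboring datasets produced the output):

```latex
% Trade-off function between distributions P and Q: the smallest type II error
% achievable at type I error level \alpha, over all rejection rules \phi.
\[
  T(P, Q)(\alpha) \;=\; \inf_{\phi}\bigl\{\, 1 - \mathbb{E}_{Q}[\phi] \;:\; \mathbb{E}_{P}[\phi] \le \alpha \,\bigr\}
\]
% A mechanism M is f-differentially private if, for all neighboring D, D'
% and all \alpha \in [0, 1],
\[
  T\bigl(M(D), M(D')\bigr)(\alpha) \;\ge\; f(\alpha).
\]
```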
arXiv Detail & Related papers (2024-09-14T23:47:22Z) - A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether the mechanism meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
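A minimal, runnable sketch of the estimate-verify-release control flow described above, using a toy Laplace mechanism; the function names and the trivial estimate/verify steps are illustrative stand-ins, not the paper's actual method:

```python
import numpy as np

def laplace_mean(data, scale, rng):
    """Toy mechanism: noisy mean of values in [0, 1] via Laplace noise."""
    return float(np.mean(data) + rng.laplace(0.0, scale))

def estimate_epsilon(scale, n):
    """Estimate step (toy): for a bounded mean, sensitivity is 1/n, so the
    candidate budget of the Laplace mechanism is (1/n) / scale."""
    return (1.0 / n) / scale

def verify_guarantee(epsilon_hat, epsilon_budget):
    """Verify step (toy): accept only if the estimate fits the target budget."""
    return epsilon_hat <= epsilon_budget

def estimate_verify_release(data, scale, epsilon_budget, seed=0):
    """EVR flow: estimate the privacy parameter, verify it, then release."""
    epsilon_hat = estimate_epsilon(scale, len(data))
    if not verify_guarantee(epsilon_hat, epsilon_budget):
        return None, epsilon_hat  # refuse to release on verification failure
    rng = np.random.default_rng(seed)
    return laplace_mean(data, scale), epsilon_hat

# Example: release a noisy mean only if the estimated budget stays within 1.0.
data = np.clip(np.random.default_rng(1).random(100), 0.0, 1.0)
output, eps = estimate_verify_release(data, scale=0.05, epsilon_budget=1.0)
```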
arXiv Detail & Related papers (2023-04-17T00:38:01Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
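An informal sketch of the per-attribute relaxation being compared against record-level DP (phrasing is illustrative, not necessarily the paper's exact definition):

```latex
% Record-level DP: D and D' differ in one entire record of one person.
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S]
\]
% Per-attribute variant (informal): D and D' differ only in a single attribute
% of a single record, and the corresponding parameter can be smaller:
\[
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon_{\mathrm{attr}}}\,\Pr[M(D') \in S],
  \qquad \varepsilon_{\mathrm{attr}} \le \varepsilon .
\]
```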
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - Privacy Amplification via Shuffling for Linear Contextual Bandits [51.94904361874446]
We study the contextual linear bandit problem with differential privacy (DP).
We show that it is possible to achieve a privacy/utility trade-off between joint DP (JDP) and local DP (LDP) by leveraging the shuffle model of privacy while preserving local privacy.
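A minimal sketch of the shuffle model this trade-off relies on: each user applies a local randomizer, and a trusted shuffler permutes the reports before the analyzer sees them. The binary randomized-response mechanism below is a generic example, not the paper's bandit-specific protocol:

```python
import numpy as np

def local_randomizer(bit, epsilon_local, rng):
    """Binary randomized response: keep the bit w.p. e^eps / (1 + e^eps), flip otherwise."""
    keep_prob = np.exp(epsilon_local) / (1.0 + np.exp(epsilon_local))
    return int(bit) if rng.random() < keep_prob else 1 - int(bit)

def shuffle_model_round(user_bits, epsilon_local, seed=0):
    """Users randomize locally; the shuffler then permutes the reports,
    breaking the link between users and messages (privacy amplification)."""
    rng = np.random.default_rng(seed)
    reports = [local_randomizer(b, epsilon_local, rng) for b in user_bits]
    rng.shuffle(reports)      # the trusted shuffler's only job
    return reports            # the analyzer sees an anonymous multiset

# Example: 1000 users, each holding one private bit.
bits = np.random.default_rng(1).integers(0, 2, 1000)
reports = shuffle_model_round(bits, epsilon_local=1.0)
```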
arXiv Detail & Related papers (2021-12-11T15:23:28Z) - Privately Publishable Per-instance Privacy [21.775752827149383]
We consider how to privately share the personalized privacy losses incurred by objective perturbation, using per-instance differential privacy (pDP).
We analyze the per-instance privacy loss of releasing a private empirical risk minimizer learned via objective perturbation, and propose a group of methods to privately and accurately publish the pDP losses at little to no additional privacy cost.
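For reference, a common statement of per-instance DP, which fixes a dataset $D$ and a single datum $z$ instead of quantifying over all neighboring datasets (details may differ from the paper's exact formulation):

```latex
% M satisfies (\varepsilon, \delta)-pDP for the pair (D, z) if, for all
% measurable output sets S,
\[
  \Pr[M(D \cup \{z\}) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D) \in S] + \delta
  \quad\text{and}\quad
  \Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D \cup \{z\}) \in S] + \delta .
\]
```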
arXiv Detail & Related papers (2021-11-03T15:17:29Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
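A sketch of joint differential privacy as it is usually stated in such multi-user settings (informal, not the paper's exact notation): the outputs shown to all users other than $u$ must be differentially private with respect to user $u$'s data.

```latex
% M is \varepsilon-jointly differentially private if, for every user u, all
% inputs D, D' differing only in user u's data, and all sets S of outputs
% sent to the remaining users,
\[
  \Pr\bigl[M_{-u}(D) \in S\bigr] \;\le\; e^{\varepsilon}\,\Pr\bigl[M_{-u}(D') \in S\bigr],
\]
% where M_{-u} denotes the outputs observed by everyone except user u.
```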
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Differential Privacy at Risk: Bridging Randomness and Privacy Budget [5.393465689287103]
We analyse the roles of the sources of randomness, namely the explicit randomness induced by the noise distribution and the implicit randomness induced by the data-generation distribution.
We propose privacy at risk, a probabilistic calibration of privacy-preserving mechanisms.
We show that composition using the cost-optimal privacy at risk provides a stronger privacy guarantee than classical advanced composition.
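A hedged sketch of the "privacy at risk" idea as summarized above: rather than a worst-case bound, the privacy loss stays below $\varepsilon$ with a prescribed probability $1-\gamma$ over the mechanism's randomness (the exact formulation in the paper may differ):

```latex
% Privacy loss of outcome o for neighboring datasets D, D':
\[
  L_{D,D'}(o) \;=\; \ln \frac{\Pr[M(D) = o]}{\Pr[M(D') = o]}
\]
% Privacy at risk at confidence level 1 - \gamma (informal sketch):
\[
  \Pr_{o \sim M(D)}\bigl[\, |L_{D,D'}(o)| \le \varepsilon \,\bigr] \;\ge\; 1 - \gamma
  \quad \text{for all neighboring } D, D'.
\]
```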
arXiv Detail & Related papers (2020-03-02T15:44:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.