Persuasive Privacy
- URL: http://arxiv.org/abs/2601.22945v1
- Date: Fri, 30 Jan 2026 13:03:21 GMT
- Title: Persuasive Privacy
- Authors: Joshua J Bon, James Bailie, Judith Rousseau, Christian P Robert
- Abstract summary: We propose a novel framework for measuring privacy from a Bayesian game-theoretic perspective.
We show that pure and probabilistic differential privacy are special cases of our framework, and provide new interpretations of the post-processing inequality in this setting.
- Score: 1.3789489350166477
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a novel framework for measuring privacy from a Bayesian game-theoretic perspective. This framework enables the creation of new, purpose-driven privacy definitions that are rigorously justified, while also allowing for the assessment of existing privacy guarantees through game theory. We show that pure and probabilistic differential privacy are special cases of our framework, and provide new interpretations of the post-processing inequality in this setting. Further, we demonstrate that privacy guarantees can be established for deterministic algorithms, which are overlooked by current privacy standards.
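For concreteness, pure $\varepsilon$-differential privacy, one of the special cases mentioned above, requires $P[M(x) \in S] \le e^{\varepsilon} P[M(x') \in S]$ for all neighbouring datasets $x, x'$. A minimal sketch using randomized response, a standard textbook mechanism rather than the paper's construction:
```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Report the true bit with probability e^eps / (1 + e^eps),
    otherwise report its flip; this satisfies pure epsilon-DP."""
    p_truth = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit

# The worst-case likelihood ratio between neighbouring inputs is exactly e^eps:
eps = 1.0
p = math.exp(eps) / (1 + math.exp(eps))
assert math.isclose(math.log(p / (1 - p)), eps)
```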
Related papers
- Setting $\varepsilon$ is not the Issue in Differential Privacy [7.347270525437453]
The so-called problem of interpreting the privacy budget is often presented as a major hindrance to the wider adoption of differential privacy.
We argue that the difficulty in interpreting privacy budgets does not stem from the definition of differential privacy itself.
We claim that any sound method for estimating privacy risks should, given the current state of research, be expressible within the differential privacy framework.
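One common way to make $\varepsilon$ concrete is the Bayesian reading: against any prior, an $\varepsilon$-DP output can increase an adversary's posterior odds about an individual by a factor of at most $e^{\varepsilon}$. A small sketch of that bound (an illustration of the standard fact, not this paper's proposal):
```python
import math

def posterior_bound(prior_prob: float, epsilon: float) -> float:
    """Upper bound on an adversary's posterior probability that a record
    is present, given epsilon-DP output (worst case over outputs)."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = math.exp(epsilon) * prior_odds  # odds grow by at most e^eps
    return post_odds / (1 + post_odds)

# A 1% prior belief and eps = 1 yields at most ~2.7% posterior belief.
print(round(posterior_bound(0.01, 1.0), 4))
```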
arXiv Detail & Related papers (2025-11-09T10:03:45Z)
- How to Get Actual Privacy and Utility from Privacy Models: the k-Anonymity and Differential Privacy Families [3.9894389299295514]
Privacy models were introduced in privacy-preserving data publishing and statistical disclosure control.
We find they may fail to provide adequate protection guarantees because of problems in their definition.
We argue that a semantic reformulation of k-anonymity can offer more robust privacy without losing utility.
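For reference, a table is $k$-anonymous when every combination of quasi-identifier values is shared by at least $k$ records. A minimal check of that condition (illustrative only; the paper's semantic reformulation goes beyond this syntactic test):
```python
from collections import Counter

def is_k_anonymous(rows: list[dict], quasi_ids: list[str], k: int) -> bool:
    """True iff every quasi-identifier combination occurs in >= k rows."""
    groups = Counter(tuple(row[q] for q in quasi_ids) for row in rows)
    return all(count >= k for count in groups.values())

rows = [{"age": "30-40", "zip": "123**", "disease": "flu"},
        {"age": "30-40", "zip": "123**", "disease": "cold"}]
print(is_k_anonymous(rows, ["age", "zip"], k=2))  # True
```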
arXiv Detail & Related papers (2025-10-13T11:41:12Z)
- Urania: Differentially Private Insights into AI Use [102.27238986985698]
$Urania$ provides end-to-end privacy protection by leveraging DP tools such as clustering, partition selection, and histogram-based summarization.
Results show the framework's ability to extract meaningful conversational insights while maintaining stringent user privacy.
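As a rough picture of the DP tools named above, the sketch below noises a histogram with Laplace noise and drops low bins, a crude stand-in for partition selection; it is not the $Urania$ implementation, and `threshold` is an illustrative parameter:
```python
import numpy as np

def dp_histogram(labels, epsilon, threshold, rng=np.random.default_rng(0)):
    """Noisy count per label (assuming one label per user, sensitivity 1);
    labels whose noisy count falls below the threshold are dropped so that
    rare, potentially identifying labels vanish from the release."""
    counts = {}
    for lab in labels:
        counts[lab] = counts.get(lab, 0) + 1
    noisy = {lab: c + rng.laplace(scale=1.0 / epsilon) for lab, c in counts.items()}
    return {lab: c for lab, c in noisy.items() if c >= threshold}

print(dp_histogram(["billing"] * 40 + ["coding"] * 25 + ["rare-topic"], 1.0, 10))
```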
arXiv Detail & Related papers (2025-06-05T07:00:31Z)
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice".
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
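The canonical fundamental technique such a chapter covers is the Laplace mechanism; as a reminder (generic textbook material, not specific to this chapter):
```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float,
                      rng=np.random.default_rng()) -> float:
    """Release true_value + Lap(sensitivity / epsilon); satisfies epsilon-DP
    for queries whose output changes by at most `sensitivity` when one
    record changes."""
    return true_value + rng.laplace(scale=sensitivity / epsilon)

# Counting query: sensitivity 1, so the noise scale is 1 / epsilon.
print(laplace_mechanism(true_value=1234, sensitivity=1.0, epsilon=0.5))
```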
arXiv Detail & Related papers (2024-11-07T13:52:11Z)
- Enhancing Feature-Specific Data Protection via Bayesian Coordinate Differential Privacy [55.357715095623554]
Local Differential Privacy (LDP) offers strong privacy guarantees without requiring users to trust external parties.
We propose a Bayesian framework, Bayesian Coordinate Differential Privacy (BCDP), that enables feature-specific privacy quantification.
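The abstract does not spell out the BCDP mechanism, but feature-specific budgets can be pictured with per-coordinate randomized response in the local model, each coordinate getting its own $\varepsilon$; the sketch below is a hypothetical illustration, not the BCDP construction:
```python
import math
import random

def per_feature_rr(bits: list[int], epsilons: list[float]) -> list[int]:
    """Locally privatize each binary feature with its own budget:
    coordinate j satisfies epsilons[j]-local-DP on its own."""
    out = []
    for bit, eps in zip(bits, epsilons):
        p_truth = math.exp(eps) / (1 + math.exp(eps))
        out.append(bit if random.random() < p_truth else 1 - bit)
    return out

# A sensitive feature gets a tight budget, an innocuous one a loose budget.
print(per_feature_rr([1, 0], epsilons=[0.1, 3.0]))
```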
arXiv Detail & Related papers (2024-10-24T03:39:55Z)
- A Statistical Viewpoint on Differential Privacy: Hypothesis Testing, Representation and Blackwell's Theorem [30.365274034429508]
We argue that differential privacy can be considered a pure statistical concept.
$f$-differential privacy is a unified framework for analyzing privacy bounds in data analysis and machine learning.
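In $f$-DP, privacy is a trade-off curve between the type-I and type-II errors of a hypothesis test distinguishing neighbouring datasets; for Gaussian DP the curve is $G_\mu(\alpha) = \Phi(\Phi^{-1}(1-\alpha) - \mu)$. A small sketch assuming that standard formula:
```python
from statistics import NormalDist

def gdp_tradeoff(alpha: float, mu: float) -> float:
    """Type-II error lower bound G_mu(alpha) = Phi(Phi^{-1}(1 - alpha) - mu)
    for a mu-Gaussian-DP mechanism in the f-DP framework."""
    Phi = NormalDist().cdf
    Phi_inv = NormalDist().inv_cdf
    return Phi(Phi_inv(1 - alpha) - mu)

# Perfect privacy would force beta = 1 - alpha; mu = 1 allows somewhat less.
print(gdp_tradeoff(alpha=0.05, mu=1.0))  # ~0.74, versus 0.95 ideally
```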
arXiv Detail & Related papers (2024-09-14T23:47:22Z)
- Models Matter: Setting Accurate Privacy Expectations for Local and Central Differential Privacy [14.40391109414476]
We design and evaluate new explanations of differential privacy for the local and central models.
We find that consequences-focused explanations in the style of privacy nutrition labels are a promising approach for setting accurate privacy expectations.
arXiv Detail & Related papers (2024-08-16T01:21:57Z)
- A Randomized Approach for Tight Privacy Accounting [63.67296945525791]
We propose a new differential privacy paradigm called estimate-verify-release (EVR).
The EVR paradigm first estimates the privacy parameter of a mechanism, then verifies whether it meets this guarantee, and finally releases the query output.
Our empirical evaluation shows the newly proposed EVR paradigm improves the utility-privacy tradeoff for privacy-preserving machine learning.
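The three-step control flow described above is easy to sketch; `estimate_epsilon` and `verify` below are hypothetical stand-ins (the paper's verifier is a randomized privacy test, which this skeleton does not reproduce):
```python
def estimate_verify_release(mechanism, data, estimate_epsilon, verify):
    """Schematic EVR loop: estimate a privacy parameter, verify that the
    mechanism actually satisfies it, and release output only on success."""
    eps_hat = estimate_epsilon(mechanism)        # 1. estimate
    if not verify(mechanism, eps_hat):           # 2. verify the estimate
        raise RuntimeError("privacy guarantee not verified; abort release")
    return mechanism(data), eps_hat              # 3. release with guarantee
```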
arXiv Detail & Related papers (2023-04-17T00:38:01Z)
- Rethinking Disclosure Prevention with Pointwise Maximal Leakage [36.3895452861944]
We propose a general model of utility and privacy in which utility is achieved by disclosing the value of low-entropy features of a secret $X$.
We prove that, contrary to popular opinion, it is possible to provide meaningful inferential privacy guarantees.
We show that PML-based privacy is compatible with and provides insights into existing notions such as differential privacy.
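Taking the standard definition from the PML literature, the leakage of observing outcome $y$ is $\ell(X \to y) = \log \max_x P(x \mid y) / P(x)$; a toy computation for a discrete channel, assuming that definition:
```python
import math

def pml(prior: list[float], channel: list[list[float]], y: int) -> float:
    """Pointwise maximal leakage of outcome y (in nats):
    log max_x P(x | y) / P(x), which equals log max_x P(y | x) / P(y)."""
    p_y = sum(prior[x] * channel[x][y] for x in range(len(prior)))
    return math.log(max(channel[x][y] for x in range(len(prior))) / p_y)

prior = [0.5, 0.5]                    # secret X uniform on {0, 1}
channel = [[0.9, 0.1], [0.2, 0.8]]    # P(Y = y | X = x)
print(pml(prior, channel, y=0))       # leakage of observing y = 0, ~0.49
```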
arXiv Detail & Related papers (2023-03-14T10:47:40Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
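The per-attribute gain shows up already when releasing a sum over $d$ binary attributes: changing one attribute shifts the sum by at most 1, while replacing a whole record shifts it by up to $d$, so per-attribute noise can be $d$ times smaller. An illustrative comparison (not the paper's algorithms):
```python
import numpy as np

def noisy_sum(records: np.ndarray, epsilon: float, per_attribute: bool,
              rng=np.random.default_rng(0)) -> float:
    """Release the sum of all entries of a (people x attributes) 0/1 matrix.
    Per-attribute neighbouring: sensitivity 1. Per-record: sensitivity d."""
    d = records.shape[1]
    sensitivity = 1.0 if per_attribute else float(d)
    return records.sum() + rng.laplace(scale=sensitivity / epsilon)

data = np.ones((100, 10), dtype=int)
print(noisy_sum(data, 1.0, per_attribute=True))   # much less noise
print(noisy_sum(data, 1.0, per_attribute=False))
```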
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
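A common ingredient in this line of work is privatizing the visit counts that drive optimistic exploration; the sketch below uses simple Laplace noise as a stand-in (the paper's algorithm relies on more refined private counters, which this does not reproduce):
```python
import numpy as np

def private_visit_count(true_count: int, epsilon: float,
                        rng=np.random.default_rng(0)) -> float:
    """Noisy visit count used to build optimistic bonuses; clipped at 1 so
    the bonus ~ 1/sqrt(count) below stays well defined."""
    return max(1.0, true_count + rng.laplace(scale=1.0 / epsilon))

n = private_visit_count(true_count=50, epsilon=1.0)
bonus = np.sqrt(1.0 / n)   # optimism: larger bonus for rarely visited states
print(n, bonus)
```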
arXiv Detail & Related papers (2020-09-18T20:18:35Z)
This list is automatically generated from the titles and abstracts of the papers on this site.