An information theorist's tour of differential privacy
- URL: http://arxiv.org/abs/2510.10316v1
- Date: Sat, 11 Oct 2025 18:54:05 GMT
- Title: An information theorist's tour of differential privacy
- Authors: Anand D. Sarwate, Flavio P. Calmon, Oliver Kosut, Lalitha Sankar,
- Abstract summary: Differential privacy measures risk in terms of the differences between probability distributions. A differentially private algorithm is a channel between the underlying data and the output of the analysis.
- Score: 15.893041475103765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Since being proposed in 2006, differential privacy has become a standard method for quantifying certain risks in publishing or sharing analyses of sensitive data. At its heart, differential privacy measures risk in terms of the differences between probability distributions, which is a central topic in information theory. A differentially private algorithm is a channel between the underlying data and the output of the analysis. Seen in this way, the guarantees made by differential privacy can be understood in terms of properties of this channel. In this article we examine a few of the key connections between information theory and the formulation/application of differential privacy, giving an "operational significance" for relevant information measures.
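To make the channel view concrete, here is a minimal Python sketch (ours, not the paper's) of the Laplace mechanism. The privacy loss, i.e. the log-likelihood ratio of an output under two neighboring databases, stays within [-epsilon, epsilon]; this is the max-divergence reading of differential privacy the abstract alludes to. All function names and parameters are illustrative.

```python
import numpy as np

def laplace_mechanism(true_count: float, epsilon: float, sensitivity: float = 1.0) -> float:
    """Release a count with Laplace noise calibrated to sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_count + np.random.laplace(loc=0.0, scale=scale)

def privacy_loss(output: float, count_d: float, count_dprime: float,
                 epsilon: float, sensitivity: float = 1.0) -> float:
    """Log-likelihood ratio of `output` under two neighboring databases.

    For the Laplace mechanism this lies in [-epsilon, epsilon] whenever
    |count_d - count_dprime| <= sensitivity: the channel's output
    distributions never differ by more than a factor of exp(epsilon).
    """
    scale = sensitivity / epsilon
    log_p_d = -abs(output - count_d) / scale            # log-density under D (up to a shared constant)
    log_p_dprime = -abs(output - count_dprime) / scale  # log-density under D'
    return log_p_d - log_p_dprime

# Neighboring databases whose counts differ by one record.
out = laplace_mechanism(100.0, epsilon=0.5)
print(privacy_loss(out, 100.0, 101.0, epsilon=0.5))  # always within [-0.5, 0.5]
```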
Related papers
- Bayesian Adversarial Privacy [0.048998185508205734]
This work introduces a new quantitative notion of privacy that is both contextual and specific. We argue that it provides a more meaningful notion of privacy than the widely utilised framework of differential privacy.
arXiv Detail & Related papers (2026-03-04T15:46:24Z)
- Differentially private ratio statistics [0.0]
We show that even a simple algorithm can provide excellent properties concerning privacy, sample accuracy, and bias. Our approach bridges a gap in the differential privacy literature and provides a practical solution for ratio estimation in private machine learning pipelines.
arXiv Detail & Related papers (2025-05-26T04:28:27Z)
- Differentially Private Random Feature Model [47.35176457481132]
We produce a differentially private random feature model for privacy-preserving kernel machines. We show that our method preserves privacy and derive a generalization error bound for the method.
arXiv Detail & Related papers (2024-12-06T05:31:08Z)
- Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice".
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
arXiv Detail & Related papers (2024-11-07T13:52:11Z)
- Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z)
- Differentially Private Covariate Balancing Causal Inference [8.133739801185271]
Differential privacy is the leading mathematical framework for privacy protection. Our algorithm produces both point and interval estimators with statistical guarantees, such as consistency and rate optimality, under a given privacy budget.
arXiv Detail & Related papers (2024-10-18T18:02:13Z)
- Tangent differential privacy [13.796981813494199]
We propose a new form of differential privacy called tangent differential privacy.
Unlike standard differential privacy, which is defined uniformly across data distributions, tangent differential privacy is tailored to a specific data distribution of interest.
arXiv Detail & Related papers (2024-06-06T22:11:31Z)
- Improving the Variance of Differentially Private Randomized Experiments through Clustering [16.166525280886578]
We propose a new differentially private mechanism, "Cluster-DP". We demonstrate that selecting higher-quality clusters, according to a quality metric we introduce, can decrease the variance penalty without compromising privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z)
- Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z)
- Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z)
- Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z)
- Non-parametric Differentially Private Confidence Intervals for the Median [3.205141100055992]
This paper proposes and evaluates several strategies to compute valid differentially private confidence intervals for the median.
We also illustrate that addressing both sources of uncertainty jointly (the error from sampling and the error from protecting the output) is preferable to simpler approaches that incorporate the uncertainty in a sequential fashion.
arXiv Detail & Related papers (2021-06-18T19:45:37Z)
- Decision Making with Differential Privacy under a Fairness Lens [65.16089054531395]
The U.S. Census Bureau releases data sets and statistics about groups of individuals that are used as input to a number of critical decision processes.
To conform to privacy and confidentiality requirements, these agencies are often required to release privacy-preserving versions of the data.
This paper studies the release of differentially private data sets and analyzes their impact on some critical resource allocation tasks under a fairness perspective.
arXiv Detail & Related papers (2021-05-16T21:04:19Z)
- Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts. We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model; a minimal sketch of these two ingredients appears after this list.
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
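Picking up the last entry's two ingredients, here is a minimal sketch (ours, following the standard DP-SGD recipe, not code from the paper) of per-example gradient clipping followed by Gaussian noise addition. The toy gradients and all names are illustrative.

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, clip_norm=1.0,
                noise_multiplier=1.0, lr=0.1):
    """One DP-SGD update: clip each per-example gradient to `clip_norm`,
    average, then add Gaussian noise scaled to the clipping norm."""
    clipped = [g * min(1.0, clip_norm / max(np.linalg.norm(g), 1e-12))
               for g in per_example_grads]  # bound each record's influence
    avg = np.mean(clipped, axis=0)
    noise_std = noise_multiplier * clip_norm / len(per_example_grads)
    noise = np.random.normal(0.0, noise_std, size=avg.shape)
    return weights - lr * (avg + noise)

# Toy usage: four per-example gradients in R^3.
w = np.zeros(3)
grads = [np.random.randn(3) for _ in range(4)]
w = dp_sgd_step(w, grads, clip_norm=1.0, noise_multiplier=1.1)
```

Both the clipping norm and the noise multiplier trade utility against the privacy guarantee, and the last entry above studies how they also affect the model's robustness.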