From Statistical Disclosure Control to Fair AI: Navigating Fundamental Tradeoffs in Differential Privacy
- URL: http://arxiv.org/abs/2601.17909v1
- Date: Sun, 25 Jan 2026 17:07:00 GMT
- Title: From Statistical Disclosure Control to Fair AI: Navigating Fundamental Tradeoffs in Differential Privacy
- Authors: Adriana Watson
- Abstract summary: Differential privacy has become the gold standard for privacy-preserving machine learning systems. This paper provides a systematic treatment connecting three threads: Dalenius's impossibility results for semantic privacy, Dwork's differential privacy as an achievable alternative, and emerging impossibility results from the addition of a fairness requirement.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differential privacy has become the gold standard for privacy-preserving machine learning systems. Unfortunately, subsequent work has primarily fixated on the privacy-utility tradeoff, leaving fairness constraints undervalued and under-researched. This paper provides a systematic treatment connecting three threads: (1) Dalenius's impossibility results for semantic privacy, (2) Dwork's differential privacy as an achievable alternative, and (3) emerging impossibility results arising from the addition of a fairness requirement. Through concrete examples and technical analysis, the three-way Pareto frontier between privacy, utility, and fairness is demonstrated, showcasing the fundamental limits on what can be achieved simultaneously. In this work, these limits are characterized, the impact on minority groups is demonstrated, and practical guidance for navigating these tradeoffs is provided. The result is a unified framework synthesizing scattered results to help practitioners and policymakers make informed decisions when deploying private, fair learning systems.
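As a concrete illustration of the tradeoff the abstract describes, the sketch below adds Laplace noise to per-group counts. Because the noise scale depends only on the privacy budget ε and not on group size, the same budget induces a much larger relative error for a small minority group; this is one mechanism by which privacy constraints can degrade fairness. The group sizes, ε, and the `laplace_count` helper are illustrative assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_count(true_count, epsilon):
    # A counting query has sensitivity 1 (adding or removing one record
    # changes the count by at most 1), so epsilon-DP requires noise of
    # scale 1/epsilon regardless of how large the group is.
    return true_count + rng.laplace(scale=1.0 / epsilon)

epsilon = 0.5
groups = {"majority": 10_000, "minority": 100}  # hypothetical sizes

for name, n in groups.items():
    # Average relative error over repeated releases: the absolute noise
    # is identical, so the smaller group pays a far higher relative cost.
    errors = [abs(laplace_count(n, epsilon) - n) / n for _ in range(1_000)]
    print(f"{name}: mean relative error = {np.mean(errors):.3%}")
```

Tightening ε (stronger privacy) inflates both errors proportionally, while spending extra budget on the minority group weakens either the privacy guarantee or the utility elsewhere: the three-way tension in miniature.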
Related papers
- Fairness Meets Privacy: Integrating Differential Privacy and Demographic Parity in Multi-class Classification [6.28122931748758]
We show that differential privacy can be integrated into a fairness-enhancing pipeline with minimal impact on fairness guarantees. We design a postprocessing algorithm, called DP2DP, that enforces both demographic parity and differential privacy. Our analysis reveals that our algorithm converges towards its demographic parity objective at essentially the same rate as the best non-private methods from the literature.
arXiv Detail & Related papers (2025-11-24T08:31:02Z) - FAIRPLAI: A Human-in-the-Loop Approach to Fair and Private Machine Learning [0.09999629695552194]
We introduce FAIRPLAI (Fair and Private Learning with Active Human Influence), a framework that integrates human oversight into the design and deployment of machine learning systems. FAIRPLAI consistently preserves strong privacy protections while reducing fairness disparities relative to automated baselines.
arXiv Detail & Related papers (2025-11-11T19:07:46Z) - Differential Privacy in Machine Learning: From Symbolic AI to LLMs [49.1574468325115]
Differential privacy provides a formal framework to mitigate privacy risks. It ensures that the inclusion or exclusion of any single data point does not significantly alter the output of an algorithm.
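The guarantee paraphrased in this summary has a precise form: for neighboring datasets D and D′ (differing in one record) and any output x, the mechanism's output densities satisfy p(x | D) ≤ e^ε · p(x | D′). A minimal sketch, assuming the standard Laplace mechanism on a count query (not any specific system from these papers), verifies this ratio bound directly:

```python
import math

def laplace_pdf(x, mu, scale):
    # Density of the Laplace distribution centered at mu with the given scale.
    return math.exp(-abs(x - mu) / scale) / (2 * scale)

epsilon = 1.0
scale = 1.0 / epsilon            # count queries have sensitivity 1
count_d, count_d_prime = 42, 41  # neighboring datasets: one record removed

# For every candidate output x, the density ratio stays within e^epsilon,
# which is exactly the epsilon-DP guarantee for this mechanism.
worst_ratio = max(
    laplace_pdf(x / 10, count_d, scale) / laplace_pdf(x / 10, count_d_prime, scale)
    for x in range(-1000, 2000)
)
assert worst_ratio <= math.exp(epsilon) + 1e-9
print(f"worst observed ratio {worst_ratio:.4f} <= e^eps = {math.exp(epsilon):.4f}")
```

The bound is attained (the ratio equals e^ε for outputs beyond both counts), which is why the noise scale cannot be reduced without weakening the guarantee.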
arXiv Detail & Related papers (2025-06-13T11:30:35Z) - Democratizing Differential Privacy: A Participatory AI Framework for Public Decision-Making [2.1967674611287444]
This paper introduces a conversational interface system that enables participatory design of differentially private AI systems in public sector applications. Our results advance participatory AI practices by demonstrating how conversational interfaces can enhance public engagement in algorithmic privacy mechanisms.
arXiv Detail & Related papers (2025-04-30T04:10:50Z) - Differential Privacy Overview and Fundamental Techniques [63.0409690498569]
This chapter is meant to be part of the book "Differential Privacy in Artificial Intelligence: From Theory to Practice"
It starts by illustrating various attempts to protect data privacy, emphasizing where and why they failed.
It then defines the key actors, tasks, and scopes that make up the domain of privacy-preserving data analysis.
arXiv Detail & Related papers (2024-11-07T13:52:11Z) - Activity Recognition on Avatar-Anonymized Datasets with Masked Differential Privacy [64.32494202656801]
Privacy-preserving computer vision is an important emerging problem in machine learning and artificial intelligence. We present an anonymization pipeline that replaces sensitive human subjects in video datasets with synthetic avatars within context. We also propose MaskDP to protect non-anonymized but privacy-sensitive background information.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Centering Policy and Practice: Research Gaps around Usable Differential Privacy [12.340264479496375]
We argue that while differential privacy is a clean formulation in theory, it poses significant challenges in practice.
To bridge the gaps between differential privacy's promises and its real-world usability, researchers and practitioners must work together.
arXiv Detail & Related papers (2024-06-17T21:32:30Z) - On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants) of group privacy, composition, and post processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z) - Breaking the Communication-Privacy-Accuracy Tradeoff with $f$-Differential Privacy [51.11280118806893]
We consider a federated data analytics problem in which a server coordinates the collaborative data analysis of multiple users with privacy concerns and limited communication capability.
We study the local differential privacy guarantees of discrete-valued mechanisms with finite output space through the lens of $f$-differential privacy ($f$-DP).
More specifically, we advance the existing literature by deriving tight $f$-DP guarantees for a variety of discrete-valued mechanisms.
arXiv Detail & Related papers (2023-02-19T16:58:53Z) - Tight Auditing of Differentially Private Machine Learning [77.38590306275877]
For private machine learning, existing auditing mechanisms are not tight: they only give tight estimates under implausible worst-case assumptions.
We design an improved auditing scheme that yields tight privacy estimates for natural (not adversarially crafted) datasets.
arXiv Detail & Related papers (2023-02-15T21:40:33Z) - Privacy and Bias Analysis of Disclosure Avoidance Systems [45.645473465606564]
Disclosure avoidance (DA) systems are used to safeguard the confidentiality of data while allowing it to be analyzed and disseminated for analytic purposes.
This paper presents a framework that addresses this gap: it proposes differentially private versions of common DA mechanisms and derives their privacy bounds.
The results show that, contrary to popular belief, traditional differential privacy techniques may be superior in terms of accuracy and fairness to differentially private counterparts of widely used DA mechanisms.
arXiv Detail & Related papers (2023-01-28T13:58:25Z) - Differential Privacy and Fairness in Decisions and Learning Tasks: A Survey [50.90773979394264]
It reviews the conditions under which privacy and fairness may have aligned or contrasting goals.
It analyzes how and why DP may exacerbate bias and unfairness in decision problems and learning tasks.
arXiv Detail & Related papers (2022-02-16T16:50:23Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints.
arXiv Detail & Related papers (2020-09-26T10:50:33Z)
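The Lagrangian-duality idea in this last summary can be sketched in miniature (a toy sketch under invented numbers, not the paper's actual neural-network method): treat the fairness constraint as a penalty whose multiplier λ is raised whenever the constraint is violated, alternating a primal descent step on utility-plus-penalty with a dual ascent step on λ.

```python
import numpy as np

# Toy decision problem: choose an accept rate per group. Utility alone
# would push the two groups to different rates; a demographic-parity-style
# constraint asks the rates to match. All numbers are illustrative.
target = np.array([0.8, 0.4])  # hypothetical utility-optimal rates
rates = np.array([0.5, 0.5])   # decision variables (primal)
lam = 0.0                      # Lagrange multiplier (dual)
lr = 0.1

for _ in range(500):
    gap = rates[0] - rates[1]
    # Primal step: descend on utility loss + lam * gap^2.
    grad = 2 * (rates - target) + lam * 2 * gap * np.array([1.0, -1.0])
    rates = rates - lr * grad
    # Dual step: ascend on the (squared) constraint violation.
    lam = lam + lr * gap ** 2

# The multiplier grows until the two rates are pulled together,
# trading utility (distance from `target`) for fairness (a small gap).
```

With this quadratic penalty the equilibrium gap is roughly 0.4 / (1 + 2λ), so the constraint is only satisfied asymptotically; in the private setting this line of work considers, each gradient would additionally be noised, which is where the three-way tradeoff re-enters.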
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences arising from its use.