A Simple and Practical Method for Reducing the Disparate Impact of
Differential Privacy
- URL: http://arxiv.org/abs/2312.11712v1
- Date: Mon, 18 Dec 2023 21:19:35 GMT
- Title: A Simple and Practical Method for Reducing the Disparate Impact of
Differential Privacy
- Authors: Lucas Rosenblatt, Julia Stoyanovich, Christopher Musco
- Abstract summary: Differentially private (DP) mechanisms have been deployed in a variety of high-impact social settings.
The impact of DP on utility can vary significantly among different sub-populations.
A simple way to reduce this disparity is with stratification.
- Score: 21.098175634158043
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Differentially private (DP) mechanisms have been deployed in a variety of
high-impact social settings (perhaps most notably by the U.S. Census). Since
all DP mechanisms involve adding noise to results of statistical queries, they
are expected to impact our ability to accurately analyze and learn from data,
in effect trading off privacy with utility. Alarmingly, the impact of DP on
utility can vary significantly among different sub-populations. A simple way to
reduce this disparity is with stratification. First compute an independent
private estimate for each group in the data set (which may be the intersection
of several protected classes), then, to compute estimates of global statistics,
appropriately recombine these group estimates. Our main observation is that
naive stratification often yields high-accuracy estimates of population-level
statistics, without the need for additional privacy budget. We support this
observation theoretically and empirically. Our theoretical results center on
the private mean estimation problem, while our empirical results center on
extensive experiments on private data synthesis to demonstrate the
effectiveness of stratification on a variety of private mechanisms. Overall, we
argue that this straightforward approach provides a strong baseline against
which future work on reducing utility disparities of DP mechanisms should be
compared.
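To make the recombination step concrete, here is a minimal sketch of stratified private mean estimation, assuming the Laplace mechanism, values clipped to a known range [lower, upper], and group sizes treated as public; the function names are illustrative and not taken from the paper's code.

```python
import numpy as np

def laplace_mean(values, epsilon, lower, upper):
    # Clip to a known range so the mean's sensitivity is bounded.
    clipped = np.clip(values, lower, upper)
    sensitivity = (upper - lower) / len(values)
    # Laplace mechanism: noise scale = sensitivity / epsilon.
    return clipped.mean() + np.random.laplace(scale=sensitivity / epsilon)

def stratified_private_mean(groups, epsilon, lower, upper):
    # One independent epsilon-DP estimate per (disjoint) group.
    group_means = [laplace_mean(g, epsilon, lower, upper) for g in groups]
    # Recombine with population weights (group sizes assumed public here;
    # in practice they may themselves need private estimation).
    sizes = [len(g) for g in groups]
    return np.average(group_means, weights=sizes)

# Example: a large and a small subgroup; the small group gets its own
# estimate instead of being drowned out by the majority.
rng = np.random.default_rng(0)
groups = [rng.normal(60, 5, 10_000), rng.normal(30, 5, 200)]
print(stratified_private_mean(groups, epsilon=1.0, lower=0, upper=100))
```

Because the groups are disjoint, each record is touched by only one sub-query, so by parallel composition the stratified estimate satisfies the same epsilon-DP guarantee as a single global query, which is why the abstract notes that no additional privacy budget is needed.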
Related papers
- Stratified Prediction-Powered Inference for Hybrid Language Model Evaluation [62.2436697657307]
Prediction-powered inference (PPI) is a method that improves statistical estimates based on limited human-labeled data by leveraging model predictions.
We propose a method called Stratified Prediction-Powered Inference (StratPPI).
We show that the basic PPI estimates can be considerably improved by employing simple data stratification strategies.
arXiv Detail & Related papers (2024-06-06T17:37:39Z)
- A Systematic and Formal Study of the Impact of Local Differential Privacy on Fairness: Preliminary Results [5.618541935188389]
Differential privacy (DP) is the predominant solution for privacy-preserving machine learning (ML) algorithms.
Recent experimental studies have shown that local DP can impact ML predictions differently across subgroups of individuals.
We study how the fairness of the decisions made by the ML model changes under local DP for different levels of privacy and data distributions.
arXiv Detail & Related papers (2024-05-23T15:54:03Z)
- Reduced-Rank Multi-objective Policy Learning and Optimization [57.978477569678844]
In practice, causal researchers do not have a single outcome in mind a priori.
In government-assisted social benefit programs, policymakers collect many outcomes to understand the multidimensional nature of poverty.
We present a data-driven dimensionality-reduction methodology for multiple outcomes in the context of optimal policy learning.
arXiv Detail & Related papers (2024-04-29T08:16:30Z)
- Privacy Amplification for the Gaussian Mechanism via Bounded Support [64.86780616066575]
Data-dependent privacy accounting frameworks such as per-instance differential privacy (pDP) and Fisher information loss (FIL) confer fine-grained privacy guarantees for individuals in a fixed training dataset.
We propose simple modifications of the Gaussian mechanism with bounded support, showing that they amplify privacy guarantees under data-dependent accounting.
arXiv Detail & Related papers (2024-03-07T21:22:07Z)
- Federated Experiment Design under Distributed Differential Privacy [31.06808163362162]
We focus on rigorously protecting users' privacy while minimizing the trust placed in service providers.
Although a vital component in modern A/B testing, private distributed experimentation has not previously been studied.
We show how these mechanisms can be scaled up to handle the very large number of participants commonly found in practice.
arXiv Detail & Related papers (2023-11-07T22:38:56Z)
- Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can reduce the variance penalty while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z)
- DP2-Pub: Differentially Private High-Dimensional Data Publication with Invariant Post Randomization [58.155151571362914]
We propose a differentially private high-dimensional data publication mechanism (DP2-Pub) that runs in two phases.
Splitting attributes into several low-dimensional clusters with high intra-cluster cohesion and low inter-cluster coupling helps obtain a reasonable privacy budget.
We also extend our DP2-Pub mechanism to the scenario with a semi-honest server which satisfies local differential privacy.
arXiv Detail & Related papers (2022-08-24T17:52:43Z)
- Differentially Private Estimation of Heterogeneous Causal Effects [9.355532300027727]
We introduce a general meta-algorithm for estimating conditional average treatment effects (CATE) with differential privacy guarantees.
Our meta-algorithm can work with simple, single-stage CATE estimators such as the S-learner and more complex multi-stage estimators such as the DR-learner and R-learner.
arXiv Detail & Related papers (2022-02-22T17:21:18Z)
- Post-processing of Differentially Private Data: A Fairness Perspective [53.29035917495491]
This paper shows that post-processing causes disparate impacts on individuals or groups.
It analyzes two critical settings: the release of differentially private datasets and the use of such private datasets for downstream decisions.
It proposes a novel post-processing mechanism that is (approximately) optimal under different fairness metrics.
arXiv Detail & Related papers (2022-01-24T02:45:03Z)
- Differential privacy and robust statistics in high dimensions [49.50869296871643]
High-dimensional Propose-Test-Release (HPTR) builds upon three crucial components: the exponential mechanism, robust statistics, and the Propose-Test-Release mechanism.
We show that HPTR nearly achieves the optimal sample complexity under several scenarios studied in the literature.
arXiv Detail & Related papers (2021-11-12T06:36:40Z)
- Combining Public and Private Data [7.975795748574989]
We introduce a mixed estimator of the mean optimized to minimize variance (a weighting sketch follows this list).
We argue that our mechanism is preferable to techniques that preserve the privacy of individuals by subsampling data proportionally to the privacy needs of users.
arXiv Detail & Related papers (2021-10-29T23:25:49Z)
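The last entry above describes a mixed public/private mean estimator but does not spell out its construction. The following is a hypothetical sketch using inverse-variance weighting of an unbiased public-sample mean and a Laplace-noised private mean; the names and variance plug-ins are illustrative assumptions, not the paper's exact estimator.

```python
import numpy as np

def mixed_mean(public, private, epsilon, lower, upper):
    public = np.clip(public, lower, upper)
    private = np.clip(private, lower, upper)

    # Sampling variance of the public-sample mean (plug-in estimate).
    var_pub = public.var(ddof=1) / len(public)

    # Private mean via the Laplace mechanism; Laplace noise with
    # scale b contributes variance 2 * b^2.
    sens = (upper - lower) / len(private)
    noisy_priv = private.mean() + np.random.laplace(scale=sens / epsilon)
    # Caveat: using the private sample's empirical variance directly
    # leaks information; a careful version would estimate it privately.
    var_priv = private.var(ddof=1) / len(private) + 2 * (sens / epsilon) ** 2

    # Inverse-variance weights minimize the variance of a convex
    # combination of independent unbiased estimates.
    w = (1 / var_pub) / (1 / var_pub + 1 / var_priv)
    return w * public.mean() + (1 - w) * noisy_priv
```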
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.