The Impact of Differential Privacy on Group Disparity Mitigation
- URL: http://arxiv.org/abs/2203.02745v1
- Date: Sat, 5 Mar 2022 13:55:05 GMT
- Title: The Impact of Differential Privacy on Group Disparity Mitigation
- Authors: Victor Petrén Bach Hansen, Atula Tejaswi Neerkaje, Ramit Sawhney, Lucie Flek, Anders Søgaard
- Abstract summary: We evaluate the impact of differential privacy on fairness across four tasks.
We train $(\varepsilon,\delta)$-differentially private models with empirical risk minimization.
We find that differential privacy increases between-group performance differences in the baseline setting.
But differential privacy reduces between-group performance differences in the robust setting.
- Score: 28.804933301007644
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The performance cost of differential privacy has, for some applications, been
shown to be higher for minority groups; fairness, conversely, has been shown to
disproportionally compromise the privacy of members of such groups. Most work
in this area has been restricted to computer vision and risk assessment. In
this paper, we evaluate the impact of differential privacy on fairness across
four tasks, focusing on how attempts to mitigate privacy violations and
between-group performance differences interact: Does privacy inhibit attempts
to ensure fairness? To this end, we train $(\varepsilon,\delta)$-differentially
private models with empirical risk minimization and group distributionally
robust training objectives. Consistent with previous findings, we find that
differential privacy increases between-group performance differences in the
baseline setting; but more interestingly, differential privacy reduces
between-group performance differences in the robust setting. We explain this by
reinterpreting differential privacy as regularization.
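To make the training setup concrete, here is a minimal sketch, assuming a simple NumPy logistic-regression model; the function names such as `dp_group_dro_step` are illustrative, not the authors' code. It shows how per-example gradient clipping and Gaussian noise, as in DP-SGD, can be combined with a group distributionally robust objective that up-weights the worst-performing group.

```python
import numpy as np

rng = np.random.default_rng(0)


def per_example_grads(w, X, y):
    """Per-example gradients of the logistic loss, shape (n, d)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return (p - y)[:, None] * X


def dp_group_dro_step(w, X, y, groups, q, lr=0.1, clip=1.0, sigma=1.0, eta_q=0.05):
    """One DP-SGD-style step with group-DRO reweighting (illustrative only).

    q: current weights over groups (the DRO adversary); clip: per-example l2
    clipping norm; sigma: noise multiplier. The exact (eps, delta) guarantee
    depends on sigma, the clipping norm, and the sampling scheme.
    """
    grads = per_example_grads(w, X, y)

    # DP-SGD ingredient 1: clip each per-example gradient to l2 norm <= clip.
    norms = np.linalg.norm(grads, axis=1, keepdims=True)
    grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))

    # Group-DRO ingredient: weight examples by their group's current weight.
    example_w = np.array([q[g] / np.sum(groups == g) for g in groups])
    weighted_grad = (example_w[:, None] * grads).sum(axis=0)

    # DP-SGD ingredient 2: add Gaussian noise calibrated to the clipping norm.
    noisy_grad = weighted_grad + rng.normal(0.0, sigma * clip / len(X), size=w.shape)
    w = w - lr * noisy_grad

    # Exponentiated-gradient update: shift group weights toward the worst group.
    p = 1.0 / (1.0 + np.exp(-X @ w))
    losses = -(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    for g in q:
        q[g] = q[g] * np.exp(eta_q * losses[groups == g].mean())
    total = sum(q.values())
    return w, {g: v / total for g, v in q.items()}


# Toy usage: two groups of unequal size.
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)
groups = np.array([0] * 150 + [1] * 50)
w, q = np.zeros(d), {0: 0.5, 1: 0.5}
for _ in range(100):
    w, q = dp_group_dro_step(w, X, y, groups, q)
```

Clipping bounds each example's contribution and the added noise provides the privacy guarantee; the DRO reweighting is the kind of worst-group emphasis whose interaction with that noise the paper studies.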
Related papers
- Masked Differential Privacy [64.32494202656801]
We propose an effective approach called masked differential privacy (DP), which allows for controlling sensitive regions where differential privacy is applied.
Our method operates selectively on data and allows for defining non-sensitive spatio-temporal regions without DP application or combining differential privacy with other privacy techniques within data samples.
arXiv Detail & Related papers (2024-10-22T15:22:53Z) - Privacy at a Price: Exploring its Dual Impact on AI Fairness [24.650648702853903]
We show that differential privacy in machine learning models can unequally impact separate demographic subgroups regarding prediction accuracy.
This leads to a fairness concern that manifests as biased performance.
Implementing gradient clipping in differentially private gradient descent can mitigate the negative impact of DP noise on fairness.
arXiv Detail & Related papers (2024-04-15T00:23:41Z) - Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can improve the variance loss while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z) - On Differentially Private Online Predictions [74.01773626153098]
We introduce an interactive variant of joint differential privacy towards handling online processes.
We demonstrate that it satisfies (suitable variants of) group privacy, composition, and post-processing.
We then study the cost of interactive joint privacy in the basic setting of online classification.
arXiv Detail & Related papers (2023-02-27T19:18:01Z) - Differential Privacy has Bounded Impact on Fairness in Classification [7.022004731560844]
We study the impact of differential privacy on fairness in classification.
We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model.
arXiv Detail & Related papers (2022-10-28T16:19:26Z) - Algorithms with More Granular Differential Privacy Guarantees [65.3684804101664]
We consider partial differential privacy (DP), which allows quantifying the privacy guarantee on a per-attribute basis.
In this work, we study several basic data analysis and learning tasks, and design algorithms whose per-attribute privacy parameter is smaller than the best possible privacy parameter for the entire record of a person.
arXiv Detail & Related papers (2022-09-08T22:43:50Z) - "You Can't Fix What You Can't Measure": Privately Measuring Demographic Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
arXiv Detail & Related papers (2022-06-24T09:46:43Z) - Differentially Private Deep Learning under the Fairness Lens [34.28936739262812]
Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems.
It allows one to measure and bound the risk associated with an individual's participation in a computation.
It was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals.
arXiv Detail & Related papers (2021-06-04T19:10:09Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy may, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural networks training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Private Reinforcement Learning with PAC and Regret Guarantees [69.4202374491817]
We design privacy-preserving exploration policies for episodic reinforcement learning (RL).
We first provide a meaningful privacy formulation using the notion of joint differential privacy (JDP).
We then develop a private optimism-based learning algorithm that simultaneously achieves strong PAC and regret bounds, and enjoys a JDP guarantee.
arXiv Detail & Related papers (2020-09-18T20:18:35Z) - Removing Disparate Impact of Differentially Private Stochastic Gradient Descent on Model Accuracy [18.69118059633505]
When differential privacy is enforced in machine learning, the utility-privacy trade-off differs across groups.
In this work, we analyze the inequality in utility loss caused by differential privacy and propose a modified differentially private stochastic gradient descent algorithm (DPSGD-F).
Our experimental evaluation shows how group sample size and group clipping bias affect the impact of differential privacy in DPSGD, and how adaptive clipping for each group helps to mitigate the disparate impact caused by differential privacy in DPSGD-F (a per-group adaptive clipping sketch follows this list).
arXiv Detail & Related papers (2020-03-08T02:06:15Z)
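For the last entry above (DPSGD-F), here is a minimal sketch of the per-group adaptive clipping idea, assuming NumPy arrays of per-example gradients; the rule for enlarging the clipping bound is a simplified stand-in, not the paper's exact formula.

```python
import numpy as np

rng = np.random.default_rng(1)


def adaptive_group_clip_step(w, grads, groups, base_clip=1.0, sigma=1.0, lr=0.1):
    """One noisy update with per-group adaptive clipping (illustrative sketch).

    grads: per-example gradients, shape (n, d); groups: group label per example.
    Groups whose gradients are clipped more often at the base bound receive a
    larger bound, so clipping bias is spread more evenly across groups.
    """
    norms = np.linalg.norm(grads, axis=1)
    clip_bounds = np.empty(len(grads))
    max_clip = base_clip
    for g in np.unique(groups):
        mask = groups == g
        frac_clipped = np.mean(norms[mask] > base_clip)  # share of clipped gradients in group g
        group_clip = base_clip * (1.0 + frac_clipped)    # enlarge the bound for heavily clipped groups
        clip_bounds[mask] = group_clip
        max_clip = max(max_clip, group_clip)

    # Clip each example to its group's bound, average, and add noise calibrated
    # to the largest bound (the worst-case l2 sensitivity of the averaged gradient).
    scale = np.minimum(1.0, clip_bounds / np.maximum(norms, 1e-12))
    avg_grad = (grads * scale[:, None]).mean(axis=0)
    noisy_grad = avg_grad + rng.normal(0.0, sigma * max_clip / len(grads), size=w.shape)
    return w - lr * noisy_grad


# Toy usage: gradients for a 150/50 split across two groups.
grads = rng.normal(size=(200, 5)) * np.where(np.arange(200) < 150, 1.0, 3.0)[:, None]
groups = np.array([0] * 150 + [1] * 50)
w_new = adaptive_group_clip_step(np.zeros(5), grads, groups)
```

Calibrating the noise to the largest per-group bound keeps a single sensitivity for the whole batch, while the per-group bounds let smaller or harder groups contribute less-biased gradients.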