Differentially Private Deep Learning under the Fairness Lens
- URL: http://arxiv.org/abs/2106.02674v1
- Date: Fri, 4 Jun 2021 19:10:09 GMT
- Title: Differentially Private Deep Learning under the Fairness Lens
- Authors: Cuong Tran, My H. Dinh, Ferdinando Fioretto
- Abstract summary: Differential Privacy (DP) is an important privacy-enhancing technology for private machine learning systems.
It allows one to measure and bound the risk associated with an individual's participation in a computation.
It was recently observed that DP learning systems may exacerbate bias and unfairness for different groups of individuals.
- Score: 34.28936739262812
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Differential Privacy (DP) is an important privacy-enhancing technology for
private machine learning systems. It allows one to measure and bound the risk
associated with an individual's participation in a computation. However, it was
recently observed that DP learning systems may exacerbate bias and unfairness
for different groups of individuals. This paper builds on these important
observations and sheds light on the causes of the disparate impacts arising in
the problem of differentially private empirical risk minimization. It focuses
on the accuracy disparity arising among groups of individuals in two
well-studied DP learning methods: output perturbation and differentially
private stochastic gradient descent. The paper analyzes which data and model
properties are responsible for the disproportionate impacts, why these aspects
affect different groups disproportionately, and proposes guidelines to
mitigate these effects. The proposed approach is evaluated on several datasets
and settings.
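For readers unfamiliar with the two mechanisms named above, the following minimal NumPy sketch illustrates them on a toy logistic-regression task. It is not the authors' code: the dataset, clipping norm, noise scales, and learning rates are illustrative assumptions. Output perturbation trains non-privately and then adds noise to the learned weights (in practice calibrated to the solution's sensitivity), while DP-SGD clips each per-example gradient to a norm bound and adds Gaussian noise at every update.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data (illustrative assumption): 256 examples, 5 features, binary labels.
X = rng.normal(size=(256, 5))
y = (X @ rng.normal(size=5) + 0.1 * rng.normal(size=256) > 0).astype(float)

def per_example_grad(w, xi, yi):
    """Logistic-loss gradient for a single example."""
    p = 1.0 / (1.0 + np.exp(-xi @ w))
    return (p - yi) * xi

def dp_sgd(X, y, clip=1.0, noise_mult=1.0, lr=0.1, epochs=5, batch=32):
    """DP-SGD sketch: clip each per-example gradient to norm `clip`,
    then add Gaussian noise to the summed gradient before averaging."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), n // batch):
            grads = np.stack([per_example_grad(w, X[i], y[i]) for i in idx])
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            clipped = grads / np.maximum(1.0, norms / clip)      # per-example clipping
            noise = rng.normal(scale=noise_mult * clip, size=d)  # Gaussian mechanism
            w -= lr * (clipped.sum(axis=0) + noise) / len(idx)
    return w

def output_perturbation(X, y, noise_scale=0.5, lr=0.1, epochs=50):
    """Output perturbation sketch: run plain gradient descent, then add noise
    to the final weights. `noise_scale` stands in for a sensitivity-calibrated
    scale (an assumption; the true scale depends on the loss and regularizer)."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        g = np.mean([per_example_grad(w, X[i], y[i]) for i in range(n)], axis=0)
        w -= lr * g
    return w + rng.normal(scale=noise_scale, size=d)

print("DP-SGD weights:           ", np.round(dp_sgd(X, y), 2))
print("Output-perturbed weights: ", np.round(output_perturbation(X, y), 2))
```

Per-example clipping bounds each individual's influence on an update; groups whose gradients are systematically larger are clipped more aggressively, which is the kind of data and model property the paper links to accuracy disparity.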
Related papers
- A Systematic and Formal Study of the Impact of Local Differential Privacy on Fairness: Preliminary Results [5.618541935188389]
Differential privacy (DP) is the predominant solution for privacy-preserving machine learning (ML) algorithms.
Recent experimental studies have shown that local DP can impact ML prediction for different subgroups of individuals.
We study how the fairness of the decisions made by the ML model changes under local DP for different levels of privacy and data distributions.
arXiv Detail & Related papers (2024-05-23T15:54:03Z) - Privacy at a Price: Exploring its Dual Impact on AI Fairness [24.650648702853903]
We show that differential privacy in machine learning models can unequally impact separate demographic subgroups regarding prediction accuracy.
This leads to a fairness concern, and manifests as biased performance.
We also find that implementing gradient clipping in differentially private gradient descent can mitigate the negative impact of DP noise on fairness.
arXiv Detail & Related papers (2024-04-15T00:23:41Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Post-processing of Differentially Private Data: A Fairness Perspective [53.29035917495491]
This paper shows that post-processing causes disparate impacts on individuals or groups.
It analyzes two critical settings: the release of differentially private datasets and the use of such private datasets for downstream decisions.
It proposes a novel post-processing mechanism that is (approximately) optimal under different fairness metrics.
arXiv Detail & Related papers (2022-01-24T02:45:03Z) - Partial sensitivity analysis in differential privacy [58.730520380312676]
We investigate the impact of each input feature on the individual's privacy loss.
We experimentally evaluate our approach on queries over private databases.
We also explore our findings in the context of neural network training on synthetic data.
arXiv Detail & Related papers (2021-09-22T08:29:16Z) - A Fairness Analysis on Private Aggregation of Teacher Ensembles [31.388212637482365]
The Private Aggregation of Teacher Ensembles (PATE) is an important private machine learning framework.
This paper asks whether this privacy-preserving framework introduces or exacerbates bias and unfairness.
It shows that PATE can introduce accuracy disparity among individuals and groups of individuals.
arXiv Detail & Related papers (2021-09-17T16:19:24Z) - Gradient Masking and the Underestimated Robustness Threats of
Differential Privacy in Deep Learning [0.0]
This paper experimentally evaluates the impact of training with Differential Privacy (DP) on model vulnerability against a broad range of adversarial attacks.
The results suggest that private models are less robust than their non-private counterparts, and that adversarial examples transfer better among DP models than between non-private and private ones.
arXiv Detail & Related papers (2021-05-17T16:10:54Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks, trained with differential privacy, in some settings might be even more vulnerable in comparison to non-private versions.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model.
arXiv Detail & Related papers (2020-12-14T18:59:24Z) - Differentially Private and Fair Deep Learning: A Lagrangian Dual
Approach [54.32266555843765]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
The method relies on the notion of differential privacy and the use of Lagrangian duality to design neural networks that can accommodate fairness constraints; a minimal sketch of this primal-dual idea appears after this list.
arXiv Detail & Related papers (2020-09-26T10:50:33Z) - Neither Private Nor Fair: Impact of Data Imbalance on Utility and
Fairness in Differential Privacy [5.416049433853457]
We study how different levels of imbalance in the data affect the accuracy and the fairness of the decisions made by the model.
We demonstrate that even small imbalances and loose privacy guarantees can cause disparate impacts.
arXiv Detail & Related papers (2020-09-10T18:35:49Z)
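As a companion to the Lagrangian dual entry above, here is a minimal primal-dual sketch of folding a fairness constraint into training via a multiplier. The demographic-parity gap constraint, tolerance, and step sizes are illustrative assumptions rather than that paper's exact formulation, and the DP noise addition is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data (illustrative assumption): features X, labels y, binary group attribute a.
n, d = 400, 4
X = rng.normal(size=(n, d))
a = (rng.random(n) < 0.5).astype(int)
y = ((X @ rng.normal(size=d) + 0.8 * a) > 0).astype(float)  # group-correlated labels

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lagrangian_fair_train(X, y, a, alpha=0.05, lr=0.1, lr_dual=0.5, epochs=200):
    """Primal-dual sketch: minimize logistic loss + lam * fairness violation,
    then update the multiplier lam by dual ascent on the violation."""
    w = np.zeros(X.shape[1])
    lam = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w)
        # Demographic-parity gap between groups (illustrative constraint).
        gap = p[a == 1].mean() - p[a == 0].mean()
        violation = gap**2 - alpha**2

        # Gradient of the logistic loss.
        g_loss = X.T @ (p - y) / len(y)
        # Gradient of the squared gap, using d sigmoid = p * (1 - p) * x.
        dp = (p * (1 - p))[:, None] * X
        g_gap = dp[a == 1].mean(axis=0) - dp[a == 0].mean(axis=0)
        g_con = 2 * gap * g_gap

        # Primal step on the Lagrangian, then dual ascent on the multiplier.
        w -= lr * (g_loss + lam * g_con)
        lam = max(0.0, lam + lr_dual * violation)
    return w, lam, gap

w, lam, gap = lagrangian_fair_train(X, y, a)
print(f"final multiplier {lam:.3f}, demographic-parity gap {gap:.3f}")
```

The multiplier grows only while the constraint is violated, so the fairness penalty is as strong as the data requires and vanishes once the gap falls below the tolerance.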
This list is automatically generated from the titles and abstracts of the papers in this site.