Differential Privacy has Bounded Impact on Fairness in Classification
- URL: http://arxiv.org/abs/2210.16242v3
- Date: Mon, 18 Sep 2023 10:25:49 GMT
- Title: Differential Privacy has Bounded Impact on Fairness in Classification
- Authors: Paul Mangold, Michaël Perrot, Aurélien Bellet, Marc Tommasi
- Abstract summary: We study the impact of differential privacy on fairness in classification.
We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model.
- Score: 7.022004731560844
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We theoretically study the impact of differential privacy on fairness in classification. We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model. This result is a consequence of a more general statement on accuracy conditioned on an arbitrary event (such as membership to a sensitive group), which may be of independent interest. We use this Lipschitz property to prove a non-asymptotic bound showing that, as the number of samples increases, the fairness level of private models gets closer to that of their non-private counterparts. This bound also highlights the role of a model's confidence margin in the disparate impact of differential privacy.
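To make the Lipschitz claim concrete, here is an illustrative reading in terms of the demographic parity gap; the symbols $F$, $L(\theta)$, $\theta_{\mathrm{priv}}$ and $\theta_{\mathrm{nonpriv}}$ are notation introduced here for illustration, and the exact fairness measures, norms and constants in the paper may differ.
\[ F(\theta) = \bigl| \Pr(h_\theta(X)=1 \mid S=1) - \Pr(h_\theta(X)=1 \mid S=0) \bigr| \]
\[ |F(\theta) - F(\theta')| \le L(\theta)\,\|\theta - \theta'\| \quad \text{(pointwise Lipschitz property around } \theta\text{)} \]
\[ |F(\theta_{\mathrm{priv}}) - F(\theta_{\mathrm{nonpriv}})| \le L(\theta_{\mathrm{nonpriv}})\,\|\theta_{\mathrm{priv}} - \theta_{\mathrm{nonpriv}}\| \]
Under this reading, the fairness gap between private and non-private models is controlled by how far the privacy mechanism moves the parameters, which shrinks as the number of samples grows; per the abstract, the local constant $L(\cdot)$ is governed by the model's confidence margin.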
Related papers
- A Systematic and Formal Study of the Impact of Local Differential Privacy on Fairness: Preliminary Results [5.618541935188389]
Differential privacy (DP) is the predominant solution for privacy-preserving machine learning (ML) algorithms.
Recent experimental studies have shown that local DP can impact ML prediction for different subgroups of individuals.
We study how the fairness of the decisions made by the ML model changes under local DP for different levels of privacy and data distributions.
arXiv Detail & Related papers (2024-05-23T15:54:03Z) - On the Impact of Output Perturbation on Fairness in Binary Linear
Classification [0.3916094706589679]
We study how differential privacy interacts with both individual and group fairness in binary linear classification.
We derive high-probability bounds on the level of individual and group fairness that the perturbed models can achieve.
arXiv Detail & Related papers (2024-02-05T13:50:08Z) - Causal Inference with Differentially Private (Clustered) Outcomes [16.166525280886578]
Estimating causal effects from randomized experiments is only feasible if participants agree to reveal their responses.
We suggest a new differential privacy mechanism, Cluster-DP, which leverages any given cluster structure.
We show that, depending on an intuitive measure of cluster quality, we can reduce the loss in variance while maintaining our privacy guarantees.
arXiv Detail & Related papers (2023-08-02T05:51:57Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups defined by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data, without a given causal model, by proposing a novel framework, CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - When Does Differentially Private Learning Not Suffer in High Dimensions? [43.833397016656704]
Large pretrained models can be privately fine-tuned to achieve performance approaching that of non-private models.
This seemingly contradicts known results on the model-size dependence of differentially private convex learning.
arXiv Detail & Related papers (2022-07-01T02:36:51Z) - "You Can't Fix What You Can't Measure": Privately Measuring Demographic
Performance Disparities in Federated Learning [78.70083858195906]
We propose differentially private mechanisms to measure differences in performance across groups while protecting the privacy of group membership.
Our results show that, contrary to what prior work suggested, protecting privacy is not necessarily in conflict with identifying performance disparities of federated models.
arXiv Detail & Related papers (2022-06-24T09:46:43Z) - SF-PATE: Scalable, Fair, and Private Aggregation of Teacher Ensembles [50.90773979394264]
This paper studies a model that protects the privacy of individuals' sensitive information while also learning non-discriminatory predictors.
A key characteristic of the proposed model is that it enables the adoption of off-the-shelf, non-private fair models to create a privacy-preserving and fair model.
arXiv Detail & Related papers (2022-04-11T14:42:54Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - The Impact of Differential Privacy on Group Disparity Mitigation [28.804933301007644]
We evaluate the impact of differential privacy on fairness across four tasks.
We train $(\varepsilon,\delta)$-differentially private models with empirical risk minimization.
We find that differential privacy increases between-group performance differences in the baseline setting, but reduces them in the robust setting.
arXiv Detail & Related papers (2022-03-05T13:55:05Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - Robustness Threats of Differential Privacy [70.818129585404]
We experimentally demonstrate that networks trained with differential privacy can, in some settings, be even more vulnerable than their non-private counterparts.
We study how the main ingredients of differentially private neural network training, such as gradient clipping and noise addition, affect the robustness of the model (see the sketch after this list).
arXiv Detail & Related papers (2020-12-14T18:59:24Z)
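Several entries above (notably the group-disparity and robustness studies) refer to $(\varepsilon,\delta)$-differentially private training via per-example gradient clipping and noise addition. The sketch below is a minimal, framework-free illustration of one such DP-SGD update; the function name dp_sgd_step, its parameters, and the use of NumPy are assumptions made here for illustration, not the training setups used in those papers.

```python
import numpy as np

def dp_sgd_step(params, per_example_grads, clip_norm=1.0, noise_multiplier=1.0,
                learning_rate=0.1, rng=None):
    """One illustrative DP-SGD update: clip per-example gradients, add Gaussian noise.

    per_example_grads has shape (batch_size, num_params): one gradient per example.
    noise_multiplier (sigma) together with clip_norm calibrates the Gaussian noise.
    """
    rng = rng or np.random.default_rng()
    batch_size = per_example_grads.shape[0]

    # Clip each example's gradient to L2 norm at most clip_norm (bounds per-example sensitivity).
    norms = np.linalg.norm(per_example_grads, axis=1, keepdims=True)
    clipped = per_example_grads * np.minimum(1.0, clip_norm / np.maximum(norms, 1e-12))

    # Sum clipped gradients and add isotropic Gaussian noise with std = noise_multiplier * clip_norm.
    noisy_sum = clipped.sum(axis=0) + rng.normal(scale=noise_multiplier * clip_norm,
                                                 size=params.shape)

    # Average over the batch and take a gradient step.
    return params - learning_rate * noisy_sum / batch_size
```

With a standard privacy accountant, the noise multiplier, clipping norm, sampling rate, and number of steps translate into concrete $(\varepsilon,\delta)$ guarantees; the papers above study how this clipping and noise interact with group disparities and robustness.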
This list is automatically generated from the titles and abstracts of the papers on this site.