On the Impact of Output Perturbation on Fairness in Binary Linear
Classification
- URL: http://arxiv.org/abs/2402.03011v1
- Date: Mon, 5 Feb 2024 13:50:08 GMT
- Title: On the Impact of Output Perturbation on Fairness in Binary Linear
Classification
- Authors: Vitalii Emelianov, Michaël Perrot
- Abstract summary: We study how differential privacy interacts with both individual and group fairness in binary linear classification.
We derive high-probability bounds on the level of individual and group fairness that the perturbed models can achieve.
- Score: 0.3916094706589679
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We theoretically study how differential privacy interacts with both
individual and group fairness in binary linear classification. More precisely,
we focus on the output perturbation mechanism, a classic approach in
privacy-preserving machine learning. We derive high-probability bounds on the
level of individual and group fairness that the perturbed models can achieve
compared to the original model. For individual fairness, we prove that
the impact of output perturbation on the level of fairness is bounded but grows
with the dimension of the model. For group fairness, we show that this impact
is determined by the distribution of so-called angular margins, that is, the signed
margins of the non-private model re-scaled by the norm of each example.
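As a concrete illustration of the setting, the following is a minimal sketch of the output perturbation mechanism for a binary linear classifier, together with the angular-margin quantity y_i * <w, x_i> / ||x_i|| described above. The toy data, the Gaussian noise scale, and the demographic-parity gap used as the group fairness measure are illustrative assumptions, not the paper's exact construction or bounds.

```python
# Sketch: output perturbation for a binary linear classifier, plus the
# "angular margins" referenced in the abstract (assumed noise calibration).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: features X, labels y in {-1, +1}, binary sensitive attribute s.
n, d = 2000, 10
X = rng.normal(size=(n, d))
s = rng.integers(0, 2, size=n)
y = np.where(X[:, 0] + 0.5 * s + 0.1 * rng.normal(size=n) > 0, 1, -1)

# Non-private linear model (no intercept, so the model is just a weight vector w).
clf = LogisticRegression(fit_intercept=False).fit(X, y)
w = clf.coef_.ravel()

# Output perturbation: release w + Gaussian noise instead of w.
sigma = 0.5  # assumed noise scale; in practice calibrated to the privacy budget
w_priv = w + rng.normal(scale=sigma, size=d)

# Angular margins: signed margins of the non-private model re-scaled by the
# norm of each example, i.e. y_i * <w, x_i> / ||x_i||.
angular_margins = y * (X @ w) / np.linalg.norm(X, axis=1)

def demographic_parity_gap(weights):
    """|P(pred = +1 | s = 0) - P(pred = +1 | s = 1)| for sign(<weights, x>)."""
    pred = (X @ weights) > 0
    return abs(pred[s == 0].mean() - pred[s == 1].mean())

# Examples whose angular margin is comparable to the noise scale are the ones
# whose predictions the perturbation is likely to flip.
print("share of examples with small angular margin:",
      (np.abs(angular_margins) < sigma).mean())
print("demographic parity gap, non-private:", demographic_parity_gap(w))
print("demographic parity gap, perturbed:  ", demographic_parity_gap(w_priv))
```

In this sketch, the more mass the angular-margin distribution places near zero, the more predictions the added noise can flip, which loosely matches the intuition behind the group fairness result stated above.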
Related papers
- Fairness-Aware Meta-Learning via Nash Bargaining [63.44846095241147]
We introduce a two-stage meta-learning framework to address issues of group-level fairness in machine learning.
The first stage involves the use of a Nash Bargaining Solution (NBS) to resolve hypergradient conflicts and steer the model.
We show empirical effects across various fairness objectives on six key fairness datasets and two image classification tasks.
arXiv Detail & Related papers (2024-06-11T07:34:15Z) - Towards Cohesion-Fairness Harmony: Contrastive Regularization in
Individual Fair Graph Clustering [5.255750357176021]
iFairNMTF is an individual-fairness Nonnegative Matrix Tri-Factorization model with contrastive fairness regularization.
Our model allows for customizable accuracy-fairness trade-offs, thereby enhancing user autonomy.
arXiv Detail & Related papers (2024-02-16T15:25:56Z) - Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against subgroups defined by protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z) - FairDP: Certified Fairness with Differential Privacy [59.56441077684935]
This paper introduces FairDP, a novel mechanism designed to achieve certified fairness with differential privacy (DP).
FairDP independently trains models for distinct individual groups, using group-specific clipping terms to assess and bound the disparate impacts of DP (a rough sketch of this per-group training idea appears after the list below).
Extensive theoretical and empirical analyses validate the efficacy of FairDP and its improved trade-offs between model utility, privacy, and fairness compared with existing methods.
arXiv Detail & Related papers (2023-05-25T21:07:20Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - Differential Privacy has Bounded Impact on Fairness in Classification [7.022004731560844]
We study the impact of differential privacy on fairness in classification.
We prove that, given a class of models, popular group fairness measures are pointwise Lipschitz-continuous with respect to the parameters of the model.
arXiv Detail & Related papers (2022-10-28T16:19:26Z) - Fair Inference for Discrete Latent Variable Models [12.558187319452657]
Machine learning models, trained on data without due care, often exhibit unfair and discriminatory behavior against certain populations.
We develop a fair variational inference technique for discrete latent variables, accomplished by including a fairness penalty on the variational distribution.
To demonstrate the generality of our approach and its potential for real-world impact, we then develop a special-purpose graphical model for criminal justice risk assessments.
arXiv Detail & Related papers (2022-09-15T04:54:21Z) - Characterizing Fairness Over the Set of Good Models Under Selective
Labels [69.64662540443162]
We develop a framework for characterizing predictive fairness properties over the set of models that deliver similar overall performance.
We provide tractable algorithms to compute the range of attainable group-level predictive disparities.
We extend our framework to address the empirically relevant challenge of selectively labelled data.
arXiv Detail & Related papers (2021-01-02T02:11:37Z) - Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z) - Fairness by Explicability and Adversarial SHAP Learning [0.0]
We propose a new definition of fairness that emphasises the role of an external auditor and model explicability.
We develop a framework for mitigating model bias using regularizations constructed from the SHAP values of an adversarial surrogate model.
We demonstrate our approaches using gradient and adaptive boosting on a synthetic dataset, the UCI Adult (Census) dataset, and a real-world credit scoring dataset.
arXiv Detail & Related papers (2020-03-11T14:36:34Z) - Counterfactual fairness: removing direct effects through regularization [0.0]
We propose a new definition of fairness that incorporates causality through the Controlled Direct Effect (CDE).
We develop regularizations to tackle classical fairness measures and present a causal regularization that satisfies our new fairness definition.
Our approach was found to mitigate unfairness in the predictions with only small reductions in model performance.
arXiv Detail & Related papers (2020-02-25T10:13:55Z)
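As referenced in the FairDP entry above, the following is a rough, hypothetical sketch of per-group differentially private training with group-specific gradient clipping, written only from the one-sentence summary in the listing; the clipping norms, noise scale, optimizer, and the absence of any aggregation or certification step are assumptions, not the paper's actual algorithm.

```python
# Hypothetical per-group DP-SGD-style training for logistic regression with
# group-specific per-example gradient clipping (an illustrative reading of the
# FairDP summary, not the published algorithm).
import numpy as np

def train_per_group(X, y, s, clip_norms, sigma=1.0, lr=0.1, steps=200, seed=0):
    """Train one noisy linear model per sensitive group g, clipping per-example
    gradients to clip_norms[g] before averaging and adding Gaussian noise."""
    rng = np.random.default_rng(seed)
    models = {}
    for g, clip in clip_norms.items():
        Xg, yg = X[s == g], y[s == g]  # data of group g, labels in {-1, +1}
        w = np.zeros(X.shape[1])
        for _ in range(steps):
            margins = yg * (Xg @ w)
            # Per-example logistic-loss gradients: -y_i x_i / (1 + exp(y_i <w, x_i>)).
            grads = (-yg / (1.0 + np.exp(margins)))[:, None] * Xg
            # Group-specific clipping, then Gaussian noise on the averaged gradient.
            norms = np.linalg.norm(grads, axis=1, keepdims=True)
            grads = grads * np.minimum(1.0, clip / np.maximum(norms, 1e-12))
            noisy_grad = grads.mean(axis=0) + rng.normal(
                scale=sigma * clip / len(Xg), size=w.shape)
            w -= lr * noisy_grad
        models[g] = w
    return models

# Example call with hypothetical group-specific clipping norms:
# models = train_per_group(X, y, s, clip_norms={0: 1.0, 1: 0.5})
```

A larger clipping norm for one group keeps more of that group's gradient signal at the cost of proportionally more noise, which is one way group-specific clipping can trade off utility against the disparate impact of DP.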
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.