How Biased are Your Features?: Computing Fairness Influence Functions
with Global Sensitivity Analysis
- URL: http://arxiv.org/abs/2206.00667v3
- Date: Mon, 3 Jul 2023 00:07:10 GMT
- Authors: Bishwamittra Ghosh, Debabrota Basu, Kuldeep S. Meel
- Abstract summary: Fairness in machine learning has attracted significant focus due to its widespread application in high-stakes decision-making tasks.
We introduce the Fairness Influence Function (FIF), which decomposes bias into contributions from individual features and from intersections of multiple features.
Experiments demonstrate that FairXplainer captures FIFs of both individual and intersectional features, provides a better approximation of bias based on FIFs, shows a higher correlation of FIFs with fairness interventions, and detects changes in bias due to fairness affirmative/punitive actions in the classifier.
- Score: 38.482411134083236
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Fairness in machine learning has attracted significant focus due to its
widespread application in high-stakes decision-making tasks. Unregulated machine
learning classifiers can exhibit bias towards certain demographic groups in
data, so the quantification and mitigation of classifier bias are central
concerns in fairness in machine learning. In this paper, we aim to quantify the
influence of different features in a dataset on the bias of a classifier. To do
this, we introduce the Fairness Influence Function (FIF), which decomposes bias
into contributions from individual features and from intersections of multiple
features. The key idea is to represent existing group fairness metrics as the
difference of scaled conditional variances in the classifier's prediction and
to apply a variance decomposition from global sensitivity analysis. To estimate
FIFs, we instantiate an algorithm, FairXplainer, that applies the variance
decomposition of the classifier's prediction using local regression.
Experiments demonstrate that FairXplainer captures FIFs of both individual and
intersectional features, provides a better approximation of bias based on FIFs,
shows a higher correlation of FIFs with fairness interventions, and detects
changes in bias due to fairness affirmative/punitive actions in the classifier.
The code is available at https://github.com/ReAILe/bias-explainer.
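As a rough illustration of the variance-decomposition idea behind FIFs, the sketch below estimates first-order (single-feature) contributions Var(E[f(X) | X_i]) to the variance of a classifier's predictions within each sensitive group via simple quantile binning, then attributes the between-group disparity to individual features. This is only a sketch of the underlying global-sensitivity-analysis machinery, not the FairXplainer implementation (which uses local regression and also captures intersectional terms); all function and variable names are hypothetical.

```python
# Hypothetical sketch of the FIF idea (not the authors' FairXplainer):
# decompose the variance of a classifier's predictions within each
# sensitive group into first-order, per-feature contributions
# Var(E[preds | X_i]), estimated by quantile binning, and attribute
# the between-group disparity to individual features.
import numpy as np

def first_order_contributions(X, preds, n_bins=10):
    """First-order contribution of each feature to Var(preds):
    Var(E[preds | X_i]), estimated by quantile binning of X_i."""
    contribs = np.zeros(X.shape[1])
    overall_mean = preds.mean()
    for i in range(X.shape[1]):
        edges = np.quantile(X[:, i], np.linspace(0, 1, n_bins + 1))
        bin_ids = np.digitize(X[:, i], edges[1:-1])  # values in 0..n_bins-1
        for b in np.unique(bin_ids):
            mask = bin_ids == b
            # P(bin) * squared deviation of the conditional mean
            contribs[i] += mask.mean() * (preds[mask].mean() - overall_mean) ** 2
    return contribs

def fif_first_order_sketch(X, preds, group):
    """Per-feature influence on the disparity in prediction variance
    between sensitive groups 0 and 1 (first-order terms only).
    preds: e.g. model.predict_proba(X)[:, 1]; group: values in {0, 1}."""
    c0 = first_order_contributions(X[group == 0], preds[group == 0])
    c1 = first_order_contributions(X[group == 1], preds[group == 1])
    return c1 - c0
```

Summing the returned per-feature terms approximates the difference of the within-group prediction variances up to higher-order interaction terms, mirroring how the paper's FIFs sum to the bias of the classifier.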
Related papers
- Balancing Fairness and Accuracy in Data-Restricted Binary Classification [14.439413517433891]
This paper proposes a framework that models the trade-off between accuracy and fairness under four practical scenarios.
Experiments on three datasets demonstrate the utility of the proposed framework as a tool for quantifying the trade-offs.
arXiv Detail & Related papers (2024-03-12T15:01:27Z)
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness (a minimal sketch of the min-max idea appears after this list).
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Learning Informative Representation for Fairness-aware Multivariate Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables widely exists in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z)
- Algorithmic Fairness Verification with Graphical Models [24.8005399877574]
We propose an efficient fairness verifier, called FVGM, that encodes correlations among features as a Bayesian network.
We show that FVGM leads to an accurate and scalable assessment for more diverse families of fairness-enhancing algorithms.
arXiv Detail & Related papers (2021-09-20T12:05:14Z)
- Learning Debiased Representation via Disentangled Feature Augmentation [19.348340314001756]
This paper presents an empirical analysis revealing that training with "diverse" bias-conflicting samples is crucial for debiasing.
We propose a novel feature-level data augmentation technique in order to synthesize diverse bias-conflicting samples.
arXiv Detail & Related papers (2021-07-03T08:03:25Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Evaluating Fairness of Machine Learning Models Under Uncertain and Incomplete Information [25.739240011015923]
We show that the test accuracy of the attribute classifier is not always correlated with its effectiveness in bias estimation for a downstream model.
Our analysis has surprising and counter-intuitive implications: in certain regimes, one might want to distribute the error of the attribute classifier as unevenly as possible.
arXiv Detail & Related papers (2021-02-16T19:02:55Z)
- Deep F-measure Maximization for End-to-End Speech Understanding [52.36496114728355]
We propose a differentiable approximation to the F-measure and train the network with this objective using standard backpropagation.
We perform experiments on two standard fairness datasets, Adult and Communities and Crime, as well as on speech-to-intent detection on the ATIS dataset and speech-to-image concept classification on the Speech-COCO dataset.
In all four tasks, the F-measure objective yields improved micro-F1 scores, with absolute improvements of up to 8% compared to models trained with the cross-entropy loss function (a soft-F1 sketch appears after this list).
arXiv Detail & Related papers (2020-08-08T03:02:27Z)
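For the min-max F-divergence entry above, here is a rough, hypothetical sketch of how such a regularizer can be wired up: an auxiliary critic maximizes a variational lower bound of the KL divergence between group-conditional prediction scores (the f-GAN bound), while the classifier minimizes task loss plus that estimated divergence. All names (clf, critic, lam) are illustrative, and the paper's exact formulation may differ.

```python
# Hypothetical sketch of min-max f-divergence fairness regularization:
# a critic maximizes a variational lower bound of KL(P || Q) between
# group-conditional prediction scores (f-GAN bound for KL:
# E_P[T] - E_Q[exp(T - 1)]), while the classifier minimizes task loss
# plus that estimated divergence. Assumes both groups occur per batch.
import torch
import torch.nn as nn

clf = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
critic = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))
opt_clf = torch.optim.Adam(clf.parameters(), lr=1e-3)
opt_critic = torch.optim.Adam(critic.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.5  # fairness/accuracy trade-off weight

def kl_lower_bound(scores_p, scores_q):
    # f-GAN variational lower bound for KL(P || Q)
    return critic(scores_p).mean() - torch.exp(critic(scores_q) - 1).mean()

def train_step(x, y, a):
    """x: features (N, 10); y: float targets (N, 1); a: group mask (N,)."""
    # max step: tighten the divergence estimate on detached scores
    with torch.no_grad():
        s = torch.sigmoid(clf(x))
    opt_critic.zero_grad()
    (-kl_lower_bound(s[a == 0], s[a == 1])).backward()
    opt_critic.step()

    # min step: trade classification accuracy against estimated bias
    opt_clf.zero_grad()
    logits = clf(x)
    s = torch.sigmoid(logits)
    loss = bce(logits, y) + lam * kl_lower_bound(s[a == 0], s[a == 1])
    loss.backward()
    opt_clf.step()
    return loss.item()
```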
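For the F-measure maximization entry, a minimal sketch of the standard "soft F1" surrogate such approaches build on: hard true/false positive counts are replaced with sums of predicted probabilities, making the objective differentiable and trainable by standard backpropagation. The paper's exact approximation may differ; the function name is illustrative.

```python
# Minimal "soft F1" sketch: soften TP/FP/FN counts with predicted
# probabilities so the F-measure admits gradients.
import torch

def soft_f1_loss(probs, targets, eps=1e-8):
    """probs: predicted probabilities in [0, 1]; targets: {0, 1} labels."""
    tp = (probs * targets).sum()          # soft true positives
    fp = (probs * (1 - targets)).sum()    # soft false positives
    fn = ((1 - probs) * targets).sum()    # soft false negatives
    f1 = 2 * tp / (2 * tp + fp + fn + eps)
    return 1.0 - f1                       # minimize 1 - F1 to maximize F1

# Usage: probs = torch.sigmoid(model(x)); loss = soft_f1_loss(probs, y.float())
```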