Measuring Fairness of Text Classifiers via Prediction Sensitivity
- URL: http://arxiv.org/abs/2203.08670v1
- Date: Wed, 16 Mar 2022 15:00:33 GMT
- Title: Measuring Fairness of Text Classifiers via Prediction Sensitivity
- Authors: Satyapriya Krishna, Rahul Gupta, Apurv Verma, Jwala Dhamala, Yada
Pruksachatkun, Kai-Wei Chang
- Abstract summary: ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
- Score: 63.56554964580627
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the rapid growth in language processing applications, fairness has
emerged as an important consideration in data-driven solutions. Although
various fairness definitions have been explored in the recent literature, there
is a lack of consensus on which metrics most accurately reflect the fairness of a
system. In this work, we propose a new formulation: ACCUMULATED PREDICTION
SENSITIVITY, which measures fairness in machine learning models based on the
model's prediction sensitivity to perturbations in input features. The metric
attempts to quantify the extent to which a single prediction depends on a
protected attribute, where the protected attribute encodes the membership
status of an individual in a protected group. We show that the metric can be
theoretically linked with a specific notion of group fairness (statistical
parity) and individual fairness. It also correlates well with humans'
perception of fairness. We conduct experiments on two text classification
datasets: JIGSAW TOXICITY and BIAS IN BIOS, and evaluate the correlations
between metrics and manual annotations on whether the model produced a fair
outcome. We observe that the proposed fairness metric based on prediction
sensitivity is statistically significantly more correlated with human
annotation than the existing counterfactual fairness metric.
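For intuition, the sketch below shows one way a gradient-based prediction sensitivity score of this kind could be computed for a differentiable classifier, alongside the statistical parity gap that group fairness refers to. This is a minimal illustration, not the paper's implementation: the PyTorch framing, the function names, and the assumption that the protected attribute is an explicit input feature index are simplifications made here, and the paper's accumulated formulation may weight features and output classes differently.

```python
# Hypothetical sketch (not the authors' code): gradient-based prediction
# sensitivity of a differentiable classifier to a protected-attribute feature.
import torch
import torch.nn as nn


def prediction_sensitivity(model: nn.Module, x: torch.Tensor,
                           protected_idx: int) -> torch.Tensor:
    """Accumulate |d f_c(x) / d x_protected| over output classes c.

    model          -- differentiable classifier returning class probabilities
    x              -- a single encoded feature vector, shape (num_features,);
                      for text, this stands in for an embedded representation
    protected_idx  -- index of the feature encoding protected-group membership
    """
    x = x.clone().detach().requires_grad_(True)
    probs = model(x.unsqueeze(0)).squeeze(0)  # shape (num_classes,)
    sensitivity = torch.zeros(())
    for c in range(probs.shape[0]):
        # Gradient of class-c probability w.r.t. the input features.
        grad = torch.autograd.grad(probs[c], x, retain_graph=True)[0]
        sensitivity = sensitivity + grad[protected_idx].abs()
    return sensitivity


def statistical_parity_gap(y_hat: torch.Tensor, a: torch.Tensor) -> float:
    """|P(y_hat = 1 | a = 0) - P(y_hat = 1 | a = 1)| over a batch of predictions."""
    rate_0 = y_hat[a == 0].float().mean()
    rate_1 = y_hat[a == 1].float().mean()
    return (rate_0 - rate_1).abs().item()
```

A low sensitivity value indicates the prediction barely changes when the protected-attribute feature is perturbed, which is the intuition the metric builds on; the paper's contribution is relating an accumulated version of this quantity to statistical parity and individual fairness.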
Related papers
- The Fragility of Fairness: Causal Sensitivity Analysis for Fair Machine Learning [34.50562695587344]
We adapt tools from causal sensitivity analysis to the FairML context.
We analyze the sensitivity of the most common parity metrics under 3 varieties of classifier.
We show that causal sensitivity analysis provides a powerful and necessary toolkit for gauging the informativeness of parity metric evaluations.
arXiv Detail & Related papers (2024-10-12T17:28:49Z)
- Editable Fairness: Fine-Grained Bias Mitigation in Language Models [52.66450426729818]
We propose a novel debiasing approach, Fairness Stamp (FAST), which enables fine-grained calibration of individual social biases.
FAST surpasses state-of-the-art baselines with superior debiasing performance.
This highlights the potential of fine-grained debiasing strategies to achieve fairness in large language models.
arXiv Detail & Related papers (2024-08-07T17:14:58Z)
- Causal Fair Metric: Bridging Causality, Individual Fairness, and Adversarial Robustness [7.246701762489971]
Adversarial perturbation, used to identify vulnerabilities in models, and individual fairness, aiming for equitable treatment of similar individuals, both depend on metrics to generate comparable input data instances.
Previous attempts to define such joint metrics often lack general assumptions about data or structural causal models and are unable to reflect counterfactual proximity.
This paper introduces a causal fair metric formulated based on causal structures encompassing sensitive attributes and protected causal perturbation.
arXiv Detail & Related papers (2023-10-30T09:53:42Z)
- Learning for Counterfactual Fairness from Observational Data [62.43249746968616]
Fairness-aware machine learning aims to eliminate biases of learning models against certain subgroups described by certain protected (sensitive) attributes such as race, gender, and age.
A prerequisite for existing methods to achieve counterfactual fairness is the prior human knowledge of the causal model for the data.
In this work, we address the problem of counterfactually fair prediction from observational data without given causal models by proposing a novel framework CLAIRE.
arXiv Detail & Related papers (2023-07-17T04:08:29Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- On the Intrinsic and Extrinsic Fairness Evaluation Metrics for Contextualized Language Representations [74.70957445600936]
Multiple metrics have been introduced to measure fairness in various natural language processing tasks.
These metrics can be roughly divided into two categories: 1) extrinsic metrics for evaluating fairness in downstream applications and 2) intrinsic metrics for estimating fairness in upstream language representation models.
arXiv Detail & Related papers (2022-03-25T22:17:43Z)
- Prediction Sensitivity: Continual Audit of Counterfactual Fairness in Deployed Classifiers [2.0625936401496237]
Traditional group fairness metrics can miss discrimination against individuals and are difficult to apply after deployment.
We present prediction sensitivity, an approach for continual audit of counterfactual fairness in deployed classifiers.
Our empirical results demonstrate that prediction sensitivity is effective for detecting violations of counterfactual fairness.
arXiv Detail & Related papers (2022-02-09T15:06:45Z)
- Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
arXiv Detail & Related papers (2021-10-18T13:40:25Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Robust Fairness under Covariate Shift [11.151913007808927]
Making predictions that are fair with regard to protected group membership has become an important requirement for classification algorithms.
We propose an approach that obtains a predictor that is robust to the worst case in terms of target performance.
arXiv Detail & Related papers (2020-10-11T04:42:01Z)