Bursting the Burden Bubble? An Assessment of Sharma et al.'s
Counterfactual-based Fairness Metric
- URL: http://arxiv.org/abs/2211.11512v1
- Date: Mon, 21 Nov 2022 14:54:45 GMT
- Title: Bursting the Burden Bubble? An Assessment of Sharma et al.'s
Counterfactual-based Fairness Metric
- Authors: Yochem van Rosmalen, Florian van der Steen, Sebastiaan Jans, Daan van
der Weijden
- Abstract summary: We show that Burden can reveal unfairness where statistical parity cannot, and that the two metrics can even disagree on which group is treated unfairly.
We conclude that Burden is a valuable metric, but it does not replace statistical parity; rather, it is valuable to use both.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Machine learning has seen an increase in negative publicity in recent years,
due to biased, unfair, and uninterpretable models. There is a rising interest
in making machine learning models more fair for unprivileged communities, such
as women or people of color. Metrics are needed to evaluate the fairness of a
model. A novel metric for evaluating fairness between groups is Burden, which
uses counterfactuals to approximate the average distance of negatively
classified individuals in a group to the decision boundary of the model. The
goal of this study is to compare Burden to statistical parity, a well-known
fairness metric, and discover Burden's advantages and disadvantages. We do this
by calculating the Burden and statistical parity of a sensitive attribute in
three datasets: two synthetic datasets are created to display differences
between the two metrics, and one real-world dataset is used. We show that
Burden can reveal unfairness where statistical parity cannot, and that the two
metrics can even disagree on which group is treated unfairly. We conclude that
Burden is a valuable metric, but it does not replace statistical parity;
rather, it is valuable to use both.
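As a rough, illustrative sketch of how the two metrics compared here can be computed: the statistical parity difference is the gap in positive-prediction rates between the groups, while a Burden-style score averages the distance of a group's negatively classified individuals to the decision boundary. The toy data, the logistic-regression model, and the use of the exact distance to a linear boundary (in place of Sharma et al.'s counterfactual-based approximation) are assumptions made only for this sketch.

```python
# Minimal sketch, not the paper's implementation: statistical parity and a
# Burden-style score on synthetic data. The exact distance to a linear
# decision boundary stands in for the counterfactual-based distance
# approximation used in Sharma et al.'s Burden metric.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: two features, binary labels, binary sensitive attribute s.
n = 1000
s = rng.integers(0, 2, size=n)                  # 0 = group A, 1 = group B
X = rng.normal(loc=s[:, None] * 0.5, scale=1.0, size=(n, 2))
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)

clf = LogisticRegression().fit(X, y)
y_hat = clf.predict(X)

# Statistical parity difference: P(y_hat = 1 | s = 0) - P(y_hat = 1 | s = 1).
spd = y_hat[s == 0].mean() - y_hat[s == 1].mean()

# Burden-style score per group: average distance of negatively classified
# individuals to the decision boundary (exact for a linear model).
w, b = clf.coef_[0], clf.intercept_[0]
dist_to_boundary = np.abs(X @ w + b) / np.linalg.norm(w)

def burden(group):
    negatives = (y_hat == 0) & (s == group)
    return dist_to_boundary[negatives].mean()

print(f"statistical parity difference: {spd:.3f}")
print(f"Burden, group A: {burden(0):.3f}  Burden, group B: {burden(1):.3f}")
```

A larger Burden for a group means its rejected members would, on average, need larger changes to reach the decision boundary. The paper's synthetic datasets are constructed precisely so that readings like these diverge: Burden can flag unfairness when the statistical parity difference is near zero, and the two metrics can even point at different groups.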
Related papers
- "Patriarchy Hurts Men Too." Does Your Model Agree? A Discussion on Fairness Assumptions [3.706222947143855]
In the context of group fairness, this approach often obscures implicit assumptions about how bias is introduced into the data.
A common implicit assumption is that the biasing process is a monotonic function of the fair scores, dependent solely on the sensitive attribute.
If the biasing process is more complex than a mere monotonic transform, such implicit assumptions need to be identified and rejected.
arXiv Detail & Related papers (2024-08-01T07:06:30Z) - Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - Gender Biases in Automatic Evaluation Metrics for Image Captioning [87.15170977240643]
We conduct a systematic study of gender biases in model-based evaluation metrics for image captioning tasks.
We demonstrate the negative consequences of using these biased metrics, including the inability to differentiate between biased and unbiased generations.
We present a simple and effective way to mitigate the metric bias without hurting the correlations with human judgments.
arXiv Detail & Related papers (2023-05-24T04:27:40Z) - Non-Invasive Fairness in Learning through the Lens of Data Drift [88.37640805363317]
We show how to improve the fairness of Machine Learning models without altering the data or the learning algorithm.
We use a simple but key insight: the divergence of trends between different populations, and, consecutively, between a learned model and minority populations, is analogous to data drift.
We explore two strategies (model-splitting and reweighing) to resolve this drift, aiming to improve the overall conformance of models to the underlying data.
arXiv Detail & Related papers (2023-03-30T17:30:42Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria - group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - On Fairness and Stability: Is Estimator Variance a Friend or a Foe? [6.751310968561177]
We propose a new family of performance measures based on group-wise parity in variance.
We develop and release an open-source library that reconciles uncertainty quantification techniques with fairness analysis.
arXiv Detail & Related papers (2023-02-09T09:35:36Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Convex Fairness Constrained Model Using Causal Effect Estimators [6.414055487487486]
We devise novel models, called FairCEEs, which remove discrimination while keeping explanatory bias.
We provide an efficient algorithm for solving FairCEEs in regression and binary classification tasks.
arXiv Detail & Related papers (2020-02-16T03:40:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.