FACT: A Diagnostic for Group Fairness Trade-offs
- URL: http://arxiv.org/abs/2004.03424v3
- Date: Tue, 7 Jul 2020 17:34:11 GMT
- Title: FACT: A Diagnostic for Group Fairness Trade-offs
- Authors: Joon Sik Kim, Jiahao Chen, Ameet Talwalkar
- Abstract summary: Group fairness is a class of fairness notions that measure how differently groups of individuals are treated according to their protected attributes.
We propose a general diagnostic that enables systematic characterization of these trade-offs in group fairness.
- Score: 23.358566041117083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group fairness notions, which measure how differently groups of
individuals are treated according to their protected attributes, have been
shown to conflict with one another, often at a necessary cost to the
model's predictive performance. We propose a general diagnostic that
enables systematic characterization of these trade-offs in group fairness. We
observe that the majority of group fairness notions can be expressed via the
fairness-confusion tensor, which is the confusion matrix split according to the
protected attribute values. We frame several optimization problems that
directly optimize both accuracy and fairness objectives over the elements of
this tensor, which yield a general perspective for understanding multiple
trade-offs, including group fairness incompatibilities. This framing also
suggests an alternative post-processing method for designing fair classifiers. On synthetic
and real datasets, we demonstrate the use cases of our diagnostic, particularly
on understanding the trade-off landscape between accuracy and fairness.
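
As a rough illustration of the central object, the following is a minimal Python sketch of the fairness-confusion tensor for binary classification with a binary protected attribute, together with overall accuracy and two group fairness notions read directly off its entries. The function names and the NumPy layout are illustrative assumptions, not taken from the paper's code.

    import numpy as np

    def fairness_confusion_tensor(y_true, y_pred, group):
        # T[a, i, j] = number of samples in protected group a with
        # true label i and predicted label j (binary labels assumed).
        groups = np.unique(group)
        T = np.zeros((len(groups), 2, 2), dtype=int)
        for a, g in enumerate(groups):
            mask = group == g
            for i in (0, 1):
                for j in (0, 1):
                    T[a, i, j] = np.sum((y_true[mask] == i) & (y_pred[mask] == j))
        return T

    def accuracy(T):
        # Overall accuracy is a linear function of the tensor entries.
        return (T[:, 0, 0].sum() + T[:, 1, 1].sum()) / T.sum()

    def demographic_parity_gap(T):
        # |P(yhat = 1 | a = 0) - P(yhat = 1 | a = 1)|
        rates = T[:, :, 1].sum(axis=1) / T.sum(axis=(1, 2))
        return abs(rates[0] - rates[1])

    def equalized_odds_gap(T):
        # Worst-case gap between the two groups' TPR and FPR.
        tpr = T[:, 1, 1] / T[:, 1, :].sum(axis=1)
        fpr = T[:, 0, 1] / T[:, 0, :].sum(axis=1)
        return max(abs(tpr[0] - tpr[1]), abs(fpr[0] - fpr[1]))

Because accuracy and many group fairness criteria are simple (often linear) functions of the tensor entries, trade-offs between them can be studied by optimizing directly over those entries, which is the perspective the abstract describes.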
Related papers
- Fair Distillation: Teaching Fairness from Biased Teachers in Medical Imaging [16.599189934420885]
We propose the Fair Distillation (FairDi) method to address fairness concerns in deep learning.
We show that FairDi achieves significant gains in both overall and group-specific accuracy, along with improved fairness, compared to existing methods.
FairDi is adaptable to various medical tasks, such as classification and segmentation, and provides an effective solution for equitable model performance.
arXiv Detail & Related papers (2024-11-18T16:50:34Z) - Fair Without Leveling Down: A New Intersectional Fairness Definition [1.0958014189747356]
We propose a new definition, called $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
arXiv Detail & Related papers (2023-05-21T16:15:12Z) - Fair-CDA: Continuous and Directional Augmentation for Group Fairness [48.84385689186208]
We propose a fine-grained data augmentation strategy for imposing fairness constraints.
We show that group fairness can be achieved by regularizing the models on transition paths of sensitive features between groups.
Our proposed method does not assume any data generative model and ensures good generalization for both accuracy and fairness.
arXiv Detail & Related papers (2023-04-01T11:23:00Z) - DualFair: Fair Representation Learning at Both Group and Individual
Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - FairAdaBN: Mitigating unfairness with adaptive batch normalization and
its application to dermatological disease classification [14.589159162086926]
We propose FairAdaBN, which makes batch normalization adaptive to the sensitive attribute.
We propose a new metric, named Fairness-Accuracy Trade-off Efficiency (FATE), to compute the normalized fairness improvement over the accuracy drop (see the FATE sketch after this list).
Experiments on two dermatological datasets show that our proposed method outperforms other methods on fairness criteria and FATE.
arXiv Detail & Related papers (2023-03-15T02:22:07Z) - Learning Informative Representation for Fairness-aware Multivariate
Time-series Forecasting: A Group-based Perspective [50.093280002375984]
Performance unfairness among variables is widespread in multivariate time series (MTS) forecasting models.
We propose a novel framework, named FairFor, for fairness-aware MTS forecasting.
arXiv Detail & Related papers (2023-01-27T04:54:12Z) - Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features (see the sensitivity sketch after this list).
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z) - Measuring Fairness Under Unawareness of Sensitive Attributes: A
Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Fair Mixup: Fairness via Interpolation [28.508444261249423]
We propose fair mixup, a new data augmentation strategy for imposing the fairness constraint.
We show that fairness can be achieved by regularizing the models on paths of interpolated samples between the groups.
We empirically show that it achieves better generalization for both accuracy and fairness measures on benchmarks.
arXiv Detail & Related papers (2021-03-11T06:57:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.