Fair Tree Learning
- URL: http://arxiv.org/abs/2110.09295v1
- Date: Mon, 18 Oct 2021 13:40:25 GMT
- Title: Fair Tree Learning
- Authors: António Pereira Barata, Cor J. Veenman
- Abstract summary: Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods only optimise for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
- Score: 0.15229257192293202
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: When dealing with sensitive data in automated data-driven
decision-making, an important concern is to learn predictors that perform well
on the class label while minimising discrimination with respect to a sensitive
attribute, such as gender or race, induced by biased data. Various hybrid
optimisation criteria exist which combine classification performance with a
fairness metric. However, while the threshold-free ROC-AUC is the standard for
measuring traditional classification model performance, current fair decision
tree methods only optimise for a fixed threshold on both the classification
task and the fairness metric. Moreover, current tree learning frameworks
do not allow for fair treatment with respect to multiple categories or multiple
sensitive attributes. Lastly, the end-users of a fair model should be able to
balance fairness and classification performance according to their specific
ethical, legal, and societal needs. In this paper we address these shortcomings
by proposing a threshold-independent fairness metric termed uniform demographic
parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion
AUC for Fairness -- towards fair decision tree learning, which extends to
bagged and boosted frameworks. Compared to the state-of-the-art, our method
provides three main advantages: (1) classifier performance and fairness are
defined continuously instead of relying upon an often arbitrary decision
threshold; (2) it leverages multiple sensitive attributes simultaneously, of
which the values may be multicategorical; and (3) the unavoidable
performance-fairness trade-off is tunable during learning. In our experiments,
we demonstrate how SCAFF attains high predictive performance towards the class
label and low discrimination with respect to binary, multicategorical, and
multiple sensitive attributes, further substantiating our claims.
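To make the criterion concrete, below is a minimal sketch of an AUC-based fair splitting score in the spirit of SCAFF. The function name `scaff_like_score`, the convex combination via `theta`, and the use of scikit-learn's `roc_auc_score` are illustrative assumptions; the paper's exact formulation may differ.

```python
# Minimal sketch (not the authors' implementation) of an AUC-based
# fair splitting score in the spirit of SCAFF.
from sklearn.metrics import roc_auc_score

def scaff_like_score(scores, y, s, theta=0.5):
    """Trade off label AUC against sensitive-attribute AUC.

    scores : array of candidate-split scores, one per sample
    y      : binary class labels (0/1)
    s      : binary sensitive attribute (0/1)
    theta  : trade-off parameter in [0, 1]; larger values favour fairness
    """
    perf = roc_auc_score(y, scores)  # threshold-free classification performance
    # An AUC of 0.5 w.r.t. the sensitive attribute means the scores carry no
    # information about it; fold the AUC so that 0.5 is the best (lowest) value.
    auc_s = roc_auc_score(s, scores)
    sens = max(auc_s, 1.0 - auc_s)
    return (1.0 - theta) * perf - theta * sens
```

For multiple, possibly multicategorical, sensitive attributes, one natural extension is to take the worst (largest) folded AUC across all attribute values before combining it with the performance term.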
Related papers
- Uncertainty-Aware Fairness-Adaptive Classification Trees [0.0]
This paper introduces a new classification tree algorithm using a novel splitting criterion that incorporates fairness adjustments into the tree-building process.
We show that our method effectively reduces discriminatory predictions compared to traditional classification trees, without significant loss in overall accuracy.
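As a rough illustration of what a fairness-adjusted splitting criterion can look like, here is a hedged sketch: a weighted Gini impurity plus a demographic-parity-style penalty on how differently the split routes the sensitive groups. The name `fairness_adjusted_cost` and the penalty form are assumptions; the paper's criterion additionally models uncertainty.

```python
# Hypothetical sketch of a fairness-adjusted split cost; not the
# paper's exact criterion.
import numpy as np

def gini(y):
    """Gini impurity of a binary label vector."""
    if len(y) == 0:
        return 0.0
    p = float(np.mean(y))
    return 2.0 * p * (1.0 - p)

def fairness_adjusted_cost(y_left, y_right, s_left, s_right, lam=0.5):
    n = len(y_left) + len(y_right)
    impurity = (len(y_left) * gini(y_left) + len(y_right) * gini(y_right)) / n
    # Fraction of each sensitive group that the split sends to the left child
    s_all = np.concatenate([s_left, s_right])
    went_left = np.concatenate([np.ones(len(s_left)), np.zeros(len(s_right))])
    rates = [went_left[s_all == v].mean() for v in (0, 1) if np.any(s_all == v)]
    gap = abs(rates[0] - rates[1]) if len(rates) == 2 else 0.0
    return impurity + lam * gap  # lower is better
```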
arXiv Detail & Related papers (2024-10-08T08:42:12Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- Learning Fair Classifiers via Min-Max F-divergence Regularization [13.81078324883519]
We introduce a novel min-max F-divergence regularization framework for learning fair classification models.
We show that F-divergence measures possess convexity and differentiability properties.
We show that the proposed framework achieves state-of-the-art performance with respect to the trade-off between accuracy and fairness.
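For intuition, an f-divergence fairness penalty can be pictured as below: the KL divergence between binned score distributions of the two sensitive groups. This estimator is an illustrative stand-in; the paper's variational min-max formulation is not reproduced here.

```python
# Illustrative stand-in for an f-divergence fairness penalty; `scores` and
# `s` are assumed to be NumPy arrays of model scores and binary group labels.
import numpy as np

def kl_fairness_penalty(scores, s, bins=10, eps=1e-8):
    edges = np.linspace(0.0, 1.0, bins + 1)
    p, _ = np.histogram(scores[s == 0], bins=edges)
    q, _ = np.histogram(scores[s == 1], bins=edges)
    p = (p + eps) / (p + eps).sum()   # smoothed group-0 score distribution
    q = (q + eps) / (q + eps).sum()   # smoothed group-1 score distribution
    return float(np.sum(p * np.log(p / q)))

# Training would then minimise: classification_loss + lam * kl_fairness_penalty
```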
arXiv Detail & Related papers (2023-06-28T20:42:04Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Practical Approaches for Fair Learning with Multitype and Multivariate Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
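For intuition, a kernel dependence measure between model outputs and a sensitive attribute can be sketched as follows. The HSIC statistic below is related to, but simpler than, FairCOCCO's normalised cross-covariance operator criterion.

```python
# Sketch of a kernel dependence measure (HSIC) between model outputs and a
# sensitive attribute; not FairCOCCO's exact criterion.
import numpy as np

def rbf_gram(x, gamma=1.0):
    """RBF Gram matrix for a 1-D sample vector."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-gamma * d2)

def hsic(outputs, sensitive, gamma=1.0):
    n = len(outputs)
    K = rbf_gram(np.asarray(outputs, dtype=float), gamma)
    L = rbf_gram(np.asarray(sensitive, dtype=float), gamma)
    H = np.eye(n) - np.ones((n, n)) / n          # centring matrix
    return float(np.trace(K @ H @ L @ H)) / (n - 1) ** 2
```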
arXiv Detail & Related papers (2022-11-11T11:28:46Z)
- Fairness via Adversarial Attribute Neighbourhood Robust Learning [49.93775302674591]
We propose a principled Robust Adversarial Attribute Neighbourhood (RAAN) loss to debias the classification head.
arXiv Detail & Related papers (2022-10-12T23:39:28Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
Accumulated prediction sensitivity measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
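A toy version of such a sensitivity probe might look like the following; the finite-difference form and the equal weighting over features are assumptions, not the paper's exact definition.

```python
# Toy probe in the spirit of accumulated prediction sensitivity: sum the
# magnitude of prediction changes under small per-feature perturbations.
import numpy as np

def accumulated_sensitivity(predict_proba, x, eps=1e-3):
    """predict_proba: maps a 1-D feature vector to a scalar score."""
    base = predict_proba(x)
    total = 0.0
    for j in range(len(x)):
        x_pert = np.array(x, dtype=float)
        x_pert[j] += eps
        total += abs(predict_proba(x_pert) - base) / eps  # finite difference
    return total
```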
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Fair Classification with Adversarial Perturbations [35.030329189029246]
We study fair classification in the presence of an omniscient adversary that, given an $\eta$, is allowed to choose an arbitrary $\eta$-fraction of the training samples and arbitrarily perturb their protected attributes.
Our main contribution is an optimization framework to learn fair classifiers in this adversarial setting that comes with provable guarantees on accuracy and fairness.
We prove near-tightness of our framework's guarantees for natural hypothesis classes: no algorithm can have significantly better accuracy and any algorithm with better fairness must have lower accuracy.
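The threat model can be sketched as follows; note that the paper's adversary chooses the worst-case subset of samples, whereas this illustration only demonstrates the $\eta$ corruption budget with a random subset.

```python
# Sketch of the corruption budget in the threat model above: flip the
# protected attribute of an eta-fraction of training samples. A random
# subset stands in for the paper's worst-case adversarial choice.
import numpy as np

def corrupt_protected(s, eta, seed=0):
    rng = np.random.default_rng(seed)
    s = np.array(s).copy()
    k = int(eta * len(s))                      # adversary's budget
    idx = rng.choice(len(s), size=k, replace=False)
    s[idx] = 1 - s[idx]                        # flip binary protected attribute
    return s
```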
arXiv Detail & Related papers (2021-06-10T17:56:59Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Addressing Fairness in Classification with a Model-Agnostic Multi-Objective Algorithm [33.145522561104464]
The goal of fairness in classification is to learn a classifier that does not discriminate against groups of individuals based on sensitive attributes, such as race and gender.
One approach to designing fair algorithms is to use relaxations of fairness notions as regularization terms.
We leverage this property to define a differentiable relaxation that approximates fairness notions provably better than existing relaxations.
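A standard example of such a relaxation (not the paper's provably tighter one) replaces hard decisions with soft scores so that the fairness term stays differentiable:

```python
# Common-baseline sketch: a soft demographic-parity gap used as a
# regulariser; `scores` and `s` are assumed to be NumPy arrays.
import numpy as np

def soft_dp_gap(scores, s):
    """Gap in mean predicted score between the two sensitive groups."""
    return abs(float(scores[s == 1].mean()) - float(scores[s == 0].mean()))

def regularised_loss(base_loss, scores, s, lam=1.0):
    # trade off task loss against the relaxed fairness notion
    return base_loss + lam * soft_dp_gap(scores, s)
```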
arXiv Detail & Related papers (2020-09-09T17:40:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.