Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes
- URL: http://arxiv.org/abs/2211.06138v1
- Date: Fri, 11 Nov 2022 11:28:46 GMT
- Title: Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes
- Authors: Tennison Liu, Alex J. Chan, Boris van Breugel, Mihaela van der Schaar
- Abstract summary: It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
- Score: 70.6326967720747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: It is important to guarantee that machine learning algorithms deployed in the
real world do not result in unfairness or unintended social consequences. Fair
ML has largely focused on the protection of single attributes in the simpler
setting where both attributes and target outcomes are binary. However, the
practical application to many real-world problems entails the simultaneous
protection of multiple sensitive attributes, which are often not simply binary,
but continuous or categorical. To address this more challenging task, we
introduce FairCOCCO, a fairness measure built on cross-covariance operators on
reproducing kernel Hilbert spaces. This leads to two practical tools: first,
the FairCOCCO Score, a normalised metric that can quantify fairness in settings
with single or multiple sensitive attributes of arbitrary type; and second, a
subsequent regularisation term that can be incorporated into arbitrary learning
objectives to obtain fair predictors. These contributions address crucial gaps
in the algorithmic fairness literature, and we empirically demonstrate
consistent improvements against state-of-the-art techniques in balancing
predictive power and fairness on real-world datasets.
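As a rough, self-contained illustration of the idea (not the paper's exact estimator, which is built on the normalised cross-covariance operator), the sketch below scores dependence between model predictions and sensitive attributes of arbitrary type using a normalised kernel cross-covariance (HSIC-style) statistic; the Gaussian kernel and fixed bandwidth are assumptions:

```python
# Minimal sketch of a kernel-dependence fairness score in the spirit of
# FairCOCCO: dependence between predictions and (possibly multivariate,
# continuous) sensitive attributes, normalised to [0, 1]. This is a plain
# normalised-HSIC estimator, NOT the paper's exact estimator.
import numpy as np

def gaussian_kernel(x: np.ndarray, bandwidth: float = 1.0) -> np.ndarray:
    """Gram matrix of an RBF kernel; x has shape (n, d)."""
    sq = np.sum(x**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * bandwidth**2))

def centre(k: np.ndarray) -> np.ndarray:
    n = k.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return h @ k @ h

def fairness_score(y_pred: np.ndarray, s: np.ndarray) -> float:
    """Normalised kernel dependence between predictions and sensitive
    attribute(s): 0 = independent, 1 = maximal dependence."""
    kc = centre(gaussian_kernel(y_pred.reshape(len(y_pred), -1)))
    lc = centre(gaussian_kernel(s.reshape(len(s), -1)))
    hsic = np.sum(kc * lc)                        # tr(Kc @ Lc), both symmetric
    norm = np.sqrt(np.sum(kc * kc) * np.sum(lc * lc))
    return float(hsic / norm) if norm > 0 else 0.0

rng = np.random.default_rng(0)
s = rng.normal(size=(200, 2))                     # two continuous attributes
unfair = s[:, 0] + 0.1 * rng.normal(size=200)     # predictions leak s
fair = rng.normal(size=200)                       # predictions ignore s
print(fairness_score(unfair, s), fairness_score(fair, s))  # high vs near zero
```

A differentiable estimate of the same statistic could, in principle, be added to a training loss as the kind of regularisation term the abstract describes.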
Related papers
- Probably Approximately Precision and Recall Learning [62.912015491907994]
Precision and Recall are foundational metrics in machine learning.
One-sided feedback--where only positive examples are observed during training--is inherent in many practical problems.
We introduce a PAC learning framework where each hypothesis is represented by a graph, with edges indicating positive interactions.
arXiv Detail & Related papers (2024-11-20T04:21:07Z)
- Flexible Fairness-Aware Learning via Inverse Conditional Permutation [0.0]
We introduce an in-processing fairness-aware learning approach, FairICP, which integrates adversarial learning with a novel inverse conditional permutation scheme.
We show that FairICP offers a theoretically justified, flexible, and efficient scheme to promote equalized odds under fairness conditions described by complex and multidimensional sensitive attributes.
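As a toy illustration of the conditional-permutation idea behind equalized odds (predictions independent of the sensitive attribute given the label), the sketch below shuffles the sensitive attribute within each label stratum to simulate the independence null; FairICP's inverse conditional permutation generalises this to complex, multidimensional attributes, and the discrete-label simplification here is an assumption:

```python
# Toy conditional permutation for equalized-odds testing: under equalized
# odds, S is independent of the prediction given Y, so shuffling S *within
# each label stratum* should not change any dependence statistic between
# predictions and S. (Simple discrete-Y case only; not FairICP itself.)
import numpy as np

def conditional_permutation(s: np.ndarray, y: np.ndarray,
                            rng: np.random.Generator) -> np.ndarray:
    """Permute sensitive attribute s independently within each class of y."""
    s_perm = s.copy()
    for label in np.unique(y):
        idx = np.where(y == label)[0]
        s_perm[idx] = s[rng.permutation(idx)]
    return s_perm

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=1000)
s = y + rng.normal(scale=0.5, size=1000)          # s correlated with y
pred = 0.7 * y + 0.3 * s                          # pred leaks s beyond y
null_corrs = [np.corrcoef(pred, conditional_permutation(s, y, rng))[0, 1]
              for _ in range(200)]
observed = np.corrcoef(pred, s)[0, 1]
# Observed dependence falls outside the permutation null: equalized odds violated.
print(observed, np.quantile(null_corrs, [0.025, 0.975]))
```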
arXiv Detail & Related papers (2024-04-08T16:57:44Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in matching settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
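As a minimal sketch of the LP framing (not the paper's axioms or exact program), the example below maximises expected utility over allocation probabilities for a single position, subject to a Lipschitz-style individual-fairness constraint tying similar merits to similar treatment; the merits, utilities, and constant are invented for illustration:

```python
# Minimal LP sketch of a fair utility-maximising distribution over
# allocations: one position, n candidates, decision variable p_i =
# probability candidate i gets the position. Individual fairness is
# approximated with |p_i - p_j| <= L * |m_i - m_j| on merits m.
# Numbers and the constraint form are illustrative assumptions.
import numpy as np
from scipy.optimize import linprog

utility = np.array([0.9, 0.8, 0.5])   # platform utility per candidate
merit = np.array([0.85, 0.80, 0.40])  # (uncertain) merit estimates
L = 2.0                               # fairness Lipschitz constant

n = len(utility)
rows, rhs = [], []
for i in range(n):
    for j in range(n):
        if i != j:
            row = np.zeros(n)
            row[i], row[j] = 1.0, -1.0        # p_i - p_j <= L * |m_i - m_j|
            rows.append(row)
            rhs.append(L * abs(merit[i] - merit[j]))

res = linprog(c=-utility,                      # maximise expected utility
              A_ub=np.array(rows), b_ub=np.array(rhs),
              A_eq=np.ones((1, n)), b_eq=[1.0],   # p is a distribution
              bounds=[(0.0, 1.0)] * n)
print(res.x)   # fair utility-maximising allocation probabilities
```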
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Normalise for Fairness: A Simple Normalisation Technique for Fairness in Regression Machine Learning Problems [46.93320580613236]
We present a simple, yet effective method based on normalisation (FaiReg) for regression problems.
We compare it with two standard methods for fairness, namely data balancing and adversarial training.
The results show that FaiReg diminishes the effects of unfairness better than data balancing.
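A plausible minimal rendering of normalisation for fair regression (the exact FaiReg scheme may differ) is to standardise the label within each demographic group before fitting, so the regressor cannot simply reproduce group-level label shifts:

```python
# Rough sketch of per-group target normalisation for fair regression
# (in the spirit of FaiReg; the paper's exact scheme may differ).
import numpy as np

def normalise_targets_by_group(y: np.ndarray, group: np.ndarray) -> np.ndarray:
    """Standardise the regression target within each demographic group."""
    y_norm = np.empty_like(y, dtype=float)
    for g in np.unique(group):
        idx = group == g
        std = y[idx].std()
        y_norm[idx] = (y[idx] - y[idx].mean()) / (std if std > 0 else 1.0)
    return y_norm

rng = np.random.default_rng(2)
group = rng.integers(0, 2, size=500)
x = rng.normal(size=500)
y = x + 2.0 * group + rng.normal(scale=0.1, size=500)  # biased label shift
y_fair = normalise_targets_by_group(y, group)
# After normalisation the group-level mean difference in the target is gone:
print(y_fair[group == 0].mean(), y_fair[group == 1].mean())
```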
arXiv Detail & Related papers (2022-02-02T12:26:25Z)
- Fair Tree Learning [0.15229257192293202]
Various optimisation criteria combine classification performance with a fairness metric.
Current fair decision tree methods optimise only for a fixed threshold on both the classification task and the fairness metric.
We propose a threshold-independent fairness metric termed uniform demographic parity, and a derived splitting criterion entitled SCAFF -- Splitting Criterion AUC for Fairness.
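As a toy rendering of a threshold-independent, AUC-based splitting criterion (the exact SCAFF definition is in the paper), the sketch below treats membership of the right child as a score, rewards separation of the class label, and penalises separation of the sensitive attribute; the trade-off weight and the exact combination are assumptions:

```python
# Toy AUC-based splitting criterion in the spirit of SCAFF: treat "goes
# right at the split" as a score, reward class-label separation, penalise
# sensitive-attribute separation. The weight `theta` and the combination
# are illustrative assumptions, not the paper's definition.
import numpy as np
from sklearn.metrics import roc_auc_score

def scaff_like_score(goes_right: np.ndarray, y: np.ndarray,
                     s: np.ndarray, theta: float = 0.5) -> float:
    perf = roc_auc_score(y, goes_right)
    perf = max(perf, 1.0 - perf)        # orientation-free performance AUC
    fair = roc_auc_score(s, goes_right)
    fair = max(fair, 1.0 - fair)        # 0.5 = parity, 1.0 = full leakage
    return (1.0 - theta) * perf - theta * fair

rng = np.random.default_rng(3)
y = rng.integers(0, 2, size=400)
s = rng.integers(0, 2, size=400)
split_good = (y + rng.normal(scale=0.4, size=400) > 0.5).astype(int)
split_leaky = (s + rng.normal(scale=0.4, size=400) > 0.5).astype(int)
print(scaff_like_score(split_good, y, s), scaff_like_score(split_leaky, y, s))
```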
arXiv Detail & Related papers (2021-10-18T13:40:25Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning are fairer in their decisions with respect to a protected class.
We also explore the interaction between algorithmic fairness methods such as gradient reversal (GRAD) and acquisition functions such as BALD.
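For context, BALD is the standard mutual-information acquisition score for uncertainty-based active learning; a minimal estimate from Monte Carlo samples of class probabilities (e.g. MC dropout, assumed here as the sampling mechanism) looks like this:

```python
# BALD acquisition score for uncertainty-based active learning: mutual
# information between the predicted label and the model parameters,
# estimated from Monte Carlo samples of class probabilities. High scores
# mark points on which the model's posterior samples confidently disagree.
import numpy as np

def bald_score(probs: np.ndarray) -> np.ndarray:
    """probs: (n_mc_samples, n_points, n_classes) class probabilities."""
    eps = 1e-12
    mean = probs.mean(axis=0)                               # (n, c)
    h_mean = -np.sum(mean * np.log(mean + eps), axis=1)     # predictive entropy
    h_each = -np.sum(probs * np.log(probs + eps), axis=2)   # (mc, n)
    return h_mean - h_each.mean(axis=0)                     # mutual information

rng = np.random.default_rng(4)
confident = np.tile([[0.95, 0.05]], (20, 1, 1))             # all samples agree
disagreeing = rng.dirichlet([0.3, 0.3], size=(20, 1))       # samples disagree
print(bald_score(confident), bald_score(disagreeing))       # low vs high
```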
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Fair Meta-Learning For Few-Shot Classification [7.672769260569742]
A machine learning algorithm trained on biased data tends to make unfair predictions.
We propose a novel fair fast-adapted few-shot meta-learning approach that efficiently mitigates biases during meta-training.
We empirically demonstrate that our proposed approach efficiently mitigates biases on model output and generalizes both accuracy and fairness to unseen tasks.
arXiv Detail & Related papers (2020-09-23T22:33:47Z)
- Causal Feature Selection for Algorithmic Fairness [61.767399505764736]
We consider fairness in the integration component of data management.
We propose an approach to identify a sub-collection of features that ensure the fairness of the dataset.
arXiv Detail & Related papers (2020-06-10T20:20:10Z)