Two Simple Ways to Learn Individual Fairness Metrics from Data
- URL: http://arxiv.org/abs/2006.11439v1
- Date: Fri, 19 Jun 2020 23:47:15 GMT
- Title: Two Simple Ways to Learn Individual Fairness Metrics from Data
- Authors: Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
- Abstract summary: Individual fairness is an intuitive definition of algorithmic fairness that addresses some of the drawbacks of group fairness.
The lack of a widely accepted fair metric for many ML tasks is the main barrier to broader adoption of individual fairness.
We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases.
- Score: 47.6390279192406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Individual fairness is an intuitive definition of algorithmic fairness that
addresses some of the drawbacks of group fairness. Despite its benefits, it
depends on a task specific fair metric that encodes our intuition of what is
fair and unfair for the ML task at hand, and the lack of a widely accepted fair
metric for many ML tasks is the main barrier to broader adoption of individual
fairness. In this paper, we present two simple ways to learn fair metrics from
a variety of data types. We show empirically that fair training with the
learned metrics leads to improved fairness on three machine learning tasks
susceptible to gender and racial biases. We also provide theoretical guarantees
on the statistical performance of both approaches.
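For a concrete sense of what "learning a fair metric" can mean here, below is a minimal sketch of one common recipe: fit a Mahalanobis-form metric d(a, b)^2 = (a - b)^T Sigma (a - b) from human judgments of which pairs of individuals are comparable, using a logistic link between the squared distance and the comparability label. This mirrors the general idea described in the abstract but is not the authors' exact algorithm; the function names, the low-rank parameterization of Sigma, and the plain gradient-descent loop are illustrative assumptions.

```python
import numpy as np

def learn_fair_metric(x1, x2, comparable, rank=5, lr=0.05, epochs=500, seed=0):
    """Learn a Mahalanobis fair metric d(a, b)^2 = (a - b)^T Sigma (a - b),
    with Sigma = L L^T, from pairs (x1[i], x2[i]) labeled comparable[i] in {0, 1}.

    A logistic link maps squared distance to P(comparable): pairs judged
    comparable are pushed close together under the metric, other pairs apart.
    (Illustrative sketch only, not the paper's exact procedure.)
    """
    rng = np.random.default_rng(seed)
    n_features = x1.shape[1]
    L = 0.1 * rng.standard_normal((n_features, rank))  # low-rank factor of Sigma
    bias = 0.0                                          # distance threshold (intercept)
    diffs = x1 - x2                                     # pair differences, shape (n, d)
    y = np.asarray(comparable, dtype=float)

    for _ in range(epochs):
        proj = diffs @ L                         # (n, rank) projected differences
        sq_dist = np.sum(proj ** 2, axis=1)      # squared fair distances
        logits = bias - sq_dist                  # small distance -> likely comparable
        p = 1.0 / (1.0 + np.exp(-logits))        # predicted P(comparable)
        err = p - y                              # gradient of logistic loss w.r.t. logits
        grad_L = -2.0 * diffs.T @ (err[:, None] * proj) / len(y)
        bias -= lr * err.mean()
        L -= lr * grad_L
    return L @ L.T                               # PSD fair metric matrix Sigma

def fair_distance(Sigma, a, b):
    """Squared distance between two individuals under the learned metric."""
    diff = np.asarray(a) - np.asarray(b)
    return float(diff @ Sigma @ diff)
```

With a learned Sigma in hand, a downstream fair-training procedure can require that inputs with small fair_distance receive similar predictions, which is the sense in which "fair training with the learned metrics" is used above.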
Related papers
- Intrinsic Fairness-Accuracy Tradeoffs under Equalized Odds [8.471466670802817]
We study the tradeoff between fairness and accuracy under the statistical notion of equalized odds.
We present a new upper bound on the accuracy as a function of the fairness budget.
Our results show that achieving high accuracy subject to a low bias can be fundamentally limited by the statistical disparity across groups.
arXiv Detail & Related papers (2024-05-12T23:15:21Z) - Causal Context Connects Counterfactual Fairness to Robust Prediction and
Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - Fair Few-shot Learning with Auxiliary Sets [53.30014767684218]
In many machine learning (ML) tasks, only very few labeled data samples can be collected, which can lead to inferior fairness performance.
In this paper, we define the fairness-aware learning task with limited training samples as the fair few-shot learning problem.
We devise a novel framework that accumulates fairness-aware knowledge across different meta-training tasks and then generalizes the learned knowledge to meta-test tasks.
arXiv Detail & Related papers (2023-08-28T06:31:37Z) - Towards Better Fairness-Utility Trade-off: A Comprehensive
Measurement-Based Reinforcement Learning Framework [7.8940121707748245]
Ensuring the fairness of machine learning while maintaining its utility is a challenging but crucial issue.
We propose CFU (Comprehensive Fairness-Utility), a reinforcement learning-based framework, to efficiently improve the fairness-utility trade-off.
CFU outperforms all state-of-the-art techniques, achieving a 37.5% improvement on average.
arXiv Detail & Related papers (2023-07-21T06:34:41Z) - Practical Approaches for Fair Learning with Multitype and Multivariate
Sensitive Attributes [70.6326967720747]
It is important to guarantee that machine learning algorithms deployed in the real world do not result in unfairness or unintended social consequences.
We introduce FairCOCCO, a fairness measure built on cross-covariance operators on reproducing kernel Hilbert spaces.
We empirically demonstrate consistent improvements against state-of-the-art techniques in balancing predictive power and fairness on real-world datasets.
arXiv Detail & Related papers (2022-11-11T11:28:46Z) - Identifying, measuring, and mitigating individual unfairness for
supervised learning models and application to credit risk models [3.818578543491318]
We focus on identifying and mitigating individual unfairness in AI solutions.
We also investigate the extent to which techniques for achieving individual fairness are effective at achieving group fairness.
Some experimental results corresponding to the individual unfairness mitigation techniques are presented.
arXiv Detail & Related papers (2022-11-11T10:20:46Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - A Systematic Approach to Group Fairness in Automated Decision Making [0.0]
The goal of this paper is to provide data scientists with an accessible introduction to group fairness metrics.
We do this by considering in which sense socio-demographic groups are compared in order to make a statement about fairness.
arXiv Detail & Related papers (2021-09-09T12:47:15Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently (a simplified stand-in for such a regularizer is sketched after this list).
arXiv Detail & Related papers (2020-06-25T04:31:57Z)
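Several of the entries above, like the main paper, turn a fair metric into a training-time penalty. The sketch below is a deliberately simplified, Laplacian-style consistency regularizer: it weights squared prediction gaps between batch elements by how close the elements are under a fair metric Sigma, such as one produced by the earlier sketch. This is a hedged illustration only, not SenSeI's transport-based formulation; the function name, the Gaussian weighting, and the temperature tau are assumptions.

```python
import numpy as np

def fair_consistency_penalty(preds, X, Sigma, tau=1.0):
    """Laplacian-style individual-fairness penalty for a batch.

    Weights the squared gap between every pair of predictions by a Gaussian
    kernel of the pair's squared fair distance under Sigma, so inputs the
    fair metric treats as comparable are pushed toward equal outputs.
    (Illustrative simplification, not the regularizer from any paper above.)
    """
    diffs = X[:, None, :] - X[None, :, :]                   # (n, n, d) pairwise input differences
    sq_fair_dist = np.einsum('ijd,de,ije->ij', diffs, Sigma, diffs)
    weights = np.exp(-sq_fair_dist / tau)                   # large when a pair is fair-metric-close
    preds = np.asarray(preds, dtype=float)
    pred_gaps = (preds[:, None] - preds[None, :]) ** 2      # (n, n) squared prediction gaps
    return float(np.mean(weights * pred_gaps))
```

In training, one would add lam * fair_consistency_penalty(model(X), X, Sigma) to the task loss for some weight lam; pairs the fair metric treats as comparable then pull the model toward equal outputs.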
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.