Metric-Free Individual Fairness with Cooperative Contextual Bandits
- URL: http://arxiv.org/abs/2011.06738v1
- Date: Fri, 13 Nov 2020 03:10:35 GMT
- Title: Metric-Free Individual Fairness with Cooperative Contextual Bandits
- Authors: Qian Hu, Huzefa Rangwala
- Abstract summary: Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free formulation of individual fairness and a cooperative contextual bandits algorithm.
- Score: 17.985752744098267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Data mining algorithms are increasingly used in automated decision
making across all walks of daily life. Unfortunately, as reported in several
studies, these algorithms inject bias from the data and the environment,
leading to inequitable and unfair solutions. To mitigate bias in machine
learning, different formalizations of fairness have been proposed that can be
categorized into group fairness and individual fairness. Group fairness
requires that different groups be treated similarly, which might be unfair to
some individuals within a group. On the other hand, individual fairness
requires that similar individuals be treated similarly. However, individual
fairness remains understudied due to its reliance on problem-specific
similarity metrics. We propose a metric-free formulation of individual
fairness and a cooperative contextual bandits (CCB) algorithm. The CCB
algorithm utilizes fairness as a reward and attempts to maximize it. The
advantage of treating fairness as a reward is that the fairness criterion does
not need to be differentiable. The proposed algorithm is tested on multiple
real-world benchmark datasets. The results show the effectiveness of the
proposed algorithm at mitigating bias and at achieving both individual and
group fairness.
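The abstract's key idea, treating a (possibly non-differentiable) fairness score as a bandit reward to be maximized, can be illustrated with a minimal sketch. The epsilon-greedy arm selection, the per-arm linear reward model, the `fairness_reward` indicator, and the "similar individuals" bucketing below are illustrative assumptions, not the paper's actual CCB procedure.

```python
# Minimal sketch (assumed details, not the paper's CCB algorithm): an
# epsilon-greedy contextual bandit whose reward is a non-differentiable
# individual-fairness score.
import numpy as np

rng = np.random.default_rng(0)

def fairness_reward(decision, decisions_for_similar):
    # Hypothetical fairness signal: 1 if this individual's decision agrees
    # with the majority decision given to similar individuals, else 0.
    # The indicator is non-differentiable, which a bandit can still optimize.
    if not decisions_for_similar:
        return 1.0
    majority = round(sum(decisions_for_similar) / len(decisions_for_similar))
    return 1.0 if decision == majority else 0.0

n_arms, dim, epsilon = 2, 5, 0.1   # arms = possible decisions (e.g. deny/approve)
weights = np.zeros((n_arms, dim))  # one linear reward estimator per arm
history = {}                       # "similar individuals" bucket -> past decisions

for t in range(1000):
    x = rng.normal(size=dim)       # context: an individual's feature vector
    bucket = int(x[0] > 0)         # crude similarity bucketing (assumption)

    # Epsilon-greedy choice between exploring and exploiting estimated reward.
    arm = rng.integers(n_arms) if rng.random() < epsilon else int(np.argmax(weights @ x))

    r = fairness_reward(arm, history.get(bucket, []))
    history.setdefault(bucket, []).append(arm)

    # Online update of the chosen arm's reward estimate toward the observed reward.
    weights[arm] += 0.01 * (r - weights[arm] @ x) * x
```

The point of the sketch is only that the reward enters as a black box: any fairness criterion that can be evaluated per decision can drive the bandit updates, with no gradient of the criterion required.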
Related papers
- Counterpart Fairness -- Addressing Systematic between-group Differences in Fairness Evaluation [17.495053606192375]
When using machine learning to aid decision-making, it is critical to ensure that an algorithmic decision is fair and does not discriminate against specific individuals/groups.
Existing group fairness methods aim to ensure equal outcomes across groups delineated by protected variables like race or gender.
In cases where systematic differences between groups play a significant role in outcomes, these methods may overlook the influence of non-protected variables.
arXiv Detail & Related papers (2023-05-29T15:41:12Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Within-group fairness: A guidance for more sound between-group fairness [1.675857332621569]
We introduce a new concept of fairness, called within-group fairness.
We develop learning algorithms to control within-group fairness and between-group fairness simultaneously.
Numerical studies show that the proposed learning algorithms improve within-group fairness without sacrificing either accuracy or between-group fairness.
arXiv Detail & Related papers (2023-01-20T00:39:19Z)
- How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) strategies are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Minimax Group Fairness: Algorithms and Experiments [18.561824632836405]
We provide provably convergent oracle-efficient learning algorithms for minimax group fairness.
Our algorithms apply to both regression and classification settings.
We show empirical cases in which minimax fairness is strictly and strongly preferable to equal outcome notions.
arXiv Detail & Related papers (2020-11-05T21:42:56Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind.
It considers the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- Beyond Individual and Group Fairness [90.4666341812857]
We present a new data-driven model of fairness that is guided by the unfairness complaints received by the system.
Our model supports multiple fairness criteria and takes into account their potential incompatibilities.
arXiv Detail & Related papers (2020-08-21T14:14:44Z)
- Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with a $p$-norm objective and individual fairness constraints with a provable approximation guarantee (see the sketch below).
arXiv Detail & Related papers (2020-06-22T20:02:09Z)
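The clustering entry above rests on two technical ingredients: a $p$-norm clustering objective over probabilistic assignments and an individual-fairness condition on those assignments. The short sketch below illustrates these notions only, under assumed definitions (Euclidean distances and a total-variation Lipschitz check); it is not the paper's algorithm and carries no approximation guarantee.

```python
# Sketch of the notions only (assumed definitions, not the paper's algorithm):
# each individual gets a probability distribution over cluster centers, the
# clustering is scored by a p-norm objective over expected distances, and
# individual fairness asks that nearby individuals get similar distributions.
import numpy as np

def pnorm_objective(X, centers, P, p=2):
    """(sum_i E[dist(x_i, center)]^p)^(1/p) under soft assignments P (n x k)."""
    dists = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)  # n x k
    expected = (P * dists).sum(axis=1)  # per-individual expected distance
    return float((expected ** p).sum() ** (1.0 / p))

def individually_fair(P, X, tol=1e-9):
    """Lipschitz-style check: the total-variation distance between two individuals'
    assignment distributions should not exceed their distance in the metric space."""
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            tv = 0.5 * float(np.abs(P[i] - P[j]).sum())
            if tv > np.linalg.norm(X[i] - X[j]) + tol:
                return False
    return True

# Tiny made-up example: the two nearby individuals get similar distributions.
X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
centers = np.array([[0.0, 0.0], [5.0, 5.0]])
P = np.array([[0.90, 0.10], [0.85, 0.15], [0.05, 0.95]])
print(pnorm_objective(X, centers, P), individually_fair(P, X))
```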