Within-group fairness: A guidance for more sound between-group fairness
- URL: http://arxiv.org/abs/2301.08375v1
- Date: Fri, 20 Jan 2023 00:39:19 GMT
- Title: Within-group fairness: A guidance for more sound between-group fairness
- Authors: Sara Kim, Kyusang Yu, Yongdai Kim
- Abstract summary: We introduce a new fairness concept called within-group fairness.
We develop learning algorithms that control within-group fairness and between-group fairness simultaneously.
Numerical studies show that the proposed learning algorithms improve within-group fairness without sacrificing either accuracy or between-group fairness.
- Score: 1.675857332621569
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Because they have a vital effect on social decision-making, AI algorithms
should not only be accurate but also should not treat certain sensitive groups
(e.g., non-white people, women) unfairly. Various AI algorithms have been specially
designed to ensure that trained models are fair across sensitive groups. In this
paper, we raise a new issue: between-group fair AI models can still treat
individuals within the same sensitive group unfairly. We introduce a new fairness
concept, called within-group fairness, which requires that AI models be fair to
individuals within the same sensitive group as well as across different sensitive
groups. We materialize the concept of within-group fairness by proposing
corresponding mathematical definitions and developing learning algorithms to
control within-group fairness and between-group fairness simultaneously.
Numerical studies show that the proposed learning algorithms improve within-group
fairness without sacrificing either accuracy or between-group fairness.
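The abstract states the two fairness concepts but not the paper's mathematical definitions, so the following is only a minimal sketch of the intuition, not the authors' method: it scores between-group fairness with a demographic-parity gap and uses within-group rank preservation (how well a parity-adjusted model keeps the ordering of an unconstrained model inside each sensitive group) as a stand-in for within-group fairness. The metric choices, toy data, and threshold are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's definitions): between-group
# fairness measured as a demographic-parity gap, within-group fairness as
# rank preservation inside each sensitive group.
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Between-group unfairness: spread of positive-prediction rates across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def within_group_concordance(score_ref, score_fair, group):
    """Within-group fairness proxy: fraction of within-group pairs whose ordering
    under an unconstrained reference model is preserved by the group-fair model."""
    total = preserved = 0
    for g in np.unique(group):
        r = score_ref[group == g]
        f = score_fair[group == g]
        for i in range(len(r)):
            for j in range(i + 1, len(r)):
                if r[i] == r[j]:
                    continue  # ties carry no ordering information
                total += 1
                preserved += int((r[i] - r[j]) * (f[i] - f[j]) > 0)
    return preserved / total if total else 1.0

# Toy data: the "fair" model equalizes positive rates between the two groups
# (between-group fair) but swaps the ranking of two members of group 0.
group      = np.array([0, 0, 0, 1, 1, 1])
score_ref  = np.array([0.9, 0.6, 0.2, 0.8, 0.5, 0.3])  # unconstrained scores
score_fair = np.array([0.4, 0.7, 0.2, 0.8, 0.3, 0.2])  # parity-adjusted scores
y_fair = (score_fair > 0.45).astype(int)

print("demographic parity gap:", demographic_parity_gap(y_fair, group))                      # 0.0
print("within-group concordance:", within_group_concordance(score_ref, score_fair, group))   # ~0.83
```

On data like this, a purely between-group constraint can drive the parity gap to zero while lowering within-group concordance, which is the tension the paper's algorithms aim to control.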
Related papers
- Bridging the Fairness Divide: Achieving Group and Individual Fairness in Graph Neural Networks [9.806215623623684]
We propose a new concept of individual fairness within groups and a novel framework named Fairness for Group and Individual (FairGI).
Our approach not only outperforms other state-of-the-art models in terms of group fairness and individual fairness within groups, but also exhibits excellent performance in population-level individual fairness.
arXiv Detail & Related papers (2024-04-26T16:26:11Z)
- A Canonical Data Transformation for Achieving Inter- and Within-group Fairness [17.820200610132265]
We introduce a formal definition of within-group fairness that maintains fairness among individuals from within the same group.
We propose a pre-processing framework to meet both inter- and within-group fairness criteria with little compromise in accuracy.
We apply this framework to the COMPAS risk assessment and Law School datasets and compare its performance to two regularization-based methods.
arXiv Detail & Related papers (2023-10-23T17:00:20Z)
- DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z)
- Towards A Holistic View of Bias in Machine Learning: Bridging Algorithmic Fairness and Imbalanced Learning [8.602734307457387]
A key element in achieving algorithmic fairness with respect to protected groups is the simultaneous reduction of class and protected group imbalance in the underlying training data.
We propose a novel oversampling algorithm, Fair Oversampling, that addresses both skewed class distributions and protected features.
arXiv Detail & Related papers (2022-07-13T09:48:52Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Measuring Fairness Under Unawareness of Sensitive Attributes: A Quantification-Based Approach [131.20444904674494]
We tackle the problem of measuring group fairness under unawareness of sensitive attributes.
We show that quantification approaches are particularly suited to tackle the fairness-under-unawareness problem.
arXiv Detail & Related papers (2021-09-17T13:45:46Z)
- Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Metric-Free Individual Fairness with Cooperative Contextual Bandits [17.985752744098267]
Group fairness requires that different groups be treated similarly, which might be unfair to some individuals within a group.
Individual fairness remains understudied due to its reliance on problem-specific similarity metrics.
We propose a metric-free individual fairness and a cooperative contextual bandits algorithm.
arXiv Detail & Related papers (2020-11-13T03:10:35Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.