Characterizing Intersectional Group Fairness with Worst-Case Comparisons
- URL: http://arxiv.org/abs/2101.01673v3
- Date: Mon, 1 Feb 2021 03:10:01 GMT
- Title: Characterizing Intersectional Group Fairness with Worst-Case Comparisons
- Authors: Avijit Ghosh, Lea Genuit, Mary Reagan
- Abstract summary: We discuss why fairness metrics need to be looked at under the lens of intersectionality.
We suggest a simple worst case comparison method to expand the definitions of existing group fairness metrics.
We conclude with the social, legal and political framework to handle intersectional fairness in the modern context.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine Learning or Artificial Intelligence algorithms have gained
considerable scrutiny in recent times owing to their propensity towards
imitating and amplifying existing prejudices in society. This has led to a
niche but growing body of work that identifies and attempts to fix these
biases. A first step towards making these algorithms more fair is designing
metrics that measure unfairness. Most existing work in this field deals with
either a binary view of fairness (protected vs. unprotected groups) or
politically defined categories (race or gender). Such categorization misses the
important nuance of intersectionality - biases can often be amplified in
subgroups that combine membership from different categories, especially if such
a subgroup is particularly underrepresented in historical platforms of
opportunity.
In this paper, we discuss why fairness metrics need to be looked at under the
lens of intersectionality, identify existing work in intersectional fairness,
suggest a simple worst case comparison method to expand the definitions of
existing group fairness metrics to incorporate intersectionality, and finally
conclude with the social, legal and political framework to handle
intersectional fairness in the modern context.
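To make the worst-case comparison idea concrete, the sketch below extends a standard group fairness metric (demographic parity, i.e. the positive-prediction rate) to intersectional subgroups: it computes the metric over every combination of sensitive-attribute values and reports the worst-case ratio between subgroups. This is a minimal illustration under assumed conventions; the function names, the choice of demographic parity as the base metric, and the min/max ratio formulation are illustrative rather than taken verbatim from the paper.

```python
# Illustrative sketch: worst-case comparison of a group fairness metric
# (demographic parity) across all intersectional subgroups.
from itertools import product
import numpy as np

def subgroup_rates(y_pred, attrs):
    """Positive-prediction rate for every non-empty intersectional subgroup.

    y_pred : 1-D array of binary predictions
    attrs  : dict mapping sensitive-attribute name -> 1-D array of values
    """
    y_pred = np.asarray(y_pred)
    names = list(attrs)
    rates = {}
    # Enumerate every combination of attribute values (the intersectional subgroups).
    for combo in product(*(np.unique(attrs[n]) for n in names)):
        mask = np.ones(len(y_pred), dtype=bool)
        for name, value in zip(names, combo):
            mask &= (np.asarray(attrs[name]) == value)
        if mask.any():  # skip subgroups with no members
            rates[combo] = y_pred[mask].mean()
    return rates

def worst_case_parity_ratio(y_pred, attrs):
    """Worst-case (min/max) demographic-parity ratio across all subgroups.

    1.0 means every intersectional subgroup receives positive predictions
    at the same rate; values near 0 indicate large disparity.
    """
    rates = list(subgroup_rates(y_pred, attrs).values())
    return min(rates) / max(rates) if max(rates) > 0 else 1.0

# Example with two sensitive attributes (gender x race).
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
attrs = {
    "gender": np.array(["F", "F", "M", "M", "F", "M", "F", "M"]),
    "race":   np.array(["A", "B", "A", "B", "A", "A", "B", "B"]),
}
print(worst_case_parity_ratio(y_pred, attrs))
```

The same pattern applies to other group fairness metrics (equalized odds, predictive parity): compute the base metric per intersectional subgroup, then report the worst-case gap or ratio instead of a single binary comparison.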
Related papers
- Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness [15.83823345486604]
We motivate counterfactual fairness by showing that there is not a fundamental trade-off between fairness and accuracy.
Counterfactual fairness can sometimes be tested by measuring relatively simple group fairness metrics.
arXiv Detail & Related papers (2023-10-30T16:07:57Z) - Fair Without Leveling Down: A New Intersectional Fairness Definition [1.0958014189747356]
We propose a new definition called the $\alpha$-Intersectional Fairness, which combines the absolute and the relative performance across sensitive groups.
We benchmark multiple popular in-processing fair machine learning approaches using our new fairness definition and show that they do not achieve any improvement over a simple baseline.
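For intuition only: the entry above says the definition combines absolute and relative performance across sensitive groups, so the sketch below reports both the worst absolute subgroup accuracy and the worst ratio relative to the best-performing subgroup. It is a hedged illustration of that combination, not the paper's formal $\alpha$-Intersectional Fairness definition, and the names are hypothetical.

```python
# Hedged illustration: combine absolute and relative worst-case performance
# across intersectional subgroups (NOT the paper's formal alpha definition).
import numpy as np

def worst_case_performance(y_true, y_pred, groups):
    """Worst absolute subgroup accuracy and worst ratio to the best subgroup.

    groups: 1-D array with one intersectional subgroup label per example.
    """
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    accs = [(y_pred[groups == g] == y_true[groups == g]).mean()
            for g in np.unique(groups)]
    # Absolute worst case, and relative worst case (ratio to the best subgroup).
    return min(accs), min(accs) / max(accs) if max(accs) > 0 else 1.0

# Hypothetical subgroup labels of the form "gender-race".
y_true = np.array([1, 0, 1, 1, 0, 1])
y_pred = np.array([1, 0, 0, 1, 1, 1])
groups = np.array(["F-A", "F-A", "F-B", "M-A", "M-B", "M-B"])
print(worst_case_performance(y_true, y_pred, groups))
```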
arXiv Detail & Related papers (2023-05-21T16:15:12Z) - Factoring the Matrix of Domination: A Critical Review and Reimagination of Intersectionality in AI Fairness [55.037030060643126]
Intersectionality is a critical framework that allows us to examine how social inequalities persist.
We argue that adopting intersectionality as an analytical framework is pivotal to effectively operationalizing fairness.
arXiv Detail & Related papers (2023-03-16T21:02:09Z) - DualFair: Fair Representation Learning at Both Group and Individual Levels via Contrastive Self-supervision [73.80009454050858]
This work presents a self-supervised model, called DualFair, that can debias sensitive attributes like gender and race from learned representations.
Our model jointly optimizes for two fairness criteria: group fairness and counterfactual fairness.
arXiv Detail & Related papers (2023-03-15T07:13:54Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Social Norm Bias: Residual Harms of Fairness-Aware Algorithms [21.50551404445654]
Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
arXiv Detail & Related papers (2021-08-25T05:54:56Z) - Fairness for Image Generation with Uncertain Sensitive Attributes [97.81354305427871]
This work tackles the issue of fairness in the context of generative procedures, such as image super-resolution.
While traditional group fairness definitions are typically defined with respect to specified protected groups, we emphasize that there are no ground truth identities.
We show that the natural extension of demographic parity is strongly dependent on the grouping, and impossible to achieve obliviously.
arXiv Detail & Related papers (2021-06-23T06:17:17Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification [0.48733623015338234]
One-vs.-One Mitigation applies a process of comparison between each pair of subgroups related to sensitive attributes to fairness-aware machine learning for binary classification.
Our method mitigates the intersectional bias much better than conventional methods in all the settings.
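A rough sketch of the one-vs.-one comparison step described above: evaluate a fairness metric (here, the gap in positive-prediction rate) between every pair of intersectional subgroups, rather than a single protected vs. unprotected split. Only this measurement step is illustrated; the paper's actual mitigation procedure is not reproduced, and the helper names are assumptions.

```python
# Illustrative sketch: pairwise (one-vs.-one) fairness gaps between all
# intersectional subgroups, rather than a single binary comparison.
from itertools import combinations
import numpy as np

def pairwise_parity_gaps(y_pred, groups):
    """Absolute difference in positive-prediction rate for every pair of
    subgroup labels in `groups`."""
    y_pred, groups = np.asarray(y_pred), np.asarray(groups)
    rates = {g: y_pred[groups == g].mean() for g in np.unique(groups)}
    return {(a, b): abs(rates[a] - rates[b])
            for a, b in combinations(sorted(rates), 2)}

# Hypothetical subgroup labels of the form "gender-race".
y_pred = np.array([1, 1, 0, 1, 0, 0])
groups = np.array(["F-A", "F-B", "F-B", "M-A", "M-A", "M-B"])
gaps = pairwise_parity_gaps(y_pred, groups)
print(max(gaps, key=gaps.get), max(gaps.values()))  # worst pair and its gap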
arXiv Detail & Related papers (2020-10-26T11:35:39Z) - A Pairwise Fair and Community-preserving Approach to k-Center Clustering [34.386585230600716]
Clustering is a foundational problem in machine learning with numerous applications.
We define two new types of fairness in the clustering setting, pairwise fairness and community preservation.
arXiv Detail & Related papers (2020-07-14T22:32:27Z)