Quota-based debiasing can decrease representation of already
underrepresented groups
- URL: http://arxiv.org/abs/2006.07647v1
- Date: Sat, 13 Jun 2020 14:26:42 GMT
- Title: Quota-based debiasing can decrease representation of already
underrepresented groups
- Authors: Ivan Smirnov, Florian Lemmerich, Markus Strohmaier
- Abstract summary: We show that quota-based debiasing based on a single attribute can worsen the representation of already underrepresented groups and decrease overall fairness of selection.
Our results demonstrate the importance of including all relevant attributes in debiasing procedures and that more efforts need to be put into eliminating the root causes of inequalities.
- Score: 5.1135133995376085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many important decisions in societies such as school admissions, hiring, or
elections are based on the selection of top-ranking individuals from a larger
pool of candidates. This process is often subject to biases, which typically
manifest as an under-representation of certain groups among the selected or
accepted individuals. The most common approach to this issue is debiasing, for
example via the introduction of quotas that ensure proportional representation
of groups with respect to a certain, often binary attribute. Cases include
quotas for women on corporate boards or ethnic quotas in elections. This,
however, has the potential to induce changes in representation with respect to
other attributes. For the case of two correlated binary attributes we show that
quota-based debiasing based on a single attribute can worsen the representation
of already underrepresented groups and decrease overall fairness of selection.
We use several data sets from a broad range of domains from recidivism risk
assessments to scientific citations to assess this effect in real-world
settings. Our results demonstrate the importance of including all relevant
attributes in debiasing procedures and that more efforts need to be put into
eliminating the root causes of inequalities as purely numerical solutions such
as quota-based debiasing might lead to unintended consequences.
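The effect described in the abstract is easy to reproduce in a small simulation. The sketch below is not the paper's data or code; it is a minimal Python illustration with arbitrary, purely illustrative parameters: two correlated binary attributes A and B, an observed score biased against both A=1 and B=1, and top-k selection with and without a 50% quota on A.

```python
# Toy illustration (NOT the paper's data or code): two correlated binary
# attributes, a biased score, and top-k selection with and without a
# 50% quota on attribute A. All parameter values are arbitrary assumptions
# chosen only to make the effect visible.
import numpy as np

rng = np.random.default_rng(0)
n, k = 10_000, 1_000

# Attribute A (say, gender): A=1 makes up 30% of the candidate pool.
A = rng.random(n) < 0.30
# Attribute B (say, ethnicity), negatively correlated with A:
# B=1 is much rarer inside the A=1 group than inside the A=0 group.
B = rng.random(n) < np.where(A, 0.05, 0.20)

# Latent ability is identical across groups, but the observed score is
# biased against both A=1 and B=1.
score = rng.normal(size=n) - 0.5 * A - 0.3 * B

def shares(selected):
    """Fraction of A=1 and B=1 among the selected candidates."""
    return A[selected].mean(), B[selected].mean()

# Unconstrained selection: the k highest observed scores.
no_quota = np.argsort(score)[::-1][:k]

# Quota on A only: the top k/2 within A=1 plus the top k/2 within A=0.
idx_a1, idx_a0 = np.flatnonzero(A), np.flatnonzero(~A)
quota_on_a = np.concatenate([
    idx_a1[np.argsort(score[idx_a1])[::-1][: k // 2]],
    idx_a0[np.argsort(score[idx_a0])[::-1][: k // 2]],
])

print("pool shares        A=1: %.2f  B=1: %.2f" % (A.mean(), B.mean()))
print("no quota           A=1: %.2f  B=1: %.2f" % shares(no_quota))
print("50%% quota on A     A=1: %.2f  B=1: %.2f" % shares(quota_on_a))
```

With these illustrative parameters, the quota lifts the A=1 share of the selected set from roughly 15% to the enforced 50%, while the B=1 share, already below its pool share of about 0.15, drops further (roughly from 0.11 to 0.08), mirroring the effect the abstract describes. A companion sketch of an intersectional cell quota appears after the related-papers list below.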
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z)
- JobFair: A Framework for Benchmarking Gender Hiring Bias in Large Language Models [12.12628747941818]
This paper presents a novel framework for benchmarking hierarchical gender hiring bias in Large Language Models (LLMs) for resume scoring.
We introduce a new construct grounded in labour economics, legal principles, and critiques of current bias benchmarks.
We analyze gender hiring biases in ten state-of-the-art LLMs.
arXiv Detail & Related papers (2024-06-17T09:15:57Z)
- Evaluating the Fairness of Discriminative Foundation Models in Computer Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z)
- When mitigating bias is unfair: multiplicity and arbitrariness in algorithmic group fairness [8.367620276482056]
We introduce the FRAME (FaiRness Arbitrariness and Multiplicity Evaluation) framework, which evaluates bias mitigation through five dimensions.
Applying FRAME to various bias mitigation approaches across key datasets allows us to exhibit significant differences in the behaviors of debiasing methods.
These findings highlight the limitations of current fairness criteria and the inherent arbitrariness in the debiasing process.
arXiv Detail & Related papers (2023-02-14T16:53:52Z)
- Selection in the Presence of Implicit Bias: The Advantage of Intersectional Constraints [21.230595548980574]
In selection processes such as hiring, promotion, and college admissions, implicit bias toward socially-salient attributes is known to produce persistent inequality.
We show that, in the intersectional case, the aforementioned non-intersectional constraints can only recover part of the total utility achievable in the absence of implicit bias.
arXiv Detail & Related papers (2022-02-03T16:21:50Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- Intersectional Affirmative Action Policies for Top-k Candidates Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
arXiv Detail & Related papers (2020-07-29T12:27:18Z)
- Contrastive Examples for Addressing the Tyranny of the Majority [83.93825214500131]
We propose to create a balanced training dataset, consisting of the original dataset plus new data points in which the group memberships are intervened.
We show that current generative adversarial networks are a powerful tool for learning these data points, called contrastive examples.
arXiv Detail & Related papers (2020-04-14T14:06:44Z)
- Interventions for Ranking in the Presence of Implicit Bias [34.23230188778088]
Implicit bias is the unconscious attribution of particular qualities (or lack thereof) to a member from a particular social group.
The Rooney Rule is a constraint that improves the utility of the outcome in certain cases of the subset selection problem.
We present a family of simple and interpretable constraints and show that they can optimally mitigate implicit bias.
arXiv Detail & Related papers (2020-01-23T19:11:31Z)
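Several of the related papers above (notably the intersectional-constraints and intersectional affirmative-action entries) point to the same remedy the main abstract suggests: constrain the joint attribute combinations rather than a single attribute. The fragment below is a hypothetical continuation of the earlier simulation sketch, not any of these papers' algorithms; it enforces proportional representation on the four A x B cells.

```python
# Hypothetical continuation of the earlier sketch: a proportional quota on the
# joint (A, B) cells instead of on A alone. Cell quotas are rounded, so the
# selected set may differ from k by a few candidates.
import numpy as np

def intersectional_top_k(score, A, B, k):
    """Pick the top scorers within each (A, B) cell, proportionally to cell size."""
    n, picked = len(score), []
    for a in (0, 1):
        for b in (0, 1):
            cell = np.flatnonzero((A == a) & (B == b))
            k_cell = round(k * len(cell) / n)          # proportional cell quota
            picked.append(cell[np.argsort(score[cell])[::-1][:k_cell]])
    return np.concatenate(picked)
```

Applied to the simulated pool above, this keeps both the A=1 and B=1 shares of the selected set at (approximately) their pool shares by construction; the question the paper raises is whether all relevant attributes are actually observed and included in such a procedure in the first place.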
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.