Intersectional Affirmative Action Policies for Top-k Candidates Selection
- URL: http://arxiv.org/abs/2007.14775v2
- Date: Fri, 5 Mar 2021 16:21:37 GMT
- Title: Intersectional Affirmative Action Policies for Top-k Candidates Selection
- Authors: Giorgio Barnabò, Carlos Castillo, Michael Mathioudakis, Sergio Celis
- Abstract summary: We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor's degrees in an OECD country.
- Score: 3.4961413413444817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the problem of selecting the top-k candidates from a pool of
applicants, where each candidate is associated with a score indicating his/her
aptitude. Depending on the specific scenario, such as job search or college
admissions, these scores may be the results of standardized tests or other
predictors of future performance and utility. We consider a situation in which
some groups of candidates experience historical and present disadvantage that
makes their chances of being accepted much lower than other groups. In these
circumstances, we wish to apply an affirmative action policy to reduce
acceptance rate disparities, while avoiding any large decrease in the aptitude
of the candidates that are eventually selected. Our algorithmic design is
motivated by the frequently observed phenomenon that discrimination
disproportionately affects individuals who simultaneously belong to multiple
disadvantaged groups, defined along intersecting dimensions such as gender,
race, sexual orientation, socio-economic status, and disability. In short, our
algorithm's objective is to simultaneously select candidates with high utility
and level up the representation of disadvantaged intersectional classes. This
naturally involves trade-offs and is computationally challenging due to the
combinatorial explosion of potential subgroups as more attributes are
considered. We propose two algorithms to solve this problem, analyze them, and
evaluate them experimentally using a dataset of university application scores
and admissions to bachelor's degrees in an OECD country. Our
conclusion is that it is possible to significantly reduce disparities in
admission rates affecting intersectional classes with a small loss in terms of
selected candidate aptitude. To the best of our knowledge, we are the first to
study fairness constraints with regard to intersectional classes in the context
of top-k selection.
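To make the utility/representation trade-off concrete, here is a minimal sketch of one simple selection rule. This is not the paper's two algorithms (the abstract does not spell them out); it is a hedged greedy baseline under assumed per-class lower bounds, where each candidate carries a score and a tuple of protected attributes identifying their intersectional class, and hypothetical "floors" reserve a minimum number of slots per disadvantaged class before the remaining slots are filled purely by score.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass(frozen=True)
class Candidate:
    score: float             # aptitude score, e.g. a standardized test result
    attrs: Tuple[str, ...]   # intersectional class, e.g. ("female", "low-SES")

def greedy_fair_top_k(
    candidates: List[Candidate],
    k: int,
    floors: Dict[Tuple[str, ...], int],  # assumed minimum selections per class
) -> List[Candidate]:
    """Select k candidates: satisfy per-class floors first, then fill by score.

    A hedged baseline, not the paper's algorithms: floors "level up"
    disadvantaged intersectional classes, while both passes take the
    highest-scoring eligible candidates to limit the loss in aptitude.
    """
    if sum(floors.values()) > k:
        raise ValueError("per-class floors exceed the number of slots k")

    by_score = sorted(candidates, key=lambda c: c.score, reverse=True)
    selected: List[Candidate] = []
    taken = set()

    # Pass 1: reserve slots for each protected intersectional class,
    # filling them with that class's highest-scoring candidates.
    for cls, floor in floors.items():
        for c in (c for c in by_score if c.attrs == cls):
            if floor == 0:
                break
            selected.append(c)
            taken.add(id(c))
            floor -= 1

    # Pass 2: fill the remaining slots purely by score.
    for c in by_score:
        if len(selected) == k:
            break
        if id(c) not in taken:
            selected.append(c)
            taken.add(id(c))
    return selected

if __name__ == "__main__":
    pool = [
        Candidate(92.0, ("male", "high-SES")),
        Candidate(88.0, ("male", "high-SES")),
        Candidate(85.0, ("female", "low-SES")),
        Candidate(80.0, ("female", "low-SES")),
    ]
    # With k=2 and no floors, both slots go to the first class; a floor of 1
    # for ("female", "low-SES") levels up that class at a small score cost.
    print(greedy_fair_top_k(pool, k=2, floors={("female", "low-SES"): 1}))
```

Filling each floor with the class's best candidates and topping up by score keeps the aptitude loss small, mirroring the trade-off the abstract describes; choosing the floors themselves across all intersectional classes is where the real algorithmic difficulty, including the combinatorial explosion over subgroups, lies.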
Related papers
- The Root Shapes the Fruit: On the Persistence of Gender-Exclusive Harms in Aligned Language Models [58.130894823145205]
We center transgender, nonbinary, and other gender-diverse identities to investigate how alignment procedures interact with pre-existing gender-diverse bias.
Our findings reveal that DPO-aligned models are particularly sensitive to supervised finetuning.
We conclude with recommendations tailored to DPO and broader alignment practices.
arXiv Detail & Related papers (2024-11-06T06:50:50Z) - Algorithms for College Admissions Decision Support: Impacts of Policy Change and Inherent Variability [18.289154814012996]
We show that removing race data from a developed applicant ranking algorithm reduces the diversity of the top-ranked pool without meaningfully increasing the academic merit of that pool.
We measure the impact of policy change on individuals by comparing the arbitrariness in applicant rank attributable to policy change to the arbitrariness attributable to randomness.
arXiv Detail & Related papers (2024-06-24T14:59:30Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pre-training (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - In Search of Insights, Not Magic Bullets: Towards Demystification of the
Model Selection Dilemma in Heterogeneous Treatment Effect Estimation [92.51773744318119]
This paper empirically investigates the strengths and weaknesses of different model selection criteria.
We highlight that there is a complex interplay between selection strategies, candidate estimators and the data used for comparing them.
arXiv Detail & Related papers (2023-02-06T16:55:37Z) - Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z) - Fairness in Selection Problems with Strategic Candidates [9.4148805532663]
We study how the strategic aspect affects fairness in selection problems.
A population of rational candidates compete by choosing an effort level to increase their quality.
We characterize the (unique) equilibrium of this game in the different parameter regimes.
arXiv Detail & Related papers (2022-05-24T17:03:32Z) - An Outcome Test of Discrimination for Ranked Lists [0.18416014644193063]
We show that non-discrimination implies a system of moment inequalities.
We show how to statistically test the implied inequalities, and validate our approach in an application using data from LinkedIn.
arXiv Detail & Related papers (2021-11-15T16:42:57Z) - Fair Sequential Selection Using Supervised Learning Models [11.577534539649374]
We consider a selection problem where sequentially arrived applicants apply for a limited number of positions/jobs.
We show that even with a pre-trained model that satisfies the common fairness notions, the selection outcomes may still be biased against certain demographic groups.
We introduce a new fairness notion, "Equal Selection" (ES), suitable for sequential selection problems and propose a post-processing approach to satisfy the ES fairness notion.
arXiv Detail & Related papers (2021-10-26T19:45:26Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Quota-based debiasing can decrease representation of already
underrepresented groups [5.1135133995376085]
We show that quota-based debiasing based on a single attribute can worsen the representation of already underrepresented groups and decrease overall fairness of selection.
Our results demonstrate the importance of including all relevant attributes in debiasing procedures and show that more effort needs to go into eliminating the root causes of inequalities.
arXiv Detail & Related papers (2020-06-13T14:26:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.