Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay
of Human and Algorithmic Biases in Online Hiring
- URL: http://arxiv.org/abs/2012.00423v2
- Date: Thu, 8 Apr 2021 09:31:51 GMT
- Authors: Tom Sühr, Sophie Hilgard, Himabindu Lakkaraju
- Abstract summary: We analyze various sources of gender biases in online hiring platforms, including the job context and inherent biases of the employers.
Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness relies heavily on the job contexts and candidate profiles.
- Score: 9.21721532941863
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ranking algorithms are being widely employed in various online hiring
platforms including LinkedIn, TaskRabbit, and Fiverr. Prior research has
demonstrated that ranking algorithms employed by these platforms are prone to a
variety of undesirable biases, leading to the proposal of fair ranking
algorithms (e.g., Det-Greedy) which increase exposure of underrepresented
candidates. However, there is little to no work that explores whether fair
ranking algorithms actually improve real world outcomes (e.g., hiring
decisions) for underrepresented groups. Furthermore, there is no clear
understanding as to how other factors (e.g., job context, inherent biases of
the employers) may impact the efficacy of fair ranking in practice. In this
work, we analyze various sources of gender biases in online hiring platforms,
including the job context and inherent biases of employers, and we establish how
these factors interact with ranking algorithms to affect hiring decisions. To
the best of our knowledge, this work makes the first attempt at studying the
interplay between the aforementioned factors in the context of online hiring.
We carry out a large-scale user study simulating online hiring scenarios with
data from TaskRabbit, a popular online freelancing site. Our results
demonstrate that while fair ranking algorithms generally improve the selection
rates of underrepresented minorities, their effectiveness relies heavily on the
job contexts and candidate profiles.
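As a concrete illustration of the kind of fair ranking algorithm the abstract refers to, below is a minimal sketch of a Det-Greedy-style re-ranker that enforces a minimum proportion of a protected group at every ranking prefix. The candidate scores, the group labels "A"/"B", and the parameter p are illustrative assumptions, not data or parameters from the paper.

```python
import math

def det_greedy(candidates, p):
    """Re-rank candidates so that every prefix of length k contains at
    least floor(p * k) members of the protected group "B"; when the
    constraint is slack, pick the highest-scoring remaining candidate.

    candidates: list of (score, group) pairs, sorted by score descending.
    """
    remaining = {g: [c for c in candidates if c[1] == g] for g in ("A", "B")}
    ranking = []
    for k in range(1, len(candidates) + 1):
        need_b = math.floor(p * k)
        have_b = sum(1 for c in ranking if c[1] == "B")
        if have_b < need_b and remaining["B"]:
            pick = remaining["B"].pop(0)  # representation constraint binds
        else:
            # otherwise take the best-scoring head across both groups
            best_group = max((g for g in remaining if remaining[g]),
                             key=lambda g: remaining[g][0][0])
            pick = remaining[best_group].pop(0)
        ranking.append(pick)
    return ranking

cands = [(0.9, "A"), (0.8, "A"), (0.7, "B"), (0.6, "A"), (0.5, "B")]
print(det_greedy(cands, p=0.5))
# moves the "B" candidates up to satisfy the prefix constraint
```

With p = 0 the constraint never binds and the ranking reduces to plain score order; raising p increases the exposure of the protected group higher in the list, which is the mechanism whose downstream effect on hiring decisions the paper studies.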
Related papers
- Understanding the performance gap between online and offline alignment algorithms [63.137832242488926]
We show that offline algorithms train the policy to become good at pairwise classification, while online algorithms are good at generation.
This hints at a unique interplay between discriminative and generative capabilities, which is greatly impacted by the sampling process.
Our study sheds light on the pivotal role of on-policy sampling in AI alignment, and hints at certain fundamental challenges of offline alignment algorithms.
arXiv Detail & Related papers (2024-05-14T09:12:30Z) - Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms are proposed to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning to rank algorithm, named BAL, to automatically find the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
arXiv Detail & Related papers (2022-10-19T16:53:08Z) - D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling
Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z) - Choosing an algorithmic fairness metric for an online marketplace:
Detecting and quantifying algorithmic bias on LinkedIn [0.21756081703275995]
We derive an algorithmic fairness metric from the fairness notion of equal opportunity for equally qualified candidates.
We use the proposed method to measure and quantify algorithmic bias with respect to gender of two algorithms used by LinkedIn.
arXiv Detail & Related papers (2022-02-15T10:33:30Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - When Fair Ranking Meets Uncertain Inference [5.33312437416961]
We show how demographic inferences drawn from real systems can lead to unfair rankings.
Our results suggest developers should not use inferred demographic data as input to fair ranking algorithms.
arXiv Detail & Related papers (2021-05-05T14:40:07Z) - Algorithms are not neutral: Bias in collaborative filtering [0.0]
Discussions of algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased.
This is illustrated with the example of collaborative filtering, which is known to suffer from popularity and homogenizing biases.
Popularity and homogenizing biases have the effect of further marginalizing the already marginal.
arXiv Detail & Related papers (2021-05-03T17:28:43Z) - Auditing for Discrimination in Algorithms Delivering Job Ads [70.02478301291264]
We develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements.
Our first contribution is to identify the distinction between skew in ad delivery due to protected categories such as gender or race and skew due to other causes.
Second, we develop an auditing methodology that distinguishes skew explainable by differences in qualifications from skew due to other factors.
Third, we apply our proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn.
arXiv Detail & Related papers (2021-04-09T17:38:36Z) - PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z) - User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided
Markets [28.537935838669423]
We show that user fairness, item fairness and diversity are fundamentally different concepts.
We present the first ranking algorithm that explicitly enforces all three desiderata.
arXiv Detail & Related papers (2020-10-04T02:53:09Z) - Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z)
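As a side note on the PiRank entry above: the following is a minimal sketch of the temperature idea behind differentiable ranking surrogates, using a simple pairwise-sigmoid soft rank rather than the paper's actual construction. As the temperature goes to zero, the soft ranks recover the hard ranks, which is the limit property the summary refers to. The scores and temperatures are illustrative assumptions.

```python
import math

def soft_ranks(scores, temperature):
    """Soft rank of item i = 1 + sum over j != i of
    sigmoid((s_j - s_i) / temperature): a smooth, differentiable
    relaxation of the item's position in a descending sort."""
    n = len(scores)
    ranks = []
    for i in range(n):
        r = 1.0
        for j in range(n):
            if i != j:
                r += 1.0 / (1.0 + math.exp(-(scores[j] - scores[i]) / temperature))
        ranks.append(r)
    return ranks

scores = [2.0, 0.5, 1.0]
print(soft_ranks(scores, temperature=1.0))   # smooth fractional ranks
print(soft_ranks(scores, temperature=0.01))  # approaches the hard ranks [1, 3, 2]
```

Because the soft ranks are differentiable in the scores, a ranking metric computed from them can be optimized by gradient descent; annealing the temperature toward zero tightens the surrogate toward the true metric.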
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.