An Outcome Test of Discrimination for Ranked Lists
- URL: http://arxiv.org/abs/2111.07889v1
- Date: Mon, 15 Nov 2021 16:42:57 GMT
- Title: An Outcome Test of Discrimination for Ranked Lists
- Authors: Jonathan Roth, Guillaume Saint-Jacques, YinYin Yu
- Abstract summary: We show that non-discrimination implies a system of moment inequalities.
We show how to statistically test the implied inequalities, and validate our approach in an application using data from LinkedIn.
- Score: 0.18416014644193063
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper extends Becker (1957)'s outcome test of discrimination to settings
where a (human or algorithmic) decision-maker produces a ranked list of
candidates. Ranked lists are particularly relevant in the context of online
platforms that produce search results or feeds, and also arise when human
decision-makers express ordinal preferences over a list of candidates. We show
that non-discrimination implies a system of moment inequalities, which
intuitively impose that one cannot permute the position of a lower-ranked
candidate from one group with a higher-ranked candidate from a second group and
systematically improve the objective. Moreover, we show that these moment
inequalities are the only testable implications of non-discrimination when the
auditor observes only outcomes and group membership by rank. We show how to
statistically test the implied inequalities, and validate our approach in an
application using data from LinkedIn.
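To make the swap intuition concrete, the sketch below implements a stylized version of the implied test: for each pair of positions j &lt; k and each ordered pair of groups, non-discrimination implies that swapping a lower-ranked candidate of one group with a higher-ranked candidate of the other cannot systematically improve the objective, which (abstracting from position effects, and taking observed outcomes as a proxy for the ranker's objective) yields one-sided moments E[Y_j - Y_k] >= 0 that can be checked with t-tests. This is a minimal illustration under those assumptions, not the authors' estimator; the function name and data layout are hypothetical.

```python
import numpy as np
from scipy import stats

def swap_moment_tests(groups, outcomes, min_obs=30):
    """Stylized outcome test for ranked lists (illustrative sketch, not the
    paper's estimator). groups[i, r] is the group label of the candidate at
    rank r in list i; outcomes[i, r] is that candidate's realized outcome
    (e.g., a click), assumed here to be free of position effects.

    For ranks j < k and ordered group pair (hi, lo), non-discrimination
    implies E[outcomes[:, j] - outcomes[:, k]] >= 0 on lists where group hi
    sits at rank j and group lo at rank k; otherwise swapping the two
    candidates would systematically improve the objective.
    """
    n_lists, n_ranks = groups.shape
    results = {}
    for j in range(n_ranks):              # higher-ranked position
        for k in range(j + 1, n_ranks):   # lower-ranked position
            for hi in np.unique(groups):
                for lo in np.unique(groups):
                    if hi == lo:
                        continue
                    mask = (groups[:, j] == hi) & (groups[:, k] == lo)
                    if mask.sum() < min_obs:
                        continue
                    diff = outcomes[mask, j] - outcomes[mask, k]
                    # One-sided test of H0: E[diff] >= 0 vs. E[diff] < 0.
                    # A small p-value flags a profitable swap, i.e. evidence
                    # that group `lo` is systematically under-ranked.
                    _, p_val = stats.ttest_1samp(diff, 0.0, alternative="less")
                    results[(j, k, hi, lo)] = (diff.mean(), p_val)
    return results

# Toy usage on synthetic (non-discriminatory) data:
rng = np.random.default_rng(0)
g = rng.choice(["A", "B"], size=(5000, 4))
y = rng.binomial(1, 0.3, size=(5000, 4)).astype(float)
for key, (m, p) in swap_moment_tests(g, y).items():
    print(key, f"mean diff = {m:+.3f}, one-sided p = {p:.3f}")
```

In practice one would also need to account for the multiplicity of moments being tested; the paper's statistical procedure handles this formally.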
Related papers
- Stability and Multigroup Fairness in Ranking with Uncertain Predictions [61.76378420347408]
Our work considers ranking functions: maps from individual predictions for a classification task to distributions over rankings.
We focus on two aspects of ranking functions: stability to perturbations in predictions and fairness towards both individuals and subgroups.
Our work demonstrates that uncertainty aware rankings naturally interpolate between group and individual level fairness guarantees.
arXiv Detail & Related papers (2024-02-14T17:17:05Z) - Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - Bounding Counterfactuals under Selection Bias [60.55840896782637]
We propose a first algorithm to address both identifiable and unidentifiable queries.
We prove that, in spite of the missingness induced by the selection bias, the likelihood of the available data is unimodal.
arXiv Detail & Related papers (2022-07-26T10:33:10Z) - Choosing an algorithmic fairness metric for an online marketplace: Detecting and quantifying algorithmic bias on LinkedIn [0.21756081703275995]
We derive an algorithmic fairness metric from the fairness notion of equal opportunity for equally qualified candidates.
We use the proposed method to measure and quantify algorithmic bias with respect to gender in two algorithms used by LinkedIn.
arXiv Detail & Related papers (2022-02-15T10:33:30Z) - Fair Sequential Selection Using Supervised Learning Models [11.577534539649374]
We consider a selection problem where sequentially arrived applicants apply for a limited number of positions/jobs.
We show that even with a pre-trained model that satisfies the common fairness notions, the selection outcomes may still be biased against certain demographic groups.
We introduce a new fairness notion, "Equal Selection" (ES), suitable for sequential selection problems and propose a post-processing approach to satisfy the ES fairness notion.
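As a toy illustration of one way sequential arrival can bias outcomes (a hypothetical mechanism for intuition, not the paper's model or its ES post-processing): if slots fill greedily and one group tends to arrive later in the queue, equally qualified applicants from that group are rejected once positions run out.

```python
import random

def greedy_sequential_selection(applicants, n_slots, threshold=0.5):
    """Accept each arriving applicant whose predicted score clears the
    threshold while slots remain. Toy sketch, not the paper's method."""
    selected = []
    for group, score in applicants:
        if len(selected) == n_slots:
            break
        if score >= threshold:
            selected.append(group)
    return selected

random.seed(0)
# Scores are identically distributed across groups (equally qualified),
# but group "B" applicants happen to arrive later in the queue.
queue = [("A", random.random()) for _ in range(50)] + \
        [("B", random.random()) for _ in range(50)]
picked = greedy_sequential_selection(queue, n_slots=20)
print({g: picked.count(g) for g in ("A", "B")})  # slots fill before most of "B" arrives
```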
arXiv Detail & Related papers (2021-10-26T19:45:26Z) - Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z) - Social Norm Bias: Residual Harms of Fairness-Aware Algorithms [21.50551404445654]
Social Norm Bias (SNoB) is a subtle but consequential type of discrimination that may be exhibited by automated decision-making systems.
We quantify SNoB by measuring how an algorithm's predictions are associated with conformity to gender norms.
We show that post-processing interventions do not mitigate this type of bias at all.
arXiv Detail & Related papers (2021-08-25T05:54:56Z) - Auditing for Discrimination in Algorithms Delivering Job Ads [70.02478301291264]
We develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements.
Our first contribution is to distinguish between skew in ad delivery due to protected categories such as gender or race and skew due to differences in qualifications among the targeted audience.
Second, we develop an auditing methodology that separates skew explainable by differences in qualifications from skew due to other factors.
Third, we apply our proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn.
arXiv Detail & Related papers (2021-04-09T17:38:36Z) - Intersectional Affirmative Action Policies for Top-k Candidates Selection [3.4961413413444817]
We study the problem of selecting the top-k candidates from a pool of applicants, where each candidate is associated with a score indicating his/her aptitude.
We consider a situation in which some groups of candidates experience historical and present disadvantage that makes their chances of being accepted much lower than other groups.
We propose two algorithms to solve this problem, analyze them, and evaluate them experimentally using a dataset of university application scores and admissions to bachelor degrees in an OECD country.
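For intuition about the problem setting, a generic score-based top-k selector with per-group minimums might look like the sketch below; this is a hypothetical baseline, not either of the paper's two algorithms, and the function name and quota scheme are illustrative assumptions.

```python
def topk_with_group_floors(scores, groups, k, floors):
    """Select k candidate indices by score, subject to a minimum number of
    selections per group. Hypothetical baseline, not the paper's algorithms.
    floors maps group label -> minimum number of seats for that group."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    chosen = []
    for g, m in floors.items():
        # Reserve each group's floor for its highest-scoring members.
        chosen.extend([i for i in order if groups[i] == g][:m])
    seen = set(chosen)
    for i in order:                     # fill remaining seats purely by score
        if len(chosen) == k:
            break
        if i not in seen:
            chosen.append(i)
            seen.add(i)
    return sorted(chosen[:k], key=lambda i: -scores[i])

scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4]
groups = ["A", "A", "A", "A", "B", "B"]
print(topk_with_group_floors(scores, groups, k=3, floors={"B": 1}))
# -> [0, 1, 4]: the two top "A" scores plus the best "B" candidate
```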
arXiv Detail & Related papers (2020-07-29T12:27:18Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)