Choosing an algorithmic fairness metric for an online marketplace:
Detecting and quantifying algorithmic bias on LinkedIn
- URL: http://arxiv.org/abs/2202.07300v2
- Date: Mon, 22 Aug 2022 13:34:54 GMT
- Title: Choosing an algorithmic fairness metric for an online marketplace:
Detecting and quantifying algorithmic bias on LinkedIn
- Authors: YinYin Yu, Guillaume Saint-Jacques
- Abstract summary: We derive an algorithmic fairness metric from the fairness notion of equal opportunity for equally qualified candidates.
We use the proposed method to measure and quantify algorithmic bias with respect to gender of two algorithms used by LinkedIn.
- Score: 0.21756081703275995
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we derive an algorithmic fairness metric from the fairness
notion of equal opportunity for equally qualified candidates for recommendation
algorithms commonly used by two-sided marketplaces. We borrow from the economic
literature on discrimination to arrive at a test for detecting bias that is
solely attributable to the algorithm, as opposed to other sources such as
societal inequality or human bias on the part of platform users. We use the
proposed method to measure and quantify algorithmic bias with respect to gender
of two algorithms used by LinkedIn, a popular online platform used by job
seekers and employers. Moreover, we introduce a framework and the rationale for
distinguishing algorithmic bias from human bias, both of which can potentially
exist on a two-sided platform where algorithms make recommendations to human
users. Finally, we discuss the shortcomings of a few other common algorithmic
fairness metrics and why they do not capture the fairness notion of equal
opportunity for equally qualified candidates.
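To make the abstract's contrast concrete, here is a minimal sketch, under assumed data, of how "equal opportunity for equally qualified candidates" differs from an unconditional (demographic-parity-style) comparison. The column names, the 1-5 qualification proxy, and the synthetic recommendation rule are all hypothetical and are not the paper's dataset or estimator.

```python
# Illustrative sketch only: hypothetical column names and synthetic data, not the
# paper's dataset or estimator. The toy "algorithm" below scores candidates on
# qualification alone, so any unconditional gender gap it produces reflects the
# skewed qualification distribution (a stand-in for societal inequality), while
# the gap among equally qualified candidates is approximately zero.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 10_000
gender = rng.choice(["F", "M"], size=n)
# Assumed 1-5 qualification proxy, deliberately skewed by gender.
qualification = rng.binomial(4, np.where(gender == "M", 0.6, 0.4)) + 1
candidates = pd.DataFrame({"gender": gender, "qualification": qualification})
# Probability of being recommended depends only on qualification.
candidates["recommended"] = rng.uniform(size=n) < 0.1 + 0.1 * candidates["qualification"]

# Demographic-parity-style comparison: recommendation rates ignoring qualification.
overall = candidates.groupby("gender")["recommended"].mean()
print("Unconditional gap (F - M):", round(overall["F"] - overall["M"], 3))

# Equal-opportunity-for-equally-qualified comparison: recommendation rates within
# each qualification stratum, i.e., among (approximately) equally qualified candidates.
by_stratum = (candidates
              .groupby(["qualification", "gender"])["recommended"]
              .mean()
              .unstack("gender"))
print("Within-qualification gaps (F - M):")
print((by_stratum["F"] - by_stratum["M"]).round(3))
```

In this sketch the toy algorithm is unbiased by construction, so the within-qualification gaps are near zero even though the unconditional gap is not; the paper's test is aimed at isolating exactly this kind of residual, algorithm-attributable gap from gaps driven by societal inequality or human bias.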
Related papers
- Exploring Gender Disparities in Bumble's Match Recommendations [0.27309692684728604]
We study bias and discrimination in the context of Bumble, an online dating platform in India.
We conduct an experiment to identify and address the presence of bias in the matching algorithms Bumble pushes to its users.
arXiv Detail & Related papers (2023-12-15T09:09:42Z)
- Fairness in Matching under Uncertainty [78.39459690570531]
The prevalence of algorithmic two-sided marketplaces has drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations (a toy LP sketch of this general shape appears after this list).
arXiv Detail & Related papers (2023-02-08T00:30:32Z)
- Stochastic Differentially Private and Fair Learning [7.971065005161566]
We provide the first differentially private algorithm for fair learning that is guaranteed to converge.
Our framework is flexible enough to permit different fairness notions, including demographic parity and equalized odds.
Our algorithm can be applied to non-binary classification tasks with multiple (non-binary) sensitive attributes.
arXiv Detail & Related papers (2022-10-17T06:54:57Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Statistical discrimination in learning agents [64.78141757063142]
Statistical discrimination emerges in agent policies as a function of both the bias in the training population and of agent architecture.
We show that less discrimination emerges with agents that use recurrent neural networks, and when their training environment has less bias.
arXiv Detail & Related papers (2021-10-21T18:28:57Z)
- Machine Learning for Online Algorithm Selection under Censored Feedback [71.6879432974126]
In online algorithm selection (OAS), instances of an algorithmic problem class are presented to an agent one after another, and the agent has to quickly select a presumably best algorithm from a fixed set of candidate algorithms.
For decision problems such as satisfiability (SAT), quality typically refers to the algorithm's runtime.
In this work, we revisit multi-armed bandit algorithms for OAS and discuss their capability of dealing with the problem.
We adapt them towards runtime-oriented losses, allowing for partially censored data while keeping a space- and time-complexity independent of the time horizon.
arXiv Detail & Related papers (2021-09-13T18:10:52Z)
- Algorithms are not neutral: Bias in collaborative filtering [0.0]
Discussions of algorithmic bias tend to focus on examples where either the data or the people building the algorithms are biased.
This is illustrated with the example of collaborative filtering, which is known to suffer from popularity and homogenizing biases.
Popularity and homogenizing biases have the effect of further marginalizing the already marginal.
arXiv Detail & Related papers (2021-05-03T17:28:43Z)
- Auditing for Discrimination in Algorithms Delivering Job Ads [70.02478301291264]
We develop a new methodology for black-box auditing of algorithms for discrimination in the delivery of job advertisements.
Our first contribution is to identify the distinction between skew in ad delivery due to protected categories such as gender or race and skew due to differences in qualifications among the targeted audience.
Second, we develop an auditing methodology that distinguishes skew explainable by differences in qualifications from skew due to other factors.
Third, we apply our proposed methodology to two prominent targeted advertising platforms for job ads: Facebook and LinkedIn.
arXiv Detail & Related papers (2021-04-09T17:38:36Z)
- Metrics and methods for a systematic comparison of fairness-aware machine learning algorithms [0.0]
This study is the most comprehensive of its kind, evaluating the fairness, predictive performance, calibration quality, and speed of 28 different modelling pipelines.
We also found that fairness-aware algorithms can induce fairness without material drops in predictive power.
arXiv Detail & Related papers (2020-10-08T13:58:09Z)
- User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets [28.537935838669423]
We show that user fairness, item fairness and diversity are fundamentally different concepts.
We present the first ranking algorithm that explicitly enforces all three desiderata.
arXiv Detail & Related papers (2020-10-04T02:53:09Z)
- Towards causal benchmarking of bias in face analysis algorithms [54.19499274513654]
We develop an experimental method for measuring algorithmic bias of face analysis algorithms.
Our proposed method is based on generating synthetic "transects" of matched sample images.
We validate our method by comparing it to a study that employs the traditional observational method for analyzing bias in gender classification algorithms.
arXiv Detail & Related papers (2020-07-13T17:10:34Z)
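For the "Fairness in Matching under Uncertainty" entry above, the following toy sketch shows the general shape of such a linear program: maximize expected utility over marginal assignment probabilities subject to matching constraints and a fairness-style constraint that candidates with equal expected merit receive equal total assignment probability. The utilities, merits, and the specific constraint are assumptions for illustration, not the paper's axioms or formulation.

```python
# Toy illustration only: a generic fair-matching LP under assumed utilities and
# merits; it is not the formulation from "Fairness in Matching under Uncertainty".
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_candidates, n_positions = 4, 2
utility = rng.uniform(size=(n_candidates, n_positions))  # platform utility u[i, j]
merit = np.array([0.9, 0.9, 0.5, 0.2])                   # expected merit per candidate

# Variables: x = vec(P), where P[i, j] is the probability candidate i fills position j.
c = -utility.ravel()  # linprog minimizes, so negate to maximize expected utility

A_ub, b_ub = [], []
for i in range(n_candidates):          # each candidate assigned at most once in expectation
    row = np.zeros(n_candidates * n_positions)
    row[i * n_positions:(i + 1) * n_positions] = 1.0
    A_ub.append(row); b_ub.append(1.0)
for j in range(n_positions):           # each position filled at most once in expectation
    row = np.zeros(n_candidates * n_positions)
    row[j::n_positions] = 1.0
    A_ub.append(row); b_ub.append(1.0)

# Fairness-style constraint (assumed): equally meritorious candidates get equal
# total assignment probability.
A_eq, b_eq = [], []
for i in range(n_candidates):
    for k in range(i + 1, n_candidates):
        if np.isclose(merit[i], merit[k]):
            row = np.zeros(n_candidates * n_positions)
            row[i * n_positions:(i + 1) * n_positions] = 1.0
            row[k * n_positions:(k + 1) * n_positions] -= 1.0
            A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=(0.0, 1.0), method="highs")
print(res.x.reshape(n_candidates, n_positions).round(3))
```

In the paper's setting the fairness requirement is axiomatized over distributions of merit rather than point estimates; the sketch only conveys how such requirements can enter a linear program as linear constraints.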
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.