When Fair Ranking Meets Uncertain Inference
- URL: http://arxiv.org/abs/2105.02091v1
- Date: Wed, 5 May 2021 14:40:07 GMT
- Title: When Fair Ranking Meets Uncertain Inference
- Authors: Avijit Ghosh, Ritam Dutt, Christo Wilson
- Abstract summary: We show how demographic inferences drawn from real systems can lead to unfair rankings.
Our results suggest developers should not use inferred demographic data as input to fair ranking algorithms.
- Score: 5.33312437416961
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing fair ranking systems, especially those designed to be
demographically fair, assume that accurate demographic information about
individuals is available to the ranking algorithm. In practice, however, this
assumption may not hold -- in real-world contexts like ranking job applicants
or credit seekers, social and legal barriers may prevent algorithm operators
from collecting people's demographic information. In these cases, algorithm
operators may attempt to infer people's demographics and then supply these
inferences as inputs to the ranking algorithm.
In this study, we investigate how uncertainty and errors in demographic
inference impact the fairness offered by fair ranking algorithms. Using
simulations and three case studies with real datasets, we show how demographic
inferences drawn from real systems can lead to unfair rankings. Our results
suggest that developers should not use inferred demographic data as input to
fair ranking algorithms, unless the inferences are extremely accurate.
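The failure mode the abstract describes is easy to reproduce in a few lines. Below is a minimal sketch (our own toy simulation, not the authors' code) of that kind of experiment: a quota-based fair top-k ranker is fed group labels that an inference step gets wrong with some probability, and we measure how the true minority share of the top-k degrades as that error grows. All names and parameters are illustrative.

```python
# Toy simulation: a quota-based "fair" top-k ranker receives group labels
# that are wrong with probability `error_rate`; we track the *true*
# minority share in the top-k as the inference error grows.
import numpy as np

rng = np.random.default_rng(0)

def fair_top_k(scores, groups, k, minority_share=0.5):
    """Greedy proportional top-k: reserve a floor of slots for group 1."""
    quota = int(round(k * minority_share))
    order = np.argsort(-scores)
    picked, n_minority = [], 0
    for i in order:
        slots_left = k - len(picked)
        minority_needed = quota - n_minority
        if groups[i] == 1:
            picked.append(i); n_minority += 1
        elif slots_left > minority_needed:  # room left for a majority pick
            picked.append(i)
        if len(picked) == k:
            break
    return np.array(picked)

n, k = 1000, 50
scores = rng.normal(size=n)
true_groups = rng.integers(0, 2, size=n)  # 0 = majority, 1 = minority

for error_rate in [0.0, 0.1, 0.2, 0.4]:
    flips = rng.random(n) < error_rate          # imperfect demographic inference
    inferred = np.where(flips, 1 - true_groups, true_groups)
    top = fair_top_k(scores, inferred, k)
    print(f"error={error_rate:.1f}  true minority share in top-{k}: "
          f"{true_groups[top].mean():.2f}")
```

The ranker satisfies its quota on the inferred labels at every error rate, but the guarantee it provides on the true labels erodes as inference accuracy drops, which is the paper's core point.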
Related papers
- Auditing for Bias in Ad Delivery Using Inferred Demographic Attributes [50.37313459134418]
We study the effects of inference error on auditing for bias in one prominent application: a black-box audit of ad delivery using paired ads.
We propose a way to mitigate the inference error when evaluating skew in ad delivery algorithms.
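As background for the skew-mitigation idea, here is a hedged sketch of one standard correction (not necessarily this paper's method): invert the demographic-inference model's confusion matrix to recover estimated true group counts from inferred ones. The confusion-matrix values and counts below are made up.

```python
# Correct an audited skew estimate for demographic-inference error by
# inverting the inference model's confusion matrix. All numbers are toy.
import numpy as np

# rows = true group, cols = inferred group; entries are P(inferred | true)
confusion = np.array([[0.9, 0.1],
                      [0.2, 0.8]])

observed = np.array([700.0, 300.0])   # inferred-group counts among ad recipients
corrected = np.linalg.solve(confusion.T, observed)  # estimated true counts

print("naive skew:    ", observed[0] / observed.sum())
print("corrected skew:", corrected[0] / corrected.sum())
```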
arXiv Detail & Related papers (2024-10-30T18:57:03Z)
- Algorithms, Incentives, and Democracy [0.0]
We show how optimal classification by an algorithm designer can affect the distribution of behavior in a population.
We then look at the effect of democratizing the rewards and punishments, or stakes, to the algorithmic classification to consider how a society can potentially stem (or facilitate!) predatory classification.
arXiv Detail & Related papers (2023-07-05T14:22:01Z)
- Whole Page Unbiased Learning to Rank [59.52040055543542]
Unbiased Learning to Rank (ULTR) algorithms aim to learn an unbiased ranking model from biased click data.
We propose a Bias Agnostic whole-page unbiased Learning-to-rank algorithm, BAL, that automatically finds the user behavior model.
Experimental results on a real-world dataset verify the effectiveness of BAL.
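BAL's contribution is learning the behavior model automatically; for context, the sketch below shows the classic inverse-propensity-weighted objective that ULTR methods generally build on. It is not BAL itself, and the position propensities here are assumed rather than learned.

```python
# Generic inverse-propensity-scoring (IPS) objective for unbiased LTR:
# clicks are reweighted by the examination propensity of their position.
import numpy as np

def ips_loss(relevance_pred, clicks, positions, propensity):
    """Pointwise IPS loss: debiases position bias in click data."""
    weights = clicks / propensity[positions]
    # squared loss against an implicit relevance target of 1 for clicked items
    return np.mean(weights * (1.0 - relevance_pred) ** 2)

propensity = 1.0 / (1.0 + np.arange(10))  # assumed examination prob. by rank
clicks = np.array([1, 0, 1, 0])
positions = np.array([0, 3, 5, 8])
preds = np.array([0.9, 0.2, 0.4, 0.1])
print(ips_loss(preds, clicks, positions, propensity))
```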
arXiv Detail & Related papers (2022-10-19T16:53:08Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach to auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, such as weakening or deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
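A toy illustration of the edge-deletion idea (our own assumptions, not D-BIAS's actual simulation method): in a linear structural causal model, "deleting" the gender-to-salary edge amounts to resimulating salary without that term.

```python
# Linear SCM toy: drop the gender -> salary edge and regenerate the
# downstream variable from its remaining parents.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
gender = rng.integers(0, 2, size=n)            # protected attribute
experience = rng.normal(5, 2, size=n)

# biased SCM: salary depends on experience AND gender
salary = 2.0 * experience + 3.0 * gender + rng.normal(0, 1, size=n)
print("biased gap:  ", salary[gender == 1].mean() - salary[gender == 0].mean())

# "delete" the gender -> salary edge: resimulate salary without it
salary_debiased = 2.0 * experience + rng.normal(0, 1, size=n)
print("debiased gap:", salary_debiased[gender == 1].mean()
      - salary_debiased[gender == 0].mean())
```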
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Understanding Unfairness in Fraud Detection through Model and Data Bias Interactions [4.159343412286401]
We argue that algorithmic unfairness stems from interactions between models and biases in the data.
We study a set of hypotheses regarding the fairness-accuracy trade-offs that fairness-blind ML algorithms exhibit under different data bias settings.
arXiv Detail & Related papers (2022-07-13T15:18:30Z)
- Demographic-Reliant Algorithmic Fairness: Characterizing the Risks of Demographic Data Collection in the Pursuit of Fairness [0.0]
We consider calls to collect more data on demographics to enable algorithmic fairness.
We show how these techniques largely ignore broader questions of data governance and systemic oppression.
arXiv Detail & Related papers (2022-04-18T04:50:09Z)
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that can map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Can Active Learning Preemptively Mitigate Fairness Issues? [66.84854430781097]
Dataset bias is one of the prevailing causes of unfairness in machine learning.
We study whether models trained with uncertainty-based active learning (AL) strategies are fairer in their decisions with respect to a protected class.
We also explore the interaction of algorithmic fairness methods such as gradient reversal (GRAD) and BALD.
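For reference, the BALD acquisition score mentioned above can be computed from Monte Carlo predictive samples (e.g., MC dropout); the sketch below assumes the sampling model and shows only the scoring step.

```python
# BALD acquisition: mutual information between predictions and model
# parameters, estimated from MC samples of class probabilities.
import numpy as np

def bald(probs):
    """probs: (n_mc, n_points, n_classes) MC samples of class probabilities.
    BALD = H[mean prediction] - mean of per-sample entropies."""
    eps = 1e-12
    mean_p = probs.mean(axis=0)
    h_mean = -(mean_p * np.log(mean_p + eps)).sum(axis=-1)
    mean_h = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=0)
    return h_mean - mean_h   # high = model disagrees with itself -> query

probs = np.random.default_rng(2).dirichlet([1, 1], size=(20, 5))  # toy samples
print(bald(probs))  # one score per unlabeled point; label the argmax next
```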
arXiv Detail & Related papers (2021-04-14T14:20:22Z)
- Balancing Biases and Preserving Privacy on Balanced Faces in the Wild [50.915684171879036]
Current facial recognition (FR) models exhibit demographic biases.
We introduce our Balanced Faces in the Wild dataset to measure these biases across different ethnic and gender subgroups.
We find that relying on a single score threshold to differentiate between genuine and impostor sample pairs leads to suboptimal results.
We propose a novel domain adaptation learning scheme that uses facial features extracted from state-of-the-art neural networks.
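The per-group alternative to a single global threshold can be sketched in a few lines: choose, for each demographic subgroup, the score threshold that achieves a common false-match rate on that subgroup's impostor pairs. The scores below are synthetic stand-ins, not actual FR model outputs.

```python
# Per-subgroup threshold calibration at a fixed false-match rate (FMR).
import numpy as np

rng = np.random.default_rng(3)
target_fmr = 0.001  # desired false-match rate for every group

impostor_scores = {
    "group_a": rng.normal(0.30, 0.10, 100_000),
    "group_b": rng.normal(0.40, 0.10, 100_000),  # systematically higher scores
}
for group, scores in impostor_scores.items():
    thr = np.quantile(scores, 1.0 - target_fmr)  # group-specific threshold
    print(f"{group}: threshold {thr:.3f} for FMR {target_fmr}")
```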
arXiv Detail & Related papers (2021-03-16T15:05:49Z)
- Does Fair Ranking Improve Minority Outcomes? Understanding the Interplay of Human and Algorithmic Biases in Online Hiring [9.21721532941863]
We analyze various sources of gender biases in online hiring platforms, including the job context and inherent biases of the employers.
Our results demonstrate that while fair ranking algorithms generally improve the selection rates of underrepresented minorities, their effectiveness relies heavily on the job contexts and candidate profiles.
arXiv Detail & Related papers (2020-12-01T11:45:27Z)
- "What We Can't Measure, We Can't Understand": Challenges to Demographic Data Procurement in the Pursuit of Fairness [0.0]
Algorithmic fairness practitioners often do not have access to the demographic data they feel they need to detect bias in practice.
We investigated this dilemma through semi-structured interviews with 38 practitioners and professionals either working in or adjacent to algorithmic fairness.
Participants painted a complex picture of what demographic data availability and use look like on the ground.
arXiv Detail & Related papers (2020-10-30T21:06:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.