Measuring Bias in a Ranked List using Term-based Representations
- URL: http://arxiv.org/abs/2403.05975v1
- Date: Sat, 9 Mar 2024 18:24:58 GMT
- Title: Measuring Bias in a Ranked List using Term-based Representations
- Authors: Amin Abolghasemi, Leif Azzopardi, Arian Askari, Maarten de Rijke,
Suzan Verberne
- Abstract summary: We propose a novel metric called TExFAIR (term exposure-based fairness).
TExFAIR measures fairness based on the term-based representation of groups in a ranked list.
Our experiments show that there is no strong correlation between TExFAIR and NFaiRR, which indicates that TExFAIR measures a different dimension of fairness than NFaiRR.
- Score: 50.69722973236967
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In most recent studies, gender bias in document ranking is evaluated with the
NFaiRR metric, which measures bias in a ranked list based on an aggregation
over the unbiasedness scores of each ranked document. This perspective in
measuring the bias of a ranked list has a key limitation: individual documents
of a ranked list might be biased while the ranked list as a whole balances the
groups' representations. To address this issue, we propose a novel metric
called TExFAIR (term exposure-based fairness), which is based on two new
extensions to a generic fairness evaluation framework, attention-weighted
ranking fairness (AWRF). TExFAIR assesses fairness based on the term-based
representation of groups in a ranked list: (i) an explicit definition of
associating documents to groups based on probabilistic term-level associations,
and (ii) a rank-biased discounting factor (RBDF) for counting
non-representative documents towards the measurement of the fairness of a
ranked list. We assess TExFAIR on the task of measuring gender bias in passage
ranking, and study the relationship between TExFAIR and NFaiRR. Our experiments
show that there is no strong correlation between TExFAIR and NFaiRR, which
indicates that TExFAIR measures a different dimension of fairness than NFaiRR.
With TExFAIR, we extend the AWRF framework to allow for the evaluation of
fairness in settings with term-based representations of groups in documents in
a ranked list.
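The core idea of a term-based exposure measurement can be sketched in a few lines of code. The sketch below is an illustrative approximation, not the paper's exact TExFAIR/RBDF definition: the function names (`term_group_association`, `exposure_gap`), the logarithmic rank discount, and the L1 distance to a uniform target distribution are all assumptions made for the example. Documents are associated with groups probabilistically via counts of group-indicative terms, and each group's exposure is accumulated with a rank-dependent attention weight, in the spirit of the AWRF framework.

```python
import math

def term_group_association(doc_terms, group_terms):
    """Probability of associating a document with each group,
    proportional to the counts of that group's terms in the document.
    Returns zeros for all groups if no group term occurs (a
    non-representative document)."""
    counts = {g: sum(doc_terms.count(t) for t in terms)
              for g, terms in group_terms.items()}
    total = sum(counts.values())
    if total == 0:
        return {g: 0.0 for g in group_terms}
    return {g: c / total for g, c in counts.items()}

def exposure_gap(ranked_docs, group_terms, target=None):
    """Attention-weighted group exposure with a log rank discount
    (a common choice; not the paper's RBDF), compared against a
    target distribution (uniform by default) via L1 distance."""
    groups = list(group_terms)
    exposure = {g: 0.0 for g in groups}
    for rank, doc in enumerate(ranked_docs, start=1):
        weight = 1.0 / math.log2(rank + 1)  # higher ranks get more attention
        assoc = term_group_association(doc, group_terms)
        for g in groups:
            exposure[g] += weight * assoc[g]
    total = sum(exposure.values())
    if total == 0:
        return 0.0
    target = target or {g: 1.0 / len(groups) for g in groups}
    return sum(abs(exposure[g] / total - target[g]) for g in groups)
```

For example, with `group_terms = {"f": ["she", "her"], "m": ["he", "his"]}`, a ranking whose top document contains only female-associated terms and whose second document contains only male-associated terms yields a positive gap, because the rank discount gives the top document more exposure; a single document mixing both groups' terms equally yields a gap of zero.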
Related papers
- Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and
Ex-Post Fairness [5.349671569838342]
In learning-to-rank, optimizing only the relevance can cause representational harm to certain categories of items.
In this paper, we propose a novel algorithm that maximizes expected relevance over those rankings that satisfy given representation constraints.
arXiv Detail & Related papers (2023-08-25T08:27:43Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Matched Pair Calibration for Ranking Fairness [2.580183306478581]
We propose a test of fairness in score-based ranking systems called matched pair calibration.
We show how our approach generalizes the fairness intuitions of calibration from a binary classification setting to ranking.
arXiv Detail & Related papers (2023-06-06T15:32:30Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain adaptation generalization bound for ranking, in turn providing theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- MANI-Rank: Multiple Attribute and Intersectional Group Fairness for Consensus Ranking [6.231376714841276]
Group fairness in rankings and in particular rank aggregation remains in its infancy.
Recent work introduced the concept of fair rank aggregation for combining rankings, but was restricted to the case where candidates have a single binary protected attribute.
Yet it remains an open problem how to create a consensus ranking that represents the preferences of all rankers.
We are the first to define and solve this open Multi-attribute Fair Consensus Ranking problem.
arXiv Detail & Related papers (2022-07-20T16:36:20Z)
- Measuring Fairness of Text Classifiers via Prediction Sensitivity [63.56554964580627]
ACCUMULATED PREDICTION SENSITIVITY measures fairness in machine learning models based on the model's prediction sensitivity to perturbations in input features.
We show that the metric can be theoretically linked with a specific notion of group fairness (statistical parity) and individual fairness.
arXiv Detail & Related papers (2022-03-16T15:00:33Z)
- Overview of the TREC 2020 Fair Ranking Track [64.16623297717642]
This paper provides an overview of the NIST TREC 2020 Fair Ranking track.
The central goal of the Fair Ranking track is to provide fair exposure to different groups of authors.
arXiv Detail & Related papers (2021-08-11T10:22:05Z)
- Uncovering Latent Biases in Text: Method and Application to Peer Review [38.726731935235584]
We introduce a novel framework to quantify bias in text caused by the visibility of subgroup membership indicators.
We apply our framework to quantify biases in the text of peer reviews from a reputed machine learning conference.
arXiv Detail & Related papers (2020-10-29T01:24:19Z)
- Overview of the TREC 2019 Fair Ranking Track [65.15263872493799]
The goal of the TREC Fair Ranking track was to develop a benchmark for evaluating retrieval systems in terms of fairness to different content providers.
This paper presents an overview of the track, including the task definition, descriptions of the data and the annotation process.
arXiv Detail & Related papers (2020-03-25T21:34:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.