On the Problem of Underranking in Group-Fair Ranking
- URL: http://arxiv.org/abs/2010.06986v2
- Date: Thu, 18 Feb 2021 17:20:54 GMT
- Title: On the Problem of Underranking in Group-Fair Ranking
- Authors: Sruthi Gorantla, Amit Deshpande, Anand Louis
- Abstract summary: Bias in ranking systems can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes.
In this paper, we formulate the problem of underranking in group-fair rankings, which was not addressed in previous work.
We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees.
- Score: 8.963918049835375
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Search and recommendation systems, such as search engines, recruiting tools,
online marketplaces, news, and social media, output ranked lists of content,
products, and sometimes, people. Credit ratings, standardized tests, and risk
assessments output only a score, but are also used implicitly for ranking. Bias
in such ranking systems, especially among the top ranks, can worsen social and
economic inequalities, polarize opinions, and reinforce stereotypes. On the
other hand, a bias correction for minority groups can cause more harm if
perceived as favoring group-fair outcomes over meritocracy. In this paper, we
formulate the problem of underranking in group-fair rankings, which was not
addressed in previous work. Most group-fair ranking algorithms post-process a
given ranking and output a group-fair ranking. We define underranking based on
how close the group-fair rank of each item is to its original rank, and prove a
lower bound on the trade-off achievable for simultaneous underranking and group
fairness in ranking. We give a fair ranking algorithm that takes any given
ranking and outputs another ranking with simultaneous underranking and group
fairness guarantees comparable to the lower bound we prove. Our algorithm works
with group fairness constraints for any number of groups. Our experimental
results confirm the theoretical trade-off between underranking and group
fairness, and also show that our algorithm achieves the best of both when
compared to the state-of-the-art baselines.
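As a concrete illustration of the post-processing setup, the sketch below greedily re-ranks a given list under per-prefix group floors and then measures underranking as the worst drop from an item's original rank. The constraint form floor(beta_g * k), the greedy rule, and all names are illustrative assumptions, not the paper's actual algorithm.

```python
import math

def fair_rerank(items, groups, beta):
    """Greedy sketch: every prefix of length k must contain at least
    floor(beta[g] * k) items from each group g. The constraint form and
    the greedy rule are assumptions for illustration only."""
    remaining = {g: [i for i in items if groups[i] == g] for g in beta}
    counts = {g: 0 for g in beta}
    rank_of = {item: r for r, item in enumerate(items)}
    out = []
    for k in range(1, len(items) + 1):
        # groups that would fall below their floor at prefix length k
        needy = [g for g in beta
                 if remaining[g] and counts[g] < math.floor(beta[g] * k)]
        pool = needy or [g for g in beta if remaining[g]]
        # among eligible groups, take the originally highest-ranked item
        g = min(pool, key=lambda h: rank_of[remaining[h][0]])
        out.append(remaining[g].pop(0))
        counts[g] += 1
    return out

def max_underranking(original, fair):
    """Worst drop in position: max over items of (fair rank - original rank)."""
    rank_of = {item: r for r, item in enumerate(original)}
    return max(r - rank_of[item] for r, item in enumerate(fair))

original = list(range(10))                  # item ids, already in rank order
group_of = {i: ("A" if i < 7 else "B") for i in original}   # biased input
fair = fair_rerank(original, group_of, beta={"A": 0.4, "B": 0.4})
print(fair)                                 # [0, 1, 7, 2, 8, 3, 4, 9, 5, 6]
print(max_underranking(original, fair))     # -> 3
```

In the toy run, the most underranked items drop by three positions; that displacement, versus the strength of the fairness floors, is the trade-off the paper's lower bound formalizes.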
Related papers
- Measuring Bias in a Ranked List using Term-based Representations [50.69722973236967]
We propose a novel metric called TExFAIR (term exposure-based fairness).
TExFAIR measures fairness based on the term-based representation of groups in a ranked list.
Our experiments show that there is no strong correlation between TExFAIR and NFaiRR, which indicates that TExFAIR measures a different dimension of fairness than NFaiRR.
arXiv Detail & Related papers (2024-03-09T18:24:58Z)
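The general exposure idea behind such metrics can be illustrated with a minimal position-discounted exposure share, assuming a DCG-style discount. Note the hedge: TExFAIR itself scores term-based representations of groups in documents, which this generic sketch does not model.

```python
import numpy as np

def exposure_shares(ranked_groups, num_groups):
    """Share of position-discounted exposure each group receives in one
    ranked list (DCG-style discount assumed). This shows only the generic
    exposure idea, not TExFAIR's term-based construction."""
    discount = 1.0 / np.log2(np.arange(2, len(ranked_groups) + 2))
    exposure = np.zeros(num_groups)
    for d, g in zip(discount, ranked_groups):
        exposure[g] += d
    return exposure / exposure.sum()

shares = exposure_shares([0, 0, 1, 0, 1, 1], num_groups=2)
print(shares, shares.max() - shares.min())  # a gap of 0 means equal exposure
```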
- Stability and Multigroup Fairness in Ranking with Uncertain Predictions [61.76378420347408]
Our work considers ranking functions: maps from individual predictions for a classification task to distributions over rankings.
We focus on two aspects of ranking functions: stability to perturbations in predictions and fairness towards both individuals and subgroups.
Our work demonstrates that uncertainty aware rankings naturally interpolate between group and individual level fairness guarantees.
arXiv Detail & Related papers (2024-02-14T17:17:05Z)
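One standard way to realize such a distribution over rankings is a Plackett-Luce sampler via the Gumbel trick, sketched below. Whether the paper uses this particular family, and the temperature parameter, are assumptions of the sketch.

```python
import numpy as np

def sample_ranking(scores, temperature, rng):
    """Sample one ranking from a Plackett-Luce distribution via the
    Gumbel trick: perturb the (scaled) scores with Gumbel noise, then
    sort. Higher temperature means more randomness, which smooths out
    small, uncertain score differences."""
    noisy = np.asarray(scores) / temperature + rng.gumbel(size=len(scores))
    return tuple(np.argsort(-noisy))          # item indices, best first

rng = np.random.default_rng(0)
scores = [0.90, 0.88, 0.40]                   # items 0 and 1 are nearly tied
samples = [sample_ranking(scores, 0.1, rng) for _ in range(2000)]
# Near-ties share the top position instead of one item always winning:
print(sum(s[0] == 0 for s in samples) / 2000)  # ~0.55
print(sum(s[0] == 2 for s in samples) / 2000)  # ~0.00
```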
- Found in the Middle: Permutation Self-Consistency Improves Listwise Ranking in Large Language Models [63.714662435555674]
Large language models (LLMs) exhibit positional bias in how they use context.
We propose permutation self-consistency, a form of self-consistency over ranking list outputs of black-box LLMs.
Our approach improves scores from conventional inference by up to 7-18% for GPT-3.5 and 8-16% for LLaMA v2 (70B).
arXiv Detail & Related papers (2023-10-11T17:59:02Z)
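A minimal sketch of the shuffle-and-aggregate idea above, with a hypothetical biased_ranker standing in for a position-biased LLM and mean-rank aggregation standing in for the paper's aggregation over LLM outputs.

```python
import random
from statistics import mean

def permutation_self_consistent_rank(items, rank_fn, n_shuffles=50, seed=0):
    """Query a black-box listwise ranker on shuffled copies of the input
    and order items by their mean output position. Mean-rank aggregation
    is a simplification of the paper's aggregation."""
    rng = random.Random(seed)
    positions = {x: [] for x in items}
    for _ in range(n_shuffles):
        shuffled = items[:]
        rng.shuffle(shuffled)
        for pos, x in enumerate(rank_fn(shuffled)):
            positions[x].append(pos)
    return sorted(items, key=lambda x: mean(positions[x]))

# Hypothetical ranker: knows item quality but is nudged toward whatever
# order it was shown, a stand-in for an LLM's positional bias.
quality = {"a": 3.0, "b": 2.9, "c": 1.0}
def biased_ranker(shown):
    return sorted(shown, key=lambda x: quality[x] - 0.05 * shown.index(x),
                  reverse=True)

print(permutation_self_consistent_rank(["c", "b", "a"], biased_ranker))
# -> ['a', 'b', 'c'], recovered despite the adversarial input order
```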
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model-agnostic post-processing framework, xOrder, for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Sampling Ex-Post Group-Fair Rankings [8.963918049835375]
We propose two algorithms to sample a random group-fair ranking from an underlying distribution $D$ over group-fair rankings.
Our problem formulation works even when there is implicit bias, incomplete relevance information, or only ordinal ranking is available.
We give empirical evidence that our algorithms compare favorably against recent baselines for fairness and ranking utility on real-world data sets.
arXiv Detail & Related papers (2022-03-02T05:59:26Z)
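A simplified sampler in the spirit of the ex-post (per-output) guarantee: pick a group for each rank uniformly among those allowed by per-prefix floors, then fill slots in merit order. The floors floor(beta_g * k) and the uniform choice are assumptions; the paper samples from a specific distribution $D$.

```python
import math, random

def sample_group_fair_ranking(by_group, beta, seed=None):
    """Randomly pick which group occupies each rank, forcing a group
    whenever a prefix floor floor(beta[g] * k) would otherwise fail,
    then fill slots with each group's items in merit order."""
    rng = random.Random(seed)
    remaining = {g: list(items) for g, items in by_group.items()}
    counts = {g: 0 for g in by_group}
    n = sum(len(v) for v in by_group.values())
    out = []
    for k in range(1, n + 1):
        must = [g for g in by_group
                if remaining[g] and counts[g] < math.floor(beta[g] * k)]
        pool = must or [g for g in by_group if remaining[g]]
        g = rng.choice(pool)
        out.append(remaining[g].pop(0))
        counts[g] += 1
    return out

by_group = {"A": ["a1", "a2", "a3", "a4"], "B": ["b1", "b2"]}
# One random ranking whose every prefix meets the group floors:
print(sample_group_fair_ranking(by_group, beta={"A": 0.3, "B": 0.3}, seed=1))
```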
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z)
- Fairness for Robust Learning to Rank [8.019491256870557]
We derive a new ranking system based on the first principles of distributional robustness.
We show that our approach provides better utility for highly fair rankings than existing baseline methods.
arXiv Detail & Related papers (2021-12-12T17:56:56Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
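PiRank builds on differentiable sorting relaxations; the NeuralSort-style soft permutation below (after Grover et al.) shows the temperature mechanism behind the zero-temperature recovery claim. The exact PiRank surrogate is not reproduced here.

```python
import numpy as np

def neuralsort(s, tau):
    """NeuralSort-style unimodal row-stochastic relaxation of the sort
    permutation: row i softmax of ((n + 1 - 2i) * s - A @ 1) / tau,
    where A[j, k] = |s_j - s_k|. As tau -> 0 the rows harden into the
    exact descending-sort permutation."""
    s = np.asarray(s, dtype=float)
    n = len(s)
    A = np.abs(s[:, None] - s[None, :])                # pairwise |s_j - s_k|
    c = (n + 1 - 2 * np.arange(1, n + 1))[:, None] * s[None, :]
    logits = (c - A.sum(axis=1)[None, :]) / tau
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)            # soft permutation rows

s = [0.1, 2.0, 1.3]
for tau in (1.0, 0.01):
    print(np.round(neuralsort(s, tau), 2))  # tau=0.01 ~ hard sort [1, 2, 0]
```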
- User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets [28.537935838669423]
We show that user fairness, item fairness, and diversity are fundamentally different concepts.
We present the first ranking algorithm that explicitly enforces all three desiderata.
arXiv Detail & Related papers (2020-10-04T02:53:09Z)
- Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z)
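The controller idea above can be sketched as a simple proportional rule: boost whichever group's accumulated exposure lags a merit-proportional target, then sort by the adjusted scores. The target, the gain lam, and the exposure model below are assumptions; the paper's unbiased estimators from implicit feedback are not modeled.

```python
import numpy as np

def controller_rank(rel, groups, exposure, lam=0.01):
    """P-controller sketch (two groups labeled 0 and 1 assumed): items in
    the group whose accumulated exposure trails its merit-proportional
    share get a boost that grows with the accumulated gap."""
    rel = np.asarray(rel, dtype=float)
    merit = np.array([rel[groups == g].sum() for g in (0, 1)])
    target = merit / merit.sum()                   # merit-proportional share
    deficit = target * exposure.sum() - exposure   # accumulated exposure gap
    return np.argsort(-(rel + lam * deficit[groups]))

rel = np.array([0.9, 0.8, 0.7, 0.6])
groups = np.array([0, 0, 1, 1])
exposure = np.zeros(2)
disc = 1.0 / np.log2(np.arange(2, len(rel) + 2))   # position-based exposure
for _ in range(500):
    for pos, item in enumerate(controller_rank(rel, groups, exposure)):
        exposure[groups[item]] += disc[pos]
# Long-run exposure shares approach the merit-proportional target
# (about [0.567, 0.433] for this toy data):
print(np.round(exposure / exposure.sum(), 3))
```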
This list is automatically generated from the titles and abstracts of the papers on this site.