Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking
- URL: http://arxiv.org/abs/2206.07247v1
- Date: Wed, 15 Jun 2022 02:20:30 GMT
- Title: Fair Ranking as Fair Division: Impact-Based Individual Fairness in Ranking
- Authors: Yuta Saito and Thorsten Joachims
- Abstract summary: We argue that any particular choice of such a link function may be difficult to defend.
Our axiomatic approach not only avoids the need to choose a link function, but also more meaningfully quantifies the impact on the items beyond exposure.
- Score: 36.42838320396534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rankings have become the primary interface in two-sided online markets. Many
have noted that the rankings not only affect the satisfaction of the users
(e.g., customers, listeners, employers, travelers), but that the position in
the ranking allocates exposure -- and thus economic opportunity -- to the
ranked items (e.g., articles, products, songs, job seekers, restaurants,
hotels). This has raised questions of fairness to the items, and most existing
works have addressed fairness by explicitly linking item exposure to item
relevance. However, we argue that any particular choice of such a link function
may be difficult to defend, and we show that the resulting rankings can still
be unfair. To avoid these shortcomings, we develop a new axiomatic approach
that is rooted in principles of fair division. This not only avoids the need to
choose a link function, but also more meaningfully quantifies the impact on the
items beyond exposure. Our axioms of envy-freeness and dominance over uniform
ranking postulate that for a fair ranking policy every item should prefer their
own rank allocation over that of any other item, and that no item should be
actively disadvantaged by the rankings. To compute ranking policies that are
fair according to these axioms, we propose a new ranking objective related to
the Nash Social Welfare. We show that the solution has guarantees regarding its
envy-freeness, its dominance over uniform rankings for every item, and its
Pareto optimality. In contrast, we show that conventional exposure-based
fairness can produce large amounts of envy and have a highly disparate impact
on the items. Beyond these theoretical results, we illustrate empirically how
our framework controls the trade-off between impact-based individual item
fairness and user utility.
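The axioms and the Nash-Social-Welfare-style objective can be made concrete with a small sketch. The following is a minimal illustration, not the paper's implementation: the relevance values, the logarithmic examination weights, and the impact model (expected examination times relevance) are assumptions chosen for clarity. It computes each item's impact under a stochastic ranking policy, a sum-of-log-impacts objective, and checks envy-freeness by asking whether any item would gain impact from another item's rank allocation.

```python
import numpy as np

# Hypothetical setup (not from the paper): 3 items, 3 rank positions.
rel = np.array([0.9, 0.5, 0.3])        # item relevance (e.g., click probability)
pos = 1.0 / np.log2(np.arange(2, 5))   # position-based examination weights

def impacts(P):
    """Impact of each item under a stochastic ranking policy P, where
    P[i, k] is the marginal probability that item i appears at rank k.
    Impact is modeled here as expected examination times relevance."""
    return rel * (P @ pos)

def nash_welfare(P):
    """Nash-Social-Welfare-style objective: sum of log item impacts."""
    return np.sum(np.log(impacts(P)))

def envy_pairs(P):
    """Item i envies item j if i's impact would rise under j's rank allocation."""
    imp = impacts(P)
    n = len(rel)
    return [(i, j) for i in range(n) for j in range(n)
            if i != j and rel[i] * (P[j] @ pos) > imp[i] + 1e-12]

# Uniform policy: every item is shown at every rank equally often.
uniform = np.full((3, 3), 1.0 / 3.0)
print(nash_welfare(uniform), envy_pairs(uniform))
```

Under the uniform policy every item receives the same rank allocation, so the envy list is empty; a deterministic "best item always first" policy (the identity matrix as `P`) produces envy from the lower-ranked items, mirroring the abstract's contrast between exposure-maximizing and envy-free rankings.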
Related papers
- Stability and Multigroup Fairness in Ranking with Uncertain Predictions [61.76378420347408]
Our work considers ranking functions: maps from individual predictions for a classification task to distributions over rankings.
We focus on two aspects of ranking functions: stability to perturbations in predictions and fairness towards both individuals and subgroups.
Our work demonstrates that uncertainty aware rankings naturally interpolate between group and individual level fairness guarantees.
arXiv Detail & Related papers (2024-02-14T17:17:05Z)
- Sampling Individually-Fair Rankings that are Always Group Fair
A fair ranking task asks to rank a set of items to maximize utility subject to satisfying group-fairness constraints.
Recent works identify uncertainty in the utilities of items as a primary cause of unfairness.
We give an efficient algorithm that samples rankings from an individually-fair distribution while ensuring that every output ranking is group fair.
arXiv Detail & Related papers (2023-06-21T01:26:34Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
The benefits are twofold: it leads to the first domain adaptation generalization bound for ranking, which in turn provides theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- Learning Fair Node Representations with Graph Counterfactual Fairness [56.32231787113689]
We propose graph counterfactual fairness, which accounts for biases induced by sensitive attributes.
We generate counterfactuals corresponding to perturbations of each node's and its neighbors' sensitive attributes.
Our framework outperforms the state-of-the-art baselines in graph counterfactual fairness.
arXiv Detail & Related papers (2022-01-10T21:43:44Z)
- Incentives for Item Duplication under Fair Ranking Policies [69.14168955766847]
We study the behaviour of different fair ranking policies in the presence of duplicates.
We find that fairness-aware ranking policies may conflict with diversity, due to their potential to incentivize duplication more than policies solely focused on relevance.
arXiv Detail & Related papers (2021-10-29T11:11:15Z)
- User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided Markets [28.537935838669423]
We show that user fairness, item fairness and diversity are fundamentally different concepts.
We present the first ranking algorithm that explicitly enforces all three desiderata.
arXiv Detail & Related papers (2020-10-04T02:53:09Z)
- On the Problem of Underranking in Group-Fair Ranking [8.963918049835375]
Bias in ranking systems can worsen social and economic inequalities, polarize opinions, and reinforce stereotypes.
In this paper, we formulate the problem of underranking in group-fair rankings, which was not addressed in previous work.
We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees.
arXiv Detail & Related papers (2020-09-24T14:56:10Z)
- Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z)
- SetRank: A Setwise Bayesian Approach for Collaborative Ranking from Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender systems.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.