User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided
Markets
- URL: http://arxiv.org/abs/2010.01470v3
- Date: Sun, 12 Sep 2021 13:39:51 GMT
- Title: User Fairness, Item Fairness, and Diversity for Rankings in Two-Sided
Markets
- Authors: Lequn Wang and Thorsten Joachims
- Abstract summary: We show that user fairness, item fairness and diversity are fundamentally different concepts.
We present the first ranking algorithm that explicitly enforces all three desiderata.
- Score: 28.537935838669423
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ranking items by their probability of relevance has long been the goal of
conventional ranking systems. While this maximizes traditional criteria of
ranking performance, there is a growing understanding that it is an
oversimplification in online platforms that serve not only a diverse user
population, but also the producers of the items. In particular, ranking
algorithms are expected to be fair in how they serve all groups of users -- not
just the majority group -- and they also need to be fair in how they divide
exposure among the items. These fairness considerations can partially be met by
adding diversity to the rankings, as done in several recent works. However, we
show in this paper that user fairness, item fairness and diversity are
fundamentally different concepts. In particular, we find that algorithms that
consider only one of the three desiderata can fail to satisfy and even harm the
other two. To overcome this shortcoming, we present the first ranking algorithm
that explicitly enforces all three desiderata. The algorithm optimizes user and
item fairness as a convex optimization problem which can be solved optimally.
From its solution, a ranking policy can be derived via a novel Birkhoff-von
Neumann decomposition algorithm that optimizes diversity. Beyond the
theoretical analysis, we investigate empirically on a new benchmark dataset how
effectively the proposed ranking algorithm can control user fairness, item
fairness and diversity, as well as the trade-offs between them.
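The abstract's key structural idea rests on the classic Birkhoff-von Neumann theorem: any doubly stochastic exposure matrix can be written as a convex combination of permutation matrices, so a stochastic ranking policy can be implemented as a lottery over deterministic rankings. The paper's algorithm is a novel BvN variant that additionally optimizes diversity; the sketch below only illustrates the vanilla decomposition that it builds on. All function names here (`bvn_decompose`, `find_perfect_matching`) are illustrative, not from the paper.

```python
def find_perfect_matching(support):
    """Find a perfect matching on the positive entries (Kuhn's augmenting-path
    algorithm). support[i][j] is True where the matrix entry is positive.
    Returns match_row with match_row[i] = column assigned to row i."""
    n = len(support)
    match_col = [-1] * n  # match_col[j] = row currently matched to column j

    def try_assign(i, seen):
        for j in range(n):
            if support[i][j] and not seen[j]:
                seen[j] = True
                if match_col[j] == -1 or try_assign(match_col[j], seen):
                    match_col[j] = i
                    return True
        return False

    for i in range(n):
        try_assign(i, [False] * n)
    match_row = [-1] * n
    for j, i in enumerate(match_col):
        match_row[i] = j
    return match_row


def bvn_decompose(M, tol=1e-9):
    """Decompose a doubly stochastic matrix M (list of lists) into a list of
    (weight, permutation) pairs whose convex combination reconstructs M.
    Assumes M is doubly stochastic, so by Birkhoff's theorem its positive
    support always contains a perfect matching."""
    M = [row[:] for row in M]
    n = len(M)
    result = []
    while True:
        support = [[M[i][j] > tol for j in range(n)] for i in range(n)]
        if not any(any(row) for row in support):
            break
        perm = find_perfect_matching(support)
        # Peel off the permutation with the largest weight it can carry.
        theta = min(M[i][perm[i]] for i in range(n))
        for i in range(n):
            M[i][perm[i]] -= theta
        result.append((theta, perm))
    return result
```

For example, decomposing the uniform 2x2 matrix `[[0.5, 0.5], [0.5, 0.5]]` yields two permutations with weight 0.5 each; sampling a permutation proportional to its weight gives each item equal expected exposure at each rank. The paper's contribution lies in *which* decomposition is chosen, since many exist and they differ in the diversity of the resulting ranking lottery.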
Related papers
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - Detection of Groups with Biased Representation in Ranking [28.095668425175564]
We study the problem of detecting groups with biased representation in the top-$k$ ranked items.
We propose efficient search algorithms for two different fairness measures.
arXiv Detail & Related papers (2022-12-30T10:50:02Z) - Fast online ranking with fairness of exposure [29.134493256287072]
We show that our algorithm is computationally fast (generating rankings on the fly, with cost dominated by the sort operation), memory efficient, and has strong theoretical guarantees.
Compared to baseline policies that only maximize user-side performance, our algorithm makes it possible to incorporate complex fairness-of-exposure criteria into the recommendations with negligible computational overhead.
arXiv Detail & Related papers (2022-09-13T12:35:36Z) - Counterfactual Fairness Is Basically Demographic Parity [0.0]
Making fair decisions is crucial to ethically implementing machine learning algorithms in social settings.
We show that an algorithm which satisfies counterfactual fairness also satisfies demographic parity.
We formalize a concrete fairness goal: to preserve the order of individuals within protected groups.
arXiv Detail & Related papers (2022-08-07T23:38:59Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Incentives for Item Duplication under Fair Ranking Policies [69.14168955766847]
We study the behaviour of different fair ranking policies in the presence of duplicates.
We find that fairness-aware ranking policies may conflict with diversity, due to their potential to incentivize duplication more than policies solely focused on relevance.
arXiv Detail & Related papers (2021-10-29T11:11:15Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z) - SetRank: A Setwise Bayesian Approach for Collaborative Ranking from
Implicit Feedback [50.13745601531148]
We propose a novel setwise Bayesian approach for collaborative ranking, namely SetRank, to accommodate the characteristics of implicit feedback in recommender system.
Specifically, SetRank aims at maximizing the posterior probability of novel setwise preference comparisons.
We also present the theoretical analysis of SetRank to show that the bound of excess risk can be proportional to $\sqrt{M/N}$.
arXiv Detail & Related papers (2020-02-23T06:40:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.