Sampling Individually-Fair Rankings that are Always Group Fair
- URL: http://arxiv.org/abs/2306.11964v1
- Date: Wed, 21 Jun 2023 01:26:34 GMT
- Title: Sampling Individually-Fair Rankings that are Always Group Fair
- Authors: Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis
- Abstract summary: A fair ranking task asks to rank a set of items to maximize utility subject to satisfying group-fairness constraints.
Recent works identify uncertainty in the utilities of items as a primary cause of unfairness.
We give an efficient algorithm that samples rankings from an individually-fair distribution while ensuring that every output ranking is group fair.
- Score: 9.333939443470944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Rankings on online platforms help their end-users find the relevant
information -- people, news, media, and products -- quickly. Fair ranking
tasks, which ask to rank a set of items to maximize utility subject to
satisfying group-fairness constraints, have gained significant interest in the
Algorithmic Fairness, Information Retrieval, and Machine Learning literature.
Recent works, however, identify uncertainty in the utilities of items as a
primary cause of unfairness and propose introducing randomness in the output.
This randomness is carefully chosen to guarantee an adequate representation of
each item (while accounting for the uncertainty). However, due to this
randomness, the output rankings may violate group fairness constraints. We give
an efficient algorithm that samples rankings from an individually-fair
distribution while ensuring that every output ranking is group fair. The
expected utility of the output ranking is at least $\alpha$ times the utility
of the optimal fair solution. Here, $\alpha$ depends on the utilities,
position-discounts, and constraints -- it approaches 1 as the range of
utilities or the position-discounts shrinks, or when utilities satisfy
distributional assumptions. Empirically, we observe that our algorithm achieves
individual and group fairness and Pareto-dominates the state-of-the-art
baselines.
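To make the setting above concrete, the sketch below illustrates what "individually fair yet always group fair" asks for: rankings are drawn at random from a distribution (summarized here by a hypothetical item-by-position marginal matrix), yet every single sampled ranking must keep each group's count in the top-$k$ prefix within lower and upper bounds. This naive rejection-sampling loop is only an illustration of the requirement under assumed inputs (`marginals`, `groups`, `lower`, `upper`, and `k` are made-up names); it is not the paper's algorithm, which samples efficiently and carries the $\alpha$-approximation guarantee on expected utility.

```python
import numpy as np


def sample_ranking(marginals, rng):
    """Draw one ranking, position by position, guided by an item-by-position
    marginal matrix (a stand-in for an individually-fair distribution).

    Note: this greedy column-normalized sampling only approximates the target
    marginals in general; it is purely illustrative.
    """
    n = marginals.shape[0]
    remaining = list(range(n))
    ranking = []
    for pos in range(n):
        probs = np.array([marginals[i, pos] for i in remaining], dtype=float)
        probs /= probs.sum()
        pick = rng.choice(len(remaining), p=probs)
        ranking.append(remaining.pop(pick))
    return ranking


def is_group_fair(ranking, groups, lower, upper, k):
    """Check lower/upper representation bounds for every group in the top-k prefix."""
    top = ranking[:k]
    return all(
        lower[g] <= sum(1 for i in top if groups[i] == g) <= upper[g]
        for g in set(groups)
    )


def sample_group_fair(marginals, groups, lower, upper, k, rng, max_tries=10_000):
    """Rejection sampling: redraw until the sampled ranking is group fair,
    so every output ranking satisfies the constraints by construction."""
    for _ in range(max_tries):
        ranking = sample_ranking(marginals, rng)
        if is_group_fair(ranking, groups, lower, upper, k):
            return ranking
    raise RuntimeError("no group-fair ranking found; constraints may be infeasible")


# Toy instance (hypothetical): 4 items, two groups, and the top-2 prefix must
# contain exactly one item from each group.
rng = np.random.default_rng(0)
marginals = np.full((4, 4), 0.25)          # uniform item-by-position marginals
groups = [0, 0, 1, 1]                      # assumed group label per item
lower, upper = {0: 1, 1: 1}, {0: 1, 1: 1}  # per-group bounds on the top-k prefix
print(sample_group_fair(marginals, groups, lower, upper, k=2, rng=rng))
```

Note that rejecting group-unfair rankings conditions the distribution and can distort the item-level marginals that individual fairness was meant to protect, besides wasting samples; designing a sampler that preserves an individual-fairness guarantee while never emitting a group-unfair ranking is precisely the contribution claimed in the abstract.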
Related papers
- Fair Allocation in Dynamic Mechanism Design [57.66441610380448]
We consider a problem where an auctioneer sells an indivisible good to groups of buyers in every round, for a total of $T$ rounds.
The auctioneer aims to maximize their discounted overall revenue while adhering to a fairness constraint that guarantees a minimum average allocation for each group.
arXiv Detail & Related papers (2024-05-31T19:26:05Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - Explainable Disparity Compensation for Efficient Fair Ranking [0.3759936323189418]
Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data.
Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees.
In this paper we propose easily explainable data-driven compensatory measures for ranking functions.
arXiv Detail & Related papers (2023-07-25T09:12:50Z) - FFB: A Fair Fairness Benchmark for In-Processing Group Fairness Methods [84.1077756698332]
This paper introduces the Fair Fairness Benchmark (FFB), a benchmarking framework for in-processing group fairness methods.
We provide a comprehensive analysis of state-of-the-art methods to ensure different notions of group fairness.
arXiv Detail & Related papers (2023-06-15T19:51:28Z) - Fairness in Matching under Uncertainty [78.39459690570531]
Algorithmic two-sided marketplaces have drawn attention to the issue of fairness in such settings.
We axiomatize a notion of individual fairness in the two-sided marketplace setting which respects the uncertainty in the merits.
We design a linear programming framework to find fair utility-maximizing distributions over allocations.
arXiv Detail & Related papers (2023-02-08T00:30:32Z) - Detection of Groups with Biased Representation in Ranking [28.095668425175564]
We study the problem of detecting groups with biased representation in the top-$k$ ranked items.
We propose efficient search algorithms for two different fairness measures.
arXiv Detail & Related papers (2022-12-30T10:50:02Z) - Fair Ranking with Noisy Protected Attributes [25.081136190260015]
We study the fair-ranking problem under a model where socially-salient attributes of items are randomly and independently perturbed.
We present a fair-ranking framework that incorporates group fairness requirements along with probabilistic information about perturbations in socially-salient attributes.
arXiv Detail & Related papers (2022-11-30T15:22:28Z) - FaiREE: Fair Classification with Finite-Sample and Distribution-Free
Guarantee [40.10641140860374]
FaiREE is a fair classification algorithm that can satisfy group fairness constraints with finite-sample and distribution-free theoretical guarantees.
FaiREE is shown to have favorable performance over state-of-the-art algorithms.
arXiv Detail & Related papers (2022-11-28T05:16:20Z) - How Robust is Your Fairness? Evaluating and Sustaining Fairness under
Unseen Distribution Shifts [107.72786199113183]
We propose a novel fairness learning method termed CUrvature MAtching (CUMA).
CUMA achieves robust fairness generalizable to unseen domains with unknown distributional shifts.
We evaluate our method on three popular fairness datasets.
arXiv Detail & Related papers (2022-07-04T02:37:50Z) - MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z) - Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
arXiv Detail & Related papers (2020-05-29T17:57:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.