Individually Fair Ranking
- URL: http://arxiv.org/abs/2103.11023v1
- Date: Fri, 19 Mar 2021 21:17:11 GMT
- Title: Individually Fair Ranking
- Authors: Amanda Bower, Hamid Eftekhari, Mikhail Yurochkin, Yuekai Sun
- Abstract summary: We develop an algorithm to train individually fair learning-to-rank models.
The proposed approach ensures items from minority groups appear alongside similar items from majority groups.
- Score: 23.95661284311917
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We develop an algorithm to train individually fair learning-to-rank (LTR)
models. The proposed approach ensures items from minority groups appear
alongside similar items from majority groups. This notion of fair ranking is
based on the definition of individual fairness from supervised learning and is
more nuanced than prior fair LTR approaches that simply ensure the ranking
model provides underrepresented items with a basic level of exposure. The crux
of our method is an optimal transport-based regularizer that enforces
individual fairness and an efficient algorithm for optimizing the regularizer.
We show that our approach leads to certifiably individually fair LTR models and
demonstrate the efficacy of our method on ranking tasks subject to demographic
biases.
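The optimal-transport regularizer is the core technical piece. As a rough, hypothetical sketch of how such a penalty could be attached to an LTR training loss (not the paper's exact formulation; the entropic-OT solver, the counterfactual scores, and all names below are illustrative assumptions):

```python
import torch

def sinkhorn_ot_cost(cost, eps=0.1, n_iters=100):
    # Entropic-regularized OT cost between two uniform marginals,
    # computed with log-domain Sinkhorn iterations for stability.
    n, m = cost.shape
    log_a = torch.log(torch.full((n,), 1.0 / n))
    log_b = torch.log(torch.full((m,), 1.0 / m))
    log_K = -cost / eps                                # Gibbs kernel in log space
    log_u = torch.zeros(n)
    log_v = torch.zeros(m)
    for _ in range(n_iters):
        log_u = log_a - torch.logsumexp(log_K + log_v[None, :], dim=1)
        log_v = log_b - torch.logsumexp(log_K.T + log_u[None, :], dim=1)
    log_P = log_u[:, None] + log_K + log_v[None, :]    # transport plan
    return (log_P.exp() * cost).sum()

def fair_ltr_loss(scores, scores_counterfactual, ranking_loss, lam=1.0):
    # Squared-difference cost between the scores a ranker assigns to the
    # original items and to comparable counterfactual items (e.g. with the
    # group attribute flipped); the OT cost is small when the two score
    # distributions are close, which is the fairness signal penalized here.
    cost = (scores[:, None] - scores_counterfactual[None, :]) ** 2
    return ranking_loss + lam * sinkhorn_ot_cost(cost)
```

Because the Sinkhorn iterations are built from differentiable tensor operations, the penalty can be backpropagated alongside the base ranking loss.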
Related papers
- FairLoRA: Unpacking Bias Mitigation in Vision Models with Fairness-Driven Low-Rank Adaptation [3.959853359438669]
We introduce FairLoRA, a novel fairness-specific regularizer for Low Rank Adaptation (LoRA).
Our results demonstrate that the need for higher ranks to mitigate bias is not universal; it depends on factors such as the pre-trained model, dataset, and task.
arXiv Detail & Related papers (2024-10-22T18:50:36Z) - Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
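For intuition, an Ordered Weighted Average applies weights to the ranks of the utilities rather than to fixed items. A minimal sketch, assuming ascending sorting with non-increasing weights as the fairness-oriented choice (which may differ from the paper's exact setup):

```python
import numpy as np

def owa(utilities, weights):
    # Ordered Weighted Average: weights attach to the rank of a utility,
    # not to a specific item. Sorting ascending and using non-increasing
    # weights puts the most mass on the worst-off items.
    u = np.sort(np.asarray(utilities, dtype=float))   # ascending: worst-off first
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                                   # normalize to a convex combination
    return float(u @ w)

# Non-increasing weights emphasize the lowest utility (0.2 here).
print(owa([0.9, 0.2, 0.5], weights=[0.6, 0.3, 0.1]))
```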
arXiv Detail & Related papers (2024-02-07T20:53:53Z) - Optimizing Group-Fair Plackett-Luce Ranking Models for Relevance and Ex-Post Fairness [5.349671569838342]
In learning-to-rank, optimizing only the relevance can cause representational harm to certain categories of items.
In this paper, we propose a novel algorithm that maximizes expected relevance over those rankings that satisfy given representation constraints.
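As background, one standard way to sample a ranking from a Plackett-Luce model is the Gumbel-max trick; the sketch below shows only this base sampling step and omits the paper's representation constraints:

```python
import numpy as np

def sample_plackett_luce(scores, rng=None):
    # Gumbel-max sampling from a Plackett-Luce ranking distribution:
    # perturb each item's score with i.i.d. Gumbel noise and sort descending.
    rng = np.random.default_rng(rng)
    noisy = np.asarray(scores, dtype=float) + rng.gumbel(size=len(scores))
    return np.argsort(-noisy)          # item indices, best first

print(sample_plackett_luce([2.0, 0.5, 1.0], rng=0))
```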
arXiv Detail & Related papers (2023-08-25T08:27:43Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose xOrder, a model-agnostic post-processing framework for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - Re-weighting Based Group Fairness Regularization via Classwise Robust Optimization [30.089819400033985]
We propose a principled method, dubbed FairDRO, which unifies the two learning schemes by incorporating a well-justified group fairness metric into the training objective.
We develop an iterative optimization algorithm that minimizes the resulting objective by automatically producing the correct re-weights for each group.
Our experiments show that FairDRO is scalable and easily adaptable to diverse applications.
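As a generic illustration of loss-driven group re-weighting (not FairDRO's actual update rule), a multiplicative-weights step that up-weights the groups with the largest current loss might look like:

```python
import numpy as np

def reweight_groups(group_losses, group_weights, step_size=0.1):
    # One multiplicative-weights update in a group-DRO-style scheme:
    # groups with larger current loss receive larger weight next round.
    w = group_weights * np.exp(step_size * np.asarray(group_losses, dtype=float))
    return w / w.sum()

w = np.full(3, 1.0 / 3.0)
w = reweight_groups([0.9, 0.2, 0.4], w)   # the hardest group gains weight
print(w)
```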
arXiv Detail & Related papers (2023-03-01T12:00:37Z) - Fair and Optimal Classification via Post-Processing [10.163721748735801]
This paper provides a complete characterization of the inherent tradeoff of demographic parity on classification problems.
We show that the minimum error rate achievable by randomized and attribute-aware fair classifiers is given by the optimal value of a Wasserstein-barycenter problem.
arXiv Detail & Related papers (2022-11-03T00:04:04Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm which is able to map individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
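A minimal sketch of that idea, in the style of a supervised contrastive loss where same-label instances are pulled together (illustrative only, not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def same_label_contrastive_loss(embeddings, labels, temperature=0.1):
    # For each anchor, encourage high similarity to other instances that
    # share its class label, relative to all non-self pairs.
    z = F.normalize(embeddings, dim=1)
    logits = z @ z.T / temperature
    n = labels.shape[0]
    self_mask = torch.eye(n, dtype=torch.bool, device=logits.device)
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    # denominator excludes self-similarity
    exp_logits = logits.exp().masked_fill(self_mask, 0.0)
    log_prob = logits - exp_logits.sum(dim=1, keepdim=True).log()
    has_pos = pos_mask.any(dim=1)            # skip anchors without positives
    pos = pos_mask[has_pos].float()
    loss = -(log_prob[has_pos] * pos).sum(dim=1) / pos.sum(dim=1)
    return loss.mean()

# Toy usage: four embeddings, two classes.
emb = torch.randn(4, 8, requires_grad=True)
lbl = torch.tensor([0, 0, 1, 1])
print(same_label_contrastive_loss(emb, lbl))
```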
arXiv Detail & Related papers (2021-09-22T10:47:51Z) - SenSeI: Sensitive Set Invariance for Enforcing Individual Fairness [50.916483212900275]
We first formulate a version of individual fairness that enforces invariance on certain sensitive sets.
We then design a transport-based regularizer that enforces this version of individual fairness and develop an algorithm to minimize the regularizer efficiently.
arXiv Detail & Related papers (2020-06-25T04:31:57Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing ranking fairness and algorithm utility in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)