MANI-Rank: Multiple Attribute and Intersectional Group Fairness for
Consensus Ranking
- URL: http://arxiv.org/abs/2207.10020v1
- Date: Wed, 20 Jul 2022 16:36:20 GMT
- Title: MANI-Rank: Multiple Attribute and Intersectional Group Fairness for
Consensus Ranking
- Authors: Kathleen Cachel, Elke Rundensteiner, and Lane Harrison
- Abstract summary: Group fairness in rankings, and in rank aggregation in particular, remains in its infancy.
Recent work introduced the concept of fair rank aggregation for combining rankings, but it is restricted to the case in which candidates have a single binary protected attribute.
Yet it remains an open problem how to create a consensus ranking that represents the preferences of all rankers while treating candidates with multiple protected attributes fairly.
We are the first to define and solve this open Multi-attribute Fair Consensus Ranking problem.
- Score: 6.231376714841276
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Combining the preferences of many rankers into one single consensus ranking
is critical for consequential applications from hiring and admissions to
lending. While group fairness has been extensively studied for classification,
group fairness in rankings and in particular rank aggregation remains in its
infancy. Recent work introduced the concept of fair rank aggregation for
combining rankings but restricted to the case when candidates have a single
binary protected attribute, i.e., they fall into two groups only. Yet it
remains an open problem how to create a consensus ranking that represents the
preferences of all rankers while ensuring fair treatment for candidates with
multiple protected attributes such as gender, race, and nationality. In this
work, we are the first to define and solve this open Multi-attribute Fair
Consensus Ranking (MFCR) problem. As a foundation, we design novel group
fairness criteria for rankings, called MANI-RANK, ensuring fair treatment of
groups defined by individual protected attributes and their intersection.
Leveraging the MANI-RANK criteria, we develop a series of algorithms that for
the first time tackle the MFCR problem. Our experimental study with a rich
variety of consensus scenarios demonstrates our MFCR methodology is the only
approach to achieve both intersectional and protected attribute fairness while
also representing the preferences expressed through many base rankings. Our
real-world case study on merit scholarships illustrates the effectiveness of
our MFCR methods to mitigate bias across multiple protected attributes and
their intersections. This is an extended version of "MANI-Rank: Multiple
Attribute and Intersectional Group Fairness for Consensus Ranking", to appear
in ICDE 2022.
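To make the criteria concrete, below is a minimal sketch of a pairwise rank-parity check over intersectional groups in the spirit of MANI-RANK. The favored-pair-rate statistic, the 0.5 parity target, the tolerance, and all identifiers are illustrative assumptions rather than the paper's exact definitions.

    from itertools import combinations, product

    def favored_pair_rate(ranking, groups, g1, g2):
        # Fraction of mixed (g1, g2) pairs in which the g1 member is
        # ranked above the g2 member; 0.5 indicates pairwise parity.
        pos = {c: i for i, c in enumerate(ranking)}
        members1 = [c for c in ranking if groups[c] == g1]
        members2 = [c for c in ranking if groups[c] == g2]
        wins = sum(1 for a, b in product(members1, members2) if pos[a] < pos[b])
        total = len(members1) * len(members2)
        return wins / total if total else 0.5

    def parity_violations(ranking, groups, tol=0.1):
        # Report every pair of groups whose favored-pair rate deviates
        # from parity by more than the tolerance.
        report = {}
        for g1, g2 in combinations(sorted(set(groups.values())), 2):
            rate = favored_pair_rate(ranking, groups, g1, g2)
            if abs(rate - 0.5) > tol:
                report[(g1, g2)] = rate
        return report

    # Treating each (gender, race) tuple as one label makes the check
    # intersectional; per-attribute checks reuse the same functions.
    ranking = ["a", "b", "c", "d", "e", "f"]
    groups = {"a": ("M", "W"), "b": ("M", "W"), "c": ("F", "W"),
              "d": ("M", "B"), "e": ("F", "B"), "f": ("F", "B")}
    print(parity_violations(ranking, groups))

Running the same check once per protected attribute and once on the intersection labels covers both levels of fairness the criteria target.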
Related papers
- Fairness in Ranking: Robustness through Randomization without the Protected Attribute [15.086941303164375]
We propose a randomized method for post-processing rankings that does not require the availability of the protected attribute.
In an extensive numerical study, we show that our methods are robust with respect to P-Fairness and effective with respect to Normalized Discounted Cumulative Gain (NDCG) relative to the baseline ranking, improving on previously proposed methods.
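For reference, the sketch below is a textbook NDCG implementation, assuming graded relevances (for example, scores derived from the baseline ranking); the paper's exact evaluation setup may differ.

    import math

    def dcg(relevances):
        # Discounted cumulative gain with the standard log2 position discount.
        return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

    def ndcg(relevances):
        # Normalize by the DCG of the ideal (descending) ordering.
        best = dcg(sorted(relevances, reverse=True))
        return dcg(relevances) / best if best > 0 else 0.0

    # Relevance grades read off in post-processed order; 1.0 means the
    # reranking lost no utility relative to the ideal ordering.
    print(ndcg([3, 2, 3, 0, 1, 2]))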
arXiv Detail & Related papers (2024-03-28T13:50:24Z)
- Measuring Bias in a Ranked List using Term-based Representations [50.69722973236967]
We propose a novel metric called TExFAIR (term exposure-based fairness).
TExFAIR measures fairness based on the term-based representation of groups in a ranked list.
Our experiments show that there is no strong correlation between TExFAIR and NFaiRR, which indicates that TExFAIR measures a different dimension of fairness than NFaiRR.
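A rough sketch of the term-exposure idea follows: weight each document's group-associated term occurrences by a rank discount and aggregate per group. The log-based discount, the length normalization, and the toy term sets are assumptions, not TExFAIR's actual definition.

    import math

    def term_exposure(ranked_docs, group_terms):
        # Rank-discounted exposure of each group's associated terms.
        # ranked_docs: documents as strings, best-ranked first.
        # group_terms: group name -> set of terms associated with it.
        exposure = {g: 0.0 for g in group_terms}
        for rank, doc in enumerate(ranked_docs, start=1):
            discount = 1.0 / math.log2(rank + 1)  # assumed discount
            tokens = doc.lower().split()
            for g, terms in group_terms.items():
                hits = sum(t in terms for t in tokens)
                exposure[g] += discount * hits / max(len(tokens), 1)
        return exposure

    docs = ["he is the chairman", "she leads the board", "the committee met"]
    print(term_exposure(docs, {"male": {"he", "him", "his"},
                               "female": {"she", "her", "hers"}}))

Comparing the per-group exposures (for example, their ratio or difference) then yields a single fairness score for the list.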
arXiv Detail & Related papers (2024-03-09T18:24:58Z)
- Stability and Multigroup Fairness in Ranking with Uncertain Predictions [61.76378420347408]
Our work considers ranking functions: maps from individual predictions for a classification task to distributions over rankings.
We focus on two aspects of ranking functions: stability to perturbations in predictions and fairness towards both individuals and subgroups.
Our work demonstrates that uncertainty aware rankings naturally interpolate between group and individual level fairness guarantees.
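One standard way to realize such a ranking function, i.e., a map from predicted scores to a distribution over rankings, is Plackett-Luce sampling; the sketch below illustrates that general idea and is not necessarily the paper's construction.

    import random

    def sample_ranking(scores, temperature=1.0):
        # Sample one ranking from a Plackett-Luce distribution, using the
        # predicted scores as positive weights. A higher temperature spreads
        # probability over more rankings, so small perturbations of the
        # predictions change the output distribution only slightly.
        items = list(scores)
        weights = {i: max(scores[i], 1e-9) ** (1.0 / temperature) for i in items}
        ranking = []
        while items:
            total = sum(weights[i] for i in items)
            r = random.uniform(0, total)
            acc = 0.0
            for i in items:
                acc += weights[i]
                if acc >= r:
                    ranking.append(i)
                    items.remove(i)
                    break
        return ranking

    print(sample_ranking({"a": 0.9, "b": 0.5, "c": 0.1}))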
arXiv Detail & Related papers (2024-02-14T17:17:05Z)
- Fairness in Ranking under Disparate Uncertainty [24.401219403555814]
We argue that ranking can introduce unfairness if the uncertainty of the underlying relevance model differs between groups of options.
We propose Equal-Opportunity Ranking (EOR) as a new fairness criterion for ranking.
We show that EOR corresponds to a group-wise fair lottery among the relevant options even in the presence of disparate uncertainty.
arXiv Detail & Related papers (2023-09-04T13:49:48Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model-agnostic post-processing framework, xOrder, for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Detection of Groups with Biased Representation in Ranking [28.095668425175564]
We study the problem of detecting groups with biased representation in the top-$k$ ranked items.
We propose efficient search algorithms for two different fairness measures.
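One simple representation measure such a search could score groups by is the gap between a group's share of the top-$k$ and its share of the full pool; the sketch below evaluates a fixed set of labels and does not reproduce the paper's efficient search over candidate groups.

    def representation_gaps(ranking, groups, k):
        # Difference between each group's share of the top-k and its
        # share of the full candidate pool; positive = over-represented.
        top = ranking[:k]
        gaps = {}
        for g in set(groups.values()):
            share_top = sum(groups[c] == g for c in top) / k
            share_all = sum(groups[c] == g for c in ranking) / len(ranking)
            gaps[g] = share_top - share_all
        return gaps

    ranking = ["a", "b", "c", "d", "e", "f", "g", "h"]
    groups = {"a": "X", "b": "X", "c": "X", "d": "Y",
              "e": "Y", "f": "Y", "g": "Y", "h": "Y"}
    print(representation_gaps(ranking, groups, k=4))  # X is over-represented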
arXiv Detail & Related papers (2022-12-30T10:50:02Z)
- MultiFair: Multi-Group Fairness in Machine Learning [52.24956510371455]
We study multi-group fairness in machine learning (MultiFair).
We propose a generic end-to-end algorithmic framework to solve it.
Our proposed framework is generalizable to many different settings.
arXiv Detail & Related papers (2021-05-24T02:30:22Z)
- One-vs.-One Mitigation of Intersectional Bias: A General Method to Extend Fairness-Aware Binary Classification [0.48733623015338234]
One-vs.-One Mitigation applies fairness-aware binary classification to each pair of subgroups defined by the sensitive attributes, one pair at a time.
Our method mitigates intersectional bias much better than conventional methods in all settings.
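A minimal sketch of the one-vs.-one loop is below; mitigate_pair is a hypothetical placeholder for whichever fairness-aware binary-classification method is plugged in, and the paper's way of combining the per-pair results is not reproduced.

    from itertools import combinations

    def one_vs_one_mitigation(records, subgroup_of, mitigate_pair):
        # Run a fairness-aware binary-classification routine once for
        # every pair of subgroups, on the data restricted to that pair.
        labels = sorted({subgroup_of(r) for r in records})
        models = {}
        for g1, g2 in combinations(labels, 2):
            pair_data = [r for r in records if subgroup_of(r) in (g1, g2)]
            models[(g1, g2)] = mitigate_pair(pair_data, g1, g2)
        return models

    # Example: intersectional subgroups are (gender, race) tuples.
    records = [{"group": ("M", "W")}, {"group": ("F", "B")}, {"group": ("F", "W")}]
    print(one_vs_one_mitigation(records, lambda r: r["group"],
                                lambda data, g1, g2: f"model({g1} vs {g2})"))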
arXiv Detail & Related papers (2020-10-26T11:35:39Z)
- Distributional Individual Fairness in Clustering [7.303841123034983]
We introduce a framework for assigning individuals, embedded in a metric space, to probability distributions over a bounded number of cluster centers.
We provide an algorithm for clustering with $p$-norm objective and individual fairness constraints with provable approximation guarantee.
arXiv Detail & Related papers (2020-06-22T20:02:09Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns about whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
- Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)