Detection of Groups with Biased Representation in Ranking
- URL: http://arxiv.org/abs/2301.00719v2
- Date: Fri, 7 Jul 2023 00:13:28 GMT
- Title: Detection of Groups with Biased Representation in Ranking
- Authors: Jinyang Li, Yuval Moskovitch, H. V. Jagadish
- Abstract summary: We study the problem of detecting groups with biased representation in the top-$k$ ranked items.
We propose efficient search algorithms for two different fairness measures.
- Score: 28.095668425175564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Real-life tools for decision-making in many critical domains are based on ranking results. With the increasing awareness of algorithmic fairness, recent works have presented measures for fairness in ranking. Many of those definitions consider the representation of different "protected groups" in the top-$k$ ranked items, for any reasonable $k$. Given the protected groups, confirming algorithmic fairness is a simple task. However, the groups' definitions may be unknown in advance. In this paper, we study the problem of detecting groups with biased representation in the top-$k$ ranked items, eliminating the need to pre-define protected groups. The number of possible groups can be exponential, making the problem hard. We propose efficient search algorithms for two different fairness measures: global representation bounds and proportional representation. We then propose a method to explain the bias in the representation of groups using the notion of Shapley values. We conclude with an experimental study showing the scalability of our approach and demonstrating the usefulness of the proposed algorithms.
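To make the detection task concrete, below is a minimal brute-force sketch in Python, not the paper's efficient algorithm: groups are taken to be conjunctions of attribute-value pairs, and a group is flagged under proportional representation when its share of the top-$k$ differs from its share of the full dataset by more than a threshold. The dataset, attribute names, and threshold are hypothetical.

```python
from itertools import combinations, product

# Hypothetical ranked dataset: each record is a dict of categorical
# attributes, already sorted by the ranking function (best first).
ranked = [
    {"gender": "F", "degree": "PhD"},
    {"gender": "M", "degree": "MSc"},
    {"gender": "M", "degree": "PhD"},
    {"gender": "F", "degree": "BSc"},
    {"gender": "M", "degree": "BSc"},
    {"gender": "M", "degree": "MSc"},
]

def matches(rec, pattern):
    # A group is a conjunction of attribute-value pairs (a "pattern").
    return all(rec[a] == v for a, v in pattern.items())

def biased_groups(ranked, k, threshold):
    # Flag every pattern whose top-k share deviates from its overall
    # share by more than `threshold` (a proportional-representation test).
    attrs = list(ranked[0])
    top_k = ranked[:k]
    flagged = []
    for r in range(1, len(attrs) + 1):
        for subset in combinations(attrs, r):
            domains = [sorted({rec[a] for rec in ranked}) for a in subset]
            for values in product(*domains):
                pattern = dict(zip(subset, values))
                size = sum(matches(rec, pattern) for rec in ranked)
                if size == 0:
                    continue
                share_all = size / len(ranked)
                share_top = sum(matches(rec, pattern) for rec in top_k) / k
                if abs(share_top - share_all) > threshold:
                    flagged.append((pattern, share_top, share_all))
    return flagged

for pattern, top, overall in biased_groups(ranked, k=3, threshold=0.25):
    print(f"{pattern}: {top:.2f} of top-k vs {overall:.2f} overall")
```

Enumerating all patterns is exponential in the number of attributes, which is precisely why the paper develops pruned search algorithms rather than this brute force.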
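For the explanation step, the abstract appeals to Shapley values. The fragment below computes exact Shapley values for a toy cooperative game whose players are the conditions defining a flagged group, with the characteristic function taken, purely as an assumption, to be the top-$k$ representation gap of each sub-pattern; it reuses `ranked` and `matches` from the sketch above, and the paper's exact game formulation may differ.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    # Exact Shapley values of set function v over a small player set:
    # phi_i = sum over S not containing i of |S|!(n-|S|-1)!/n! * (v(S+i) - v(S)).
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(len(others) + 1):
            for S in combinations(others, r):
                S = frozenset(S)
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Hypothetical flagged group: degree=PhD AND gender=M.
conditions = [("degree", "PhD"), ("gender", "M")]

def gap(cond_subset, k=3):
    # Assumed characteristic function: top-k representation gap of the
    # sub-pattern induced by cond_subset (uses ranked/matches from above).
    pattern = dict(cond_subset)
    size = sum(matches(rec, pattern) for rec in ranked)
    if size == 0:
        return 0.0
    share_all = size / len(ranked)
    share_top = sum(matches(rec, pattern) for rec in ranked[:k]) / k
    return abs(share_top - share_all)

print(shapley_values(conditions, gap))
```

By the efficiency property of Shapley values, the attributions sum to the gap of the full pattern, so each condition's share of the bias is directly interpretable.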
Related papers
- Finite-Sample and Distribution-Free Fair Classification: Optimal Trade-off Between Excess Risk and Fairness, and the Cost of Group-Blindness [14.421493372559762]
We quantify the impact of enforcing algorithmic fairness and group-blindness in binary classification under group fairness constraints.
We propose a unified framework for fair classification that provides distribution-free and finite-sample fairness guarantees with controlled excess risk.
arXiv Detail & Related papers (2024-10-21T20:04:17Z) - A structured regression approach for evaluating model performance across intersectional subgroups [53.91682617836498]
Disaggregated evaluation is a central task in AI fairness assessment, where the goal is to measure an AI system's performance across different subgroups.
We introduce a structured regression approach to disaggregated evaluation that we demonstrate can yield reliable system performance estimates even for very small subgroups.
arXiv Detail & Related papers (2024-01-26T14:21:45Z) - Evaluating the Fairness of Discriminative Foundation Models in Computer
Vision [51.176061115977774]
We propose a novel taxonomy for bias evaluation of discriminative foundation models, such as Contrastive Language-Image Pretraining (CLIP).
We then systematically evaluate existing methods for mitigating bias in these models with respect to our taxonomy.
Specifically, we evaluate OpenAI's CLIP and OpenCLIP models for key applications, such as zero-shot classification, image retrieval and image captioning.
arXiv Detail & Related papers (2023-10-18T10:32:39Z) - Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model-agnostic post-processing framework, xOrder, for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z) - Explainable Disparity Compensation for Efficient Fair Ranking [0.3759936323189418]
Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data.
Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees.
In this paper we propose easily explainable data-driven compensatory measures for ranking functions.
arXiv Detail & Related papers (2023-07-25T09:12:50Z) - Towards Auditing Unsupervised Learning Algorithms and Human Processes
For Fairness [39.77120515767369]
Existing work has explored the two group classification problem for binary protected status variables using standard definitions of statistical parity.
Here we build upon the area of auditing by exploring the multi-group setting under more complex definitions of fairness.
arXiv Detail & Related papers (2022-09-20T12:30:05Z) - Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
- Fair Group-Shared Representations with Normalizing Flows [68.29997072804537]
We develop a fair representation learning algorithm that maps individuals belonging to different groups into a single group.
We show experimentally that our methodology is competitive with other fair representation learning algorithms.
arXiv Detail & Related papers (2022-01-17T10:49:49Z) - Fairness with Overlapping Groups [15.154984899546333]
A standard goal is to ensure the equality of fairness metrics across multiple overlapping groups simultaneously.
We reconsider this standard fair classification problem using a probabilistic population analysis.
Our approach unifies a variety of existing group-fair classification methods and enables extensions to a wide range of non-decomposable multiclass performance metrics and fairness measures.
arXiv Detail & Related papers (2020-06-24T05:01:10Z) - Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z) - Robust Optimization for Fairness with Noisy Protected Groups [85.13255550021495]
We study the consequences of naively relying on noisy protected group labels.
We introduce two new approaches using robust optimization.
We show that the robust approaches achieve better true group fairness guarantees than the naive approach.
arXiv Detail & Related papers (2020-02-21T14:58:37Z)