ShaRP: Explaining Rankings with Shapley Values
- URL: http://arxiv.org/abs/2401.16744v1
- Date: Tue, 30 Jan 2024 04:48:43 GMT
- Title: ShaRP: Explaining Rankings with Shapley Values
- Authors: Venetia Pliatsika and Joao Fonseca and Tilun Wang and Julia Stoyanovich
- Abstract summary: We present ShaRP, a framework that explains the contributions of features to different aspects of a ranked outcome.
ShaRP builds on the Quantitative Input Influence framework, and can compute the contributions of features for multiple Quantities of Interest.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Algorithmic decisions in critical domains such as hiring, college admissions,
and lending are often based on rankings. Because of the impact these decisions
have on individuals, organizations, and population groups, there is a need to
understand them: to know whether the decisions are abiding by the law, to help
individuals improve their rankings, and to design better ranking procedures.
In this paper, we present ShaRP (Shapley for Rankings and Preferences), a
framework that explains the contributions of features to different aspects of a
ranked outcome, and is based on Shapley values. Using ShaRP, we show that even
when the scoring function used by an algorithmic ranker is known and linear,
the weight of each feature does not correspond to its Shapley value
contribution. The contributions instead depend on the feature distributions,
and on the subtle local interactions between the scoring features. ShaRP builds
on the Quantitative Input Influence framework, and can compute the
contributions of features for multiple Quantities of Interest, including score,
rank, pair-wise preference, and top-k. Because it relies on black-box access to
the ranker, ShaRP can be used to explain both score-based and learned ranking
models. We show results of an extensive experimental validation of ShaRP using
real and synthetic datasets, showcasing its usefulness for qualitative
analysis.
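The paper's own estimation procedure builds on the QII framework; purely as an illustration of the core idea (not ShaRP's actual algorithm), the phenomenon described above can be reproduced with a standard permutation-sampling Shapley estimate for a rank quantity of interest, with out-of-coalition features imputed from the dataset's empirical distribution. All names and the synthetic data below are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: two features and a *known linear* scorer with equal weights.
# Even so, the features' Shapley contributions to rank differ, because
# contributions depend on the feature distributions, not just the weights.
X = rng.normal(size=(200, 2))
X[:, 1] *= 5.0                        # feature 1 has much larger spread
weights = np.array([0.5, 0.5])        # equal weights in the scoring function

def score(rows):
    return rows @ weights

def rank_of(x, X):
    """Quantity of interest: rank of point x in the dataset (0 = top)."""
    return int(np.sum(score(X) > score(x[None, :])[0]))

def shapley_rank(x, X, n_samples=500):
    """Permutation-sampling Shapley estimate for the rank QoI.

    Features outside the current coalition are imputed from a random
    dataset row (a crude stand-in for the background distribution).
    """
    d = x.shape[0]
    phi = np.zeros(d)
    for _ in range(n_samples):
        order = rng.permutation(d)
        z = X[rng.integers(len(X))].copy()   # start fully imputed
        prev = rank_of(z, X)
        for j in order:
            z[j] = x[j]                      # add feature j to the coalition
            cur = rank_of(z, X)
            phi[j] += prev - cur             # rank improvement credited to j
            prev = cur
    return phi / n_samples
```

For a point whose high rank is driven by feature 1, the estimate assigns far more credit to feature 1 than to feature 0 despite the equal scoring weights, because feature 1's larger variance dominates rank differences in this data.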
Related papers
- Learning when to rank: Estimation of partial rankings from sparse, noisy comparisons [0.0]
We develop a principled Bayesian methodology for learning partial rankings.
Our framework is adaptable to any statistical ranking method.
It gives a more parsimonious summary of the data than traditional ranking.
arXiv Detail & Related papers (2025-01-05T11:04:30Z)
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Full Stage Learning to Rank: A Unified Framework for Multi-Stage Systems [40.199257203898846]
We propose an improved ranking principle for multi-stage systems, namely the Generalized Probability Ranking Principle (GPRP)
GPRP emphasizes both the selection bias in each stage of the system pipeline as well as the underlying interest of users.
Our core idea is to first estimate the selection bias in the subsequent stages and then learn a ranking model that best complies with the downstream modules' selection bias.
arXiv Detail & Related papers (2024-05-08T06:35:04Z)
- RankSHAP: Shapley Value Based Feature Attributions for Learning to Rank [28.438428292619577]
We adopt an axiomatic game-theoretic approach, popular in the feature attribution community, to identify a set of fundamental axioms that every ranking-based feature attribution method should satisfy.
We then introduce Rank-SHAP, extending classical Shapley values to ranking.
We also perform an axiomatic analysis of existing rank attribution algorithms to determine their compliance with our proposed axioms.
arXiv Detail & Related papers (2024-05-03T04:43:24Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently-solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, this paper is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models.
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- Bipartite Ranking Fairness through a Model Agnostic Ordering Adjustment [54.179859639868646]
We propose a model agnostic post-processing framework xOrder for achieving fairness in bipartite ranking.
xOrder is compatible with various classification models and ranking fairness metrics, including supervised and unsupervised fairness metrics.
We evaluate our proposed algorithm on four benchmark data sets and two real-world patient electronic health record repositories.
arXiv Detail & Related papers (2023-07-27T07:42:44Z)
- Explainable Disparity Compensation for Efficient Fair Ranking [0.3759936323189418]
Ranking functions that are used in decision systems often produce disparate results for different populations because of bias in the underlying data.
Recent compensatory measures have mostly focused on opaque transformations of the ranking functions to satisfy fairness guarantees.
In this paper we propose easily explainable data-driven compensatory measures for ranking functions.
arXiv Detail & Related papers (2023-07-25T09:12:50Z)
- Learning List-Level Domain-Invariant Representations for Ranking [59.3544317373004]
We propose list-level alignment -- learning domain-invariant representations at the higher level of lists.
It leads to the first domain adaptation generalization bound for ranking, which in turn provides theoretical support for the proposed method.
arXiv Detail & Related papers (2022-12-21T04:49:55Z)
- PiRank: Learning To Rank via Differentiable Sorting [85.28916333414145]
We propose PiRank, a new class of differentiable surrogates for ranking.
We show that PiRank exactly recovers the desired metrics in the limit of zero temperature.
arXiv Detail & Related papers (2020-12-12T05:07:36Z)
- Towards Model-Agnostic Post-Hoc Adjustment for Balancing Ranking Fairness and Algorithm Utility [54.179859639868646]
Bipartite ranking aims to learn a scoring function that ranks positive individuals higher than negative ones from labeled data.
There have been rising concerns on whether the learned scoring function can cause systematic disparity across different protected groups.
We propose a model post-processing framework for balancing them in the bipartite ranking scenario.
arXiv Detail & Related papers (2020-06-15T10:08:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.