Valid Explanations for Learning to Rank Models
- URL: http://arxiv.org/abs/2004.13972v3
- Date: Sun, 17 May 2020 15:46:57 GMT
- Title: Valid Explanations for Learning to Rank Models
- Authors: Jaspreet Singh, Zhenye Wang, Megha Khosla, and Avishek Anand
- Abstract summary: We propose a model-agnostic local explanation method that identifies a small subset of input features as the explanation for a ranking decision.
We introduce new notions of validity and completeness of explanations, specific to rankings and based on the presence or absence of selected features.
- Score: 5.320400771224103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-to-rank (LTR) is a class of supervised learning techniques that apply to ranking problems involving a large number of features. The popularity and widespread application of LTR models for prioritizing information in a variety of domains makes their scrutability vital in today's landscape of fair and transparent learning systems. However, limited work exists on interpreting the decisions of learning systems that output rankings. In this paper we propose a model-agnostic local explanation method that seeks to identify a small subset of input features as the explanation for a ranking decision. We introduce new notions of validity and completeness of explanations, specific to rankings and based on the presence or absence of selected features, as a way of measuring the goodness of an explanation. We devise a novel optimization problem to maximize validity directly and propose greedy algorithms as solutions. In extensive quantitative experiments we show that our approach outperforms other model-agnostic explanation approaches across pointwise, pairwise, and listwise LTR models in validity while not compromising on completeness.
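To make the method concrete, below is a minimal sketch of the greedy, validity-driven feature selection the abstract describes. It assumes a black-box `score_fn`, treats zeroed-out values as "absent" features, and uses Kendall's tau between the original ranking and the ranking induced when only the selected features are present as a stand-in for the paper's validity measure; the function name, the masking convention, and the tau proxy are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
from scipy.stats import kendalltau

def greedy_explanation(score_fn, X, k):
    """Greedily select k features whose presence best preserves the ranking.

    score_fn -- black-box ranker: (n_docs, n_features) array -> n_docs scores
    X        -- feature matrix for the documents retrieved for one query
    k        -- explanation size budget
    """
    original_scores = score_fn(X)
    selected = []
    for _ in range(k):
        best_feature, best_tau = None, -np.inf
        for f in range(X.shape[1]):
            if f in selected:
                continue
            keep = selected + [f]
            masked = np.zeros_like(X)        # "absent" features are zeroed out
            masked[:, keep] = X[:, keep]     # "present" features keep their values
            tau, _ = kendalltau(original_scores, score_fn(masked))
            if not np.isnan(tau) and tau > best_tau:
                best_tau, best_feature = tau, f
        if best_feature is None:             # no candidate improves the proxy
            break
        selected.append(best_feature)
    return selected

# Toy usage with a hypothetical linear ranker over 20 documents, 10 features:
rng = np.random.default_rng(0)
w = rng.normal(size=10)
X = rng.normal(size=(20, 10))
print(greedy_explanation(lambda M: M @ w, X, k=3))
```

Note that the greedy loop optimizes the validity proxy directly, mirroring the abstract's strategy of maximizing validity rather than fidelity to the raw scores.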
Related papers
- Evaluating Human Alignment and Model Faithfulness of LLM Rationale [66.75309523854476]
We study how well large language models (LLMs) explain their generations through rationales.
We show that prompting-based methods are less "faithful" than attribution-based explanations.
arXiv Detail & Related papers (2024-06-28T20:06:30Z)
- Enabling Regional Explainability by Automatic and Model-agnostic Rule Extraction [44.23023063715179]
Rule extraction could significantly aid fields like disease diagnosis, disease progression estimation, and drug discovery.
Existing methods compromise the performance of rules for the minority class to maximize overall performance.
We propose a model-agnostic approach for extracting rules from specific subgroups of data, featuring automatic rule generation for numerical features (see the surrogate-tree sketch after this list).
arXiv Detail & Related papers (2024-06-25T18:47:50Z)
- Probing the Decision Boundaries of In-context Learning in Large Language Models [31.977886254197138]
We propose a new mechanism to probe and understand in-context learning through the lens of decision boundaries for in-context binary classification.
To our surprise, we find that the decision boundaries learned by current LLMs in simple binary classification tasks are often irregular and non-smooth.
arXiv Detail & Related papers (2024-06-17T06:00:24Z)
- Informed Decision-Making through Advancements in Open Set Recognition and Unknown Sample Detection [0.0]
Open set recognition (OSR) aims to bring classification tasks closer to real-world conditions, where inputs from unknown classes can appear at test time.
This study proposes an algorithm that explores a new representation of the feature space to improve classification in OSR tasks.
arXiv Detail & Related papers (2024-05-09T15:15:34Z)
- Learning Fair Ranking Policies via Differentiable Optimization of Ordered Weighted Averages [55.04219793298687]
This paper shows how efficiently solvable fair ranking models can be integrated into the training loop of Learning to Rank.
In particular, it is the first to show how to backpropagate through constrained optimizations of OWA objectives, enabling their use in integrated prediction and decision models (see the OWA sketch after this list).
arXiv Detail & Related papers (2024-02-07T20:53:53Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual Explanation [132.77005365032468]
We propose a novel framework for Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate its effectiveness, with better validity, sparsity, and proximity (see the counterfactual-search sketch after this list).
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Model-Agnostic Explanations using Minimal Forcing Subsets [11.420687735660097]
We propose a new model-agnostic algorithm to identify a minimal set of training samples that are indispensable for a given model's decision.
Our algorithm identifies such a set of "indispensable" samples iteratively by solving a constrained optimization problem.
Results show that our algorithm is an effective and easy-to-comprehend tool that helps to better understand local model behavior (see the brute-force sketch after this list).
arXiv Detail & Related papers (2020-11-01T22:45:16Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve the user experience and uncover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
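For the regional rule-extraction entry above, a generic way to obtain subgroup-specific rules is to fit a shallow decision tree on that subgroup only and read its paths off as rules. The sketch below does this with scikit-learn on a stock dataset; the subgroup condition is a hypothetical example, and this surrogate-tree approach is a common baseline, not the method proposed in that paper.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# Fit a shallow tree on one subgroup only, then read its paths off as rules.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
subgroup = X["mean radius"] > X["mean radius"].median()   # hypothetical region
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X[subgroup], y[subgroup])
print(export_text(tree, feature_names=list(X.columns)))   # human-readable rules
```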
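For the fair-ranking entry above, the core object is the ordered weighted average (OWA): sort the utilities, then take a weighted sum, with decreasing weights emphasizing the worst-off items. The toy sketch below only shows that a plain OWA objective is differentiable in PyTorch, since gradients flow through torch.sort; the paper goes further and backpropagates through constrained optimizations of OWA objectives, which this sketch does not attempt. The parameters and weights are made up for illustration.

```python
import torch

def owa(utilities, weights):
    """Ordered weighted average: sort utilities ascending, then take a
    weighted sum. Decreasing weights put the most mass on the worst-off
    items, which is the fairness-encouraging case."""
    sorted_utils, _ = torch.sort(utilities)   # ascending: worst-off first
    return (weights * sorted_utils).sum()

torch.manual_seed(0)
theta = torch.randn(5, requires_grad=True)               # stand-in parameters
weights = torch.tensor([0.40, 0.25, 0.15, 0.12, 0.08])   # decreasing weights

loss = -owa(torch.tanh(theta), weights)   # maximize the fair OWA objective
loss.backward()                           # gradients flow through the sort
print(theta.grad)
```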
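For the MACE entry above, the sketch below illustrates the three quantities its evaluation mentions (validity, sparsity, proximity) in the simplest possible search: sample random sparse perturbations and keep the valid candidate with the best proximity/sparsity trade-off. This is a generic baseline assuming a black-box `predict` function, not MACE's RL-based search or gradient-less descent, and the cost weighting is hypothetical.

```python
import numpy as np

def counterfactual_search(predict, x, target, n_steps=2000, scale=0.5, seed=0):
    """Random-search baseline: sample sparse perturbations of x and keep the
    valid candidate (label flips to `target`) with the best proximity/sparsity
    trade-off."""
    rng = np.random.default_rng(seed)
    best, best_cost = None, np.inf
    for _ in range(n_steps):
        mask = rng.random(x.shape) < 0.2          # perturb ~20% of features
        candidate = x + mask * rng.normal(scale=scale, size=x.shape)
        if predict(candidate) != target:          # validity: label must flip
            continue
        proximity = np.abs(candidate - x).sum()   # L1 distance to the input
        sparsity = mask.sum()                     # number of changed features
        cost = proximity + 0.5 * sparsity         # hypothetical weighting
        if cost < best_cost:
            best, best_cost = candidate, cost
    return best
```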
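For the minimal-forcing-subsets entry above, a brute-force way to see the idea is to repeatedly refit the model without one training sample and drop any sample whose removal leaves the decision on the test point unchanged; whatever survives is a (locally) minimal set of indispensable samples. The sketch below does this with a scikit-learn logistic regression as a toy stand-in for the paper's constrained-optimization formulation, not their algorithm.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def minimal_forcing_subset(X, y, x_test):
    """Greedy stand-in: drop any training sample whose removal leaves the
    prediction on x_test unchanged. O(n^2) refits -- fine for a toy only."""
    keep = list(range(len(X)))
    base = LogisticRegression().fit(X, y).predict([x_test])[0]
    changed = True
    while changed:
        changed = False
        for i in list(keep):
            trial = [j for j in keep if j != i]
            if len(set(y[trial])) < 2:            # refitting needs both classes
                continue
            model = LogisticRegression().fit(X[trial], y[trial])
            if model.predict([x_test])[0] == base:   # sample i is dispensable
                keep = trial
                changed = True
    return keep                                   # indices of indispensable samples
```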
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information (including all generated content) and is not responsible for any consequences of its use.