TRIVEA: Transparent Ranking Interpretation using Visual Explanation of
Black-Box Algorithmic Rankers
- URL: http://arxiv.org/abs/2308.14622v1
- Date: Mon, 28 Aug 2023 16:58:44 GMT
- Title: TRIVEA: Transparent Ranking Interpretation using Visual Explanation of
Black-Box Algorithmic Rankers
- Authors: Jun Yuan, Kaustav Bhattacharjee, Akm Zahirul Islam and Aritra Dasgupta
- Abstract summary: Ranking schemes drive many real-world decisions, like, where to study, whom to hire, what to buy, etc.
At the heart of most of these decisions are opaque ranking schemes, which dictate the ordering of data entities.
We aim to enable transparency in ranking interpretation by using algorithmic rankers that learn from available data and by enabling human reasoning about the learned ranking differences using explainable AI (XAI) methods.
- Score: 4.336037935247747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ranking schemes drive many real-world decisions, such as where to study, whom to hire, and what to buy. Many of these decisions carry high consequences. For example, a university can be deemed less prestigious if it is not featured in a top-k list, and consumers might not even explore products that are not recommended to them. At the heart of most of these decisions are opaque ranking schemes, which dictate the ordering of data entities but whose internal logic is inaccessible or proprietary. Drawing inferences about ranking differences is a guessing game for the stakeholders, such as the rankees (i.e., the entities who are ranked, like product companies) and the decision-makers (i.e., those who use the rankings, like buyers). In this paper, we
aim to enable transparency in ranking interpretation by using algorithmic
rankers that learn from available data and by enabling human reasoning about
the learned ranking differences using explainable AI (XAI) methods. To realize
this aim, we leverage the exploration-explanation paradigm of human-data
interaction to let human stakeholders explore subsets and groupings of complex
multi-attribute ranking data using visual explanations of model fit and
attribute influence on rankings. We realize this explanation paradigm for
transparent ranking interpretation in TRIVEA, a visual analytic system that is
fueled by: i) visualizations of model fit derived from algorithmic rankers that
learn the associations between attributes and rankings from available data and
ii) visual explanations derived from XAI methods that help abstract important
patterns, such as the relative influence of attributes in different ranking
ranges. Using TRIVEA, end users not trained in data science have the agency to
transparently reason about the global and local behavior of the rankings
without needing to open black-box ranking models, and to develop confidence in the
resulting attribute-based inferences. We demonstrate the efficacy of TRIVEA
using multiple usage scenarios and subjective feedback from researchers with
diverse domain expertise.
- Keywords: Visual Analytics, Learning-to-Rank, Explainable ML, Ranking
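The pipeline the abstract describes can be sketched in a few lines, assuming scikit-learn, a toy ranking table, and permutation importance as a stand-in XAI method (the column names, model choice, and data are illustrative, not the system's actual implementation):

    import numpy as np
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.inspection import permutation_importance

    # Hypothetical ranking table: three attributes plus an observed rank (1 = best).
    rng = np.random.default_rng(0)
    df = pd.DataFrame(rng.random((200, 3)), columns=["research", "teaching", "outlook"])
    df["rank"] = (-(2 * df["research"] + df["teaching"])).rank().astype(int)
    X, y = df[["research", "teaching", "outlook"]], df["rank"]

    # Algorithmic ranker: learn the attribute-to-rank association from data.
    model = GradientBoostingRegressor(random_state=0).fit(X, y)
    print("model fit (R^2):", model.score(X, y))  # visualized as model fit in TRIVEA

    # Attribute influence within one ranking range, e.g. the top 50 entities.
    top = df.nsmallest(50, "rank")
    imp = permutation_importance(model, top[X.columns], top["rank"], random_state=0)
    for name, value in zip(X.columns, imp.importances_mean):
        print(f"{name}: {value:.3f}")

In TRIVEA, quantities like these feed the model-fit and attribute-influence visualizations rather than being printed.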
Related papers
- Is Interpretable Machine Learning Effective at Feature Selection for Neural Learning-to-Rank? [15.757181795925336]
Neural ranking models have become increasingly popular for real-world search and recommendation systems.
Unlike their tree-based counterparts, neural models are much less interpretable.
This is particularly disadvantageous since interpretability is highly important for real-world systems.
arXiv Detail & Related papers (2024-05-13T14:26:29Z)
- XAL: EXplainable Active Learning Makes Classifiers Better Low-resource Learners [71.8257151788923]
We propose a novel Explainable Active Learning framework (XAL) for low-resource text classification.
XAL encourages classifiers to justify their inferences and delve into unlabeled data for which they cannot provide reasonable explanations.
Experiments on six datasets show that XAL achieves consistent improvement over 9 strong baselines.
arXiv Detail & Related papers (2023-10-09T08:07:04Z)
- D-BIAS: A Causality-Based Human-in-the-Loop System for Tackling Algorithmic Bias [57.87117733071416]
We propose D-BIAS, a visual interactive tool that embodies a human-in-the-loop AI approach for auditing and mitigating social biases.
A user can detect the presence of bias against a group by identifying unfair causal relationships in the causal network.
For each interaction, say weakening/deleting a biased causal edge, the system uses a novel method to simulate a new (debiased) dataset.
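A minimal sketch of this edge-deletion-and-resimulation interaction, assuming a linear causal model and hypothetical variables (D-BIAS's actual simulation method is more sophisticated):

    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)
    gender = rng.integers(0, 2, 500).astype(float)   # sensitive attribute
    skill = rng.normal(size=500)
    # Biased data: salary depends on skill AND gender.
    salary = 2.0 * skill + 1.5 * gender + rng.normal(scale=0.5, size=500)

    # User deletes the biased causal edge gender -> salary:
    # refit salary on its remaining parent(s) and resimulate it.
    m = LinearRegression().fit(skill.reshape(-1, 1), salary)
    fitted = m.predict(skill.reshape(-1, 1))
    residual_sd = np.std(salary - fitted)
    debiased_salary = fitted + rng.normal(scale=residual_sd, size=500)

    print("corr(gender, salary) before:", np.corrcoef(gender, salary)[0, 1].round(2))
    print("corr(gender, salary) after: ", np.corrcoef(gender, debiased_salary)[0, 1].round(2))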
arXiv Detail & Related papers (2022-08-10T03:41:48Z)
- Re-Examining Human Annotations for Interpretable NLP [80.81532239566992]
We conduct controlled experiments on crowdsourcing websites with two widely used datasets in interpretable NLP.
We compare the annotation results obtained from recruiting workers satisfying different levels of qualification.
Our results reveal that annotation quality is highly dependent on the workers' qualifications, and that workers can be guided by the instructions to provide certain annotations.
arXiv Detail & Related papers (2022-04-10T02:27:30Z)
- MINER: Improving Out-of-Vocabulary Named Entity Recognition from an Information Theoretic Perspective [57.19660234992812]
NER models have achieved promising performance on standard NER benchmarks.
Recent studies show that previous approaches may over-rely on entity mention information, resulting in poor performance on out-of-vocabulary (OOV) entity recognition.
We propose MINER, a novel NER learning framework, to remedy this issue from an information-theoretic perspective.
arXiv Detail & Related papers (2022-04-09T05:18:20Z)
- Integrating Rankings into Quantized Scores in Peer Review [61.27794774537103]
In peer review, reviewers are usually asked to provide scores for the papers, but such quantized scores can be miscalibrated across reviewers.
To mitigate this issue, conferences have started to ask reviewers to additionally provide a ranking of the papers they have reviewed.
There is no standard procedure for using this ranking information, and Area Chairs may use it in different ways.
We take a principled approach to integrate the ranking information into the scores.
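One way to make this concrete: project the scores onto the nearest (least-squares) sequence that respects the reviewer's ranking, which is an isotonic regression problem. The sketch below illustrates that idea and is not necessarily the paper's exact procedure:

    import numpy as np
    from sklearn.isotonic import IsotonicRegression

    scores = np.array([6.0, 7.5, 7.0, 4.0])   # reviewer's quantized scores
    ranking = [2, 1, 0, 3]                     # paper indices, best paper first

    ordered = scores[ranking]                  # scores in ranked order
    # Nearest non-increasing sequence: earlier-ranked papers score at least as high.
    consistent = IsotonicRegression(increasing=False).fit_transform(
        np.arange(len(ordered)), ordered)

    adjusted = np.empty_like(scores)
    adjusted[ranking] = consistent
    print(adjusted)   # [6.0, 7.25, 7.25, 4.0]: the 7.0/7.5 conflict is resolved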
arXiv Detail & Related papers (2022-04-05T19:39:13Z)
- Data Driven and Visualization based Strategization for University Rank Improvement using Decision Trees [1.933681537640272]
We present a novel idea of classifying the rankings data using Decision Tree (DT) based algorithms and retrieve decision paths for rank improvement using data visualization techniques.
The proposed methodology can aid higher education institutions (HEIs) in quantitatively assessing the scope of improvement, sketching a fine-grained long-term action plan, and preparing a suitable road-map.
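A minimal sketch of the decision-tree side of this idea, with illustrative attribute names and synthetic data in place of the paper's ranking dataset:

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(2)
    X = rng.random((300, 3))                        # e.g. teaching, research, citations
    band = (X @ [0.5, 0.3, 0.2] > 0.5).astype(int)  # 1 = top rank band, 0 = lower band

    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, band)
    print(export_text(tree, feature_names=["teaching", "research", "citations"]))

    # Decision path for one institution: the thresholds along this path
    # indicate which attributes to target for rank improvement.
    node_path = tree.decision_path(X[:1]).indices
    print("visited nodes:", node_path)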
arXiv Detail & Related papers (2021-10-18T06:41:45Z)
- Analysis of Multivariate Scoring Functions for Automatic Unbiased Learning to Rank [14.827143632277274]
Automatic unbiased learning-to-rank (AutoULTR) algorithms that jointly learn user bias models (i.e., propensity models) with unbiased rankers have received a lot of attention due to their superior performance and low deployment cost in practice.
Recent advances in context-aware learning-to-rank models have shown that multivariate scoring functions, which read multiple documents together and predict their ranking scores jointly, are more powerful than uni-variate ranking functions in ranking tasks with human-annotated relevance labels.
Our experiments with synthetic clicks on two large-scale benchmark datasets show that AutoULTR models with permutation-invariant multivariate scoring functions significantly outperform their univariate counterparts.
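The permutation property can be demonstrated with a toy DeepSets-style scorer; this sketch illustrates the general design, not the architectures evaluated in the paper:

    import numpy as np

    rng = np.random.default_rng(4)
    W1, W2, w = rng.normal(size=(4, 8)), rng.normal(size=(8, 8)), rng.normal(size=16)

    def score_list(X):
        h = np.tanh(X @ W1)                  # per-document encoding
        context = np.tanh(h.mean(0) @ W2)    # pooled, order-independent list summary
        # Joint scoring: each document is scored together with the list context.
        joint = np.concatenate([h, np.tile(context, (len(X), 1))], axis=1)
        return joint @ w                     # one score per document

    X = rng.normal(size=(5, 4))              # five documents, four features each
    perm = rng.permutation(5)
    # Permuting the input documents just permutes the output scores.
    assert np.allclose(score_list(X)[perm], score_list(X[perm]))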
arXiv Detail & Related papers (2020-08-20T16:31:59Z)
- Controlling Fairness and Bias in Dynamic Learning-to-Rank [31.41843594914603]
We propose a learning algorithm that ensures notions of amortized group fairness, while simultaneously learning the ranking function from implicit feedback data.
The algorithm takes the form of a controller that integrates unbiased estimators for both fairness and utility.
In addition to its rigorous theoretical foundation and convergence guarantees, we find empirically that the algorithm is highly practical and robust.
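A simplified controller of this kind can be sketched in a few lines; the merit-proportional exposure target below follows the spirit of the paper, but the gain, exposure model, and update rule are illustrative:

    import numpy as np

    relevance = np.array([0.9, 0.8, 0.3])   # estimated item relevance (merit)
    exposure = np.zeros(3)                   # accumulated exposure per item
    lam = 0.05                               # controller gain (illustrative)

    for _ in range(1000):
        # Merit-proportional exposure target; under-exposed items get a boost.
        target = relevance / relevance.sum() * exposure.sum()
        err = target - exposure
        order = np.argsort(-(relevance + lam * err))   # controlled ranking
        # Position-based exposure model: higher positions get more exposure.
        exposure[order] += 1.0 / np.log2(np.arange(2, 5))

    print("exposure share:", (exposure / exposure.sum()).round(2))
    print("merit share:   ", (relevance / relevance.sum()).round(2))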
arXiv Detail & Related papers (2020-05-29T17:57:56Z)
- Valid Explanations for Learning to Rank Models [5.320400771224103]
We propose a model-agnostic local explanation method that seeks to identify a small subset of input features as an explanation for a ranking decision.
We introduce new notions of validity and completeness of explanations specifically for rankings, based on the presence or absence of selected features.
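A minimal sketch of the presence/absence idea: score documents with only the selected features kept and measure how well the original ranking is preserved (the paper's formal validity and completeness definitions are richer than this):

    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(3)
    X = rng.random((20, 5))                    # 20 documents, 5 features
    w = np.array([2.0, 1.0, 0.1, 0.05, 0.0])   # a toy linear ranker

    selected = [0, 1]                          # candidate explanation features
    X_masked = np.zeros_like(X)
    X_masked[:, selected] = X[:, selected]     # absent features zeroed out

    # Kendall's tau between the two score vectors is a rank correlation:
    # high tau means the selected features alone preserve the ranking.
    tau, _ = kendalltau(X @ w, X_masked @ w)
    print("rank agreement using only selected features:", round(tau, 2))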
arXiv Detail & Related papers (2020-04-29T06:21:56Z)
- Bias in Multimodal AI: Testbed for Fair Automatic Recruitment [73.85525896663371]
We study how current multimodal algorithms based on heterogeneous sources of information are affected by sensitive elements and inner biases in the data.
We train automatic recruitment algorithms using a set of multimodal synthetic profiles consciously scored with gender and racial biases.
Our methodology and results show how to generate fairer AI-based tools in general, and in particular fairer automated recruitment systems.
arXiv Detail & Related papers (2020-04-15T15:58:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.