A novel post-hoc explanation comparison metric and applications
- URL: http://arxiv.org/abs/2311.10811v1
- Date: Fri, 17 Nov 2023 18:35:13 GMT
- Title: A novel post-hoc explanation comparison metric and applications
- Authors: Shreyan Mitra and Leilani Gilpin
- Abstract summary: Explanatory systems make the behavior of machine learning models more transparent, but are often inconsistent.
This paper presents the Shreyan Distance, a novel metric based on the weighted difference between ranked feature importance lists produced by such systems.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explanatory systems make the behavior of machine learning models more
transparent, but are often inconsistent. To quantify the differences between
explanatory systems, this paper presents the Shreyan Distance, a novel metric
based on the weighted difference between ranked feature importance lists
produced by such systems. This paper uses the Shreyan Distance to compare two
explanatory systems, SHAP and LIME, for both regression and classification
learning tasks. Because we find that the average Shreyan Distance varies
significantly between these two tasks, we conclude that consistency between
explainers not only depends on inherent properties of the explainers
themselves, but also on the type of learning task. This paper further contributes
the XAISuite library, which integrates the Shreyan Distance algorithm into
machine learning pipelines.
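The abstract defines the Shreyan Distance only at a high level: a weighted difference between the ranked feature importance lists that two explainers produce. The exact weighting scheme lives in the paper and the XAISuite library; the sketch below is a hedged illustration of the general idea, assuming linearly decaying weights so that disagreements among top-ranked features count most. The function name, weighting, and normalization here are assumptions, not the paper's definition.

```python
def rank_distance(ranking_a, ranking_b):
    """Weighted difference between two ranked feature-importance lists.

    ranking_a, ranking_b: lists of feature names, most important first.
    Returns a score in [0, 1]; 0 means the rankings agree exactly.
    """
    n = len(ranking_a)
    assert set(ranking_a) == set(ranking_b), "rankings must cover the same features"
    pos_b = {feat: i for i, feat in enumerate(ranking_b)}
    # Linearly decaying weights: displacement of a top-ranked feature costs most.
    weights = [n - i for i in range(n)]
    total = sum(w * abs(i - pos_b[feat])
                for (i, feat), w in zip(enumerate(ranking_a), weights))
    # Normalize by an upper bound: each feature at rank i can move at most
    # max(i, n - 1 - i) positions, so the result always lies in [0, 1].
    worst = sum(w * max(i, n - 1 - i) for i, w in zip(range(n), weights))
    return total / worst if worst else 0.0
```

Under this sketch, identical rankings score 0 and large displacements of top-ranked features push the score toward 1; the published metric may weight and normalize differently.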
Related papers
- REBAR: Retrieval-Based Reconstruction for Time-series Contrastive Learning [64.08293076551601]
We propose a novel method of using a learned measure for identifying positive pairs.
Our Retrieval-Based Reconstruction (REBAR) measure quantifies the similarity between two sequences.
We show that the REBAR error is a predictor of mutual class membership.
arXiv Detail & Related papers (2023-11-01T13:44:45Z) - Interpretable Differencing of Machine Learning Models [20.99877540751412]
We formalize the problem of model differencing as one of predicting a dissimilarity function of two ML models' outputs.
A Joint Surrogate Tree (JST) is composed of two conjoined decision tree surrogates for the two models.
A JST provides an intuitive representation of differences and places the changes in the context of the models' decision logic.
arXiv Detail & Related papers (2023-06-10T16:15:55Z) - SuSana Distancia is all you need: Enforcing class separability in metric
learning via two novel distance-based loss functions for few-shot image
classification [0.9236074230806579]
We propose two loss functions that account for the importance of the embedding vectors by considering the intra-class and inter-class distances among the few available samples.
Our results show a significant improvement in accuracy on the miniImageNet benchmark, outperforming other metric-based few-shot learning methods by a margin of 2%.
arXiv Detail & Related papers (2023-05-15T23:12:09Z) - The XAISuite framework and the implications of explanatory system
dissonance [0.0]
This paper compares two explanatory systems, SHAP and LIME, based on the correlation of their respective importance scores.
The magnitude of the importance scores is not a significant factor in explanation consistency.
The similarity between SHAP and LIME importance scores cannot predict model accuracy.
arXiv Detail & Related papers (2023-04-15T04:40:03Z) - Comparing Feature Importance and Rule Extraction for Interpretability on
Text Data [7.893831644671976]
We show that using different methods can lead to unexpectedly different explanations.
To quantify this effect, we propose a new approach to compare explanations produced by different methods.
arXiv Detail & Related papers (2022-07-04T13:54:55Z) - Adaptive Hierarchical Similarity Metric Learning with Noisy Labels [138.41576366096137]
We propose an Adaptive Hierarchical Similarity Metric Learning method.
It considers two kinds of noise-insensitive information, i.e., class-wise divergence and sample-wise consistency.
Our method achieves state-of-the-art performance compared with current deep metric learning approaches.
arXiv Detail & Related papers (2021-10-29T02:12:18Z) - Rethinking Deep Contrastive Learning with Embedding Memory [58.66613563148031]
Pair-wise loss functions have been extensively studied and shown to continuously improve the performance of deep metric learning (DML).
We provide a new methodology for systematically studying weighting strategies of various pair-wise loss functions, and rethink pair weighting with an embedding memory.
arXiv Detail & Related papers (2021-03-25T17:39:34Z) - A Taxonomy of Similarity Metrics for Markov Decision Processes [62.997667081978825]
In recent years, transfer learning has succeeded in making Reinforcement Learning (RL) algorithms more efficient.
In this paper, we propose a categorization of these metrics and analyze the definitions of similarity proposed so far.
arXiv Detail & Related papers (2021-03-08T12:36:42Z) - Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z) - Document Modeling with Graph Attention Networks for Multi-grained
Machine Reading Comprehension [127.3341842928421]
Natural Questions is a new challenging machine reading comprehension benchmark.
It has two-grained answers: a long answer (typically a paragraph) and a short answer (one or more entities inside the long answer).
Existing methods treat these two sub-tasks individually during training while ignoring their dependencies.
We present a novel multi-grained machine reading comprehension framework that models documents according to their hierarchical nature.
arXiv Detail & Related papers (2020-05-12T14:20:09Z) - Building and Interpreting Deep Similarity Models [0.0]
We propose to make similarities interpretable by augmenting them with an explanation in terms of input features.
We develop BiLRP, a scalable and theoretically founded method to systematically decompose similarity scores on pairs of input features.
arXiv Detail & Related papers (2020-03-11T17:46:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.