Scene Text Recognition Models Explainability Using Local Features
- URL: http://arxiv.org/abs/2310.09549v1
- Date: Sat, 14 Oct 2023 10:01:52 GMT
- Title: Scene Text Recognition Models Explainability Using Local Features
- Authors: Mark Vincent Ty, Rowel Atienza
- Abstract summary: Scene Text Recognition (STR) Explainability is the study of how humans can understand the cause of a model's prediction.
Recent XAI literature on STR provides only a simple analysis and does not fully explore other XAI methods.
We focus on data explainability frameworks, called attribution-based methods, which explain the important parts of the input data to deep learning models.
We propose a new method, STRExp, that takes local explanations into consideration, i.e. the explanations of individual character predictions.
- Score: 11.990881697492078
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Explainable AI (XAI) is the study of how humans can understand the cause of a model's prediction. In this work, the problem of interest is Scene Text Recognition (STR) Explainability: using XAI to understand the cause of an STR model's prediction. Recent XAI literature on STR provides only a simple analysis and does not fully explore other XAI methods. In this study, we focus on data explainability frameworks, called attribution-based methods, which explain the important parts of the input data to deep learning models. However, integrating them into STR produces inconsistent and ineffective explanations, because they only explain the model in the global context. To solve this problem, we propose a new method, STRExp, that takes local explanations into consideration, i.e. the explanations of individual character predictions. This is then benchmarked across different attribution-based methods on different STR datasets and evaluated across different STR models.
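As a rough, hypothetical illustration of the distinction the abstract draws between global and local (per-character) explanations, here is a minimal gradient-times-input sketch in PyTorch. It is not the paper's STRExp method; `str_model`, the tensor shapes, and the choice of attribution are all assumptions.

```python
# Minimal sketch contrasting a global attribution for the whole predicted word
# with per-character (local) attributions.
# Assumptions: `str_model` is a hypothetical STR model mapping an image tensor
# of shape (1, C, H, W) to per-timestep character logits of shape
# (1, T, num_classes); gradient-times-input stands in for any attribution method.
import torch

def gradient_attribution(str_model, image, timestep=None):
    image = image.detach().clone().requires_grad_(True)
    logits = str_model(image)                        # (1, T, num_classes)
    preds = logits.argmax(dim=-1)                    # (1, T) predicted characters
    if timestep is None:
        # Global: one scalar score for the whole predicted sequence.
        score = logits.gather(-1, preds.unsqueeze(-1)).sum()
    else:
        # Local: score of the single character predicted at `timestep`.
        score = logits[0, timestep, preds[0, timestep]]
    score.backward()
    return (image.grad * image).detach().squeeze(0)  # attribution map over the image

# Local explanations: one attribution map per predicted character, e.g.
# local_maps = [gradient_attribution(str_model, img, t) for t in range(num_chars)]
```

In the global variant, scores for all predicted characters are summed before backpropagation, so evidence for different characters blends into a single map; the local variant yields one map per character, which is the granularity of explanation that STRExp targets.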
Related papers
- Analyzing the Influence of Training Samples on Explanations [5.695152528716705]
We introduce the novel problem of identifying training data samples that have a high influence on a given explanation.
We then propose an algorithm that identifies such influential training samples.
arXiv Detail & Related papers (2024-06-05T07:20:06Z) - Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z) - Explainability for Large Language Models: A Survey [59.67574757137078]
Large language models (LLMs) have demonstrated impressive capabilities in natural language processing.
This paper introduces a taxonomy of explainability techniques and provides a structured overview of methods for explaining Transformer-based language models.
arXiv Detail & Related papers (2023-09-02T22:14:26Z) - Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z) - Understanding Post-hoc Explainers: The Case of Anchors [6.681943980068051]
We present a theoretical analysis of a rule-based interpretability method that highlights a small set of words to explain a text classifier's decision.
After formalizing its algorithm and providing useful insights, we demonstrate mathematically that Anchors produces meaningful results.
arXiv Detail & Related papers (2023-03-15T17:56:34Z) - Explanations Based on Item Response Theory (eXirt): A Model-Specific Method to Explain Tree-Ensemble Model in Trust Perspective [0.4749981032986242]
Methods such as Ciu, Dalex, Eli5, Lofo, Shap and Skater have emerged to explain black-box models.
eXirt is able to generate global explanations of tree-ensemble models as well as local explanations for individual instances through IRT.
arXiv Detail & Related papers (2022-10-18T15:30:14Z) - Discrete Reasoning Templates for Natural Language Understanding [79.07883990966077]
We present an approach that reasons about complex questions by decomposing them to simpler subquestions.
We derive the final answer according to instructions in a predefined reasoning template.
We show that our approach is competitive with the state-of-the-art while being interpretable and requiring little supervision.
arXiv Detail & Related papers (2021-04-05T18:56:56Z) - Data Representing Ground-Truth Explanations to Evaluate XAI Methods [0.0]
Explainable artificial intelligence (XAI) methods are currently evaluated with approaches that mostly originated in interpretable machine learning (IML) research.
We propose to represent explanations with canonical equations that can be used to evaluate the accuracy of XAI methods.
arXiv Detail & Related papers (2020-11-18T16:54:53Z) - Interpretable Multi-dataset Evaluation for Named Entity Recognition [110.64368106131062]
We present a general methodology for interpretable evaluation for the named entity recognition (NER) task.
The proposed evaluation method enables us to interpret the differences in models and datasets, as well as the interplay between them.
By making our analysis tool available, we make it easy for future researchers to run similar analyses and drive progress in this area.
arXiv Detail & Related papers (2020-11-13T10:53:27Z) - A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones (a toy agreement computation is sketched after this list).
arXiv Detail & Related papers (2020-09-25T12:01:53Z) - Ontology-based Interpretable Machine Learning for Textual Data [35.01650633374998]
We introduce a novel interpreting framework that learns an interpretable model based on a sampling technique to explain prediction models.
To narrow down the search space for explanations, we design a learnable anchor algorithm.
We further introduce a set of rules for combining the learned interpretable representations with anchors to generate comprehensible explanations.
arXiv Detail & Related papers (2020-04-01T02:51:57Z)
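Relating to the diagnostic-study entry above, the following toy sketch shows one way saliency scores might be compared against binary human rationale annotations. It assumes average precision over the saliency ranking as the agreement measure; the function name and the example values are invented for illustration and are not taken from that paper.

```python
# Toy agreement score between a method's token saliency and human rationales.
# Assumption: average precision of the saliency ranking against the binary
# human mask is used as the agreement measure; IoU or AUROC are alternatives.
from sklearn.metrics import average_precision_score

def rationale_agreement(saliency_scores, human_mask):
    """saliency_scores: per-token importance from an explainability technique.
    human_mask: 1 for tokens a human annotator marked as salient, else 0."""
    return average_precision_score(human_mask, saliency_scores)

# A 6-token example where the two human-marked tokens are ranked highest,
# giving perfect agreement (average precision = 1.0).
print(rationale_agreement([0.9, 0.1, 0.8, 0.05, 0.2, 0.0],
                          [1,   0,   1,   0,    0,   0]))
```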
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.