Ontology-based Interpretable Machine Learning for Textual Data
- URL: http://arxiv.org/abs/2004.00204v1
- Date: Wed, 1 Apr 2020 02:51:57 GMT
- Title: Ontology-based Interpretable Machine Learning for Textual Data
- Authors: Phung Lai, NhatHai Phan, Han Hu, Anuja Badeti, David Newman, Dejing
Dou
- Abstract summary: We introduce a novel interpreting framework that learns an interpretable model based on a sampling technique to explain prediction models.
To narrow down the search space for explanations, we design a learnable anchor algorithm.
A set of regulations is further introduced for combining learned interpretable representations with anchors to generate comprehensible explanations.
- Score: 35.01650633374998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we introduce a novel interpreting framework that learns an
interpretable model based on an ontology-based sampling technique to explain
agnostic prediction models. Different from existing approaches, our algorithm
considers contextual correlation among words, described in domain knowledge
ontologies, to generate semantic explanations. To narrow down the search space
for explanations, which is a major problem of long and complicated text data,
we design a learnable anchor algorithm to better extract explanations locally.
A set of regulations is further introduced for combining learned
interpretable representations with anchors to generate comprehensible semantic
explanations. An extensive experiment conducted on two real-world datasets
shows that our approach generates more precise and insightful explanations
compared with baseline approaches.
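The paper's learnable anchor algorithm is not spelled out in this listing. As a rough illustration of the general idea it builds on, the toy sketch below perturbs a text by swapping words for ontology-related substitutes and greedily searches for a small "anchor" set of words whose presence keeps the model's prediction stable. The classifier, the `ONTOLOGY` mapping, and all thresholds are hypothetical stand-ins for illustration only, not the authors' method.

```python
import itertools
import random

# Toy sentiment "model": predicts 1 when positive words outnumber negative ones.
POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}

def predict(tokens):
    score = sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)
    return int(score > 0)

# Hypothetical stand-in for a domain-knowledge ontology: each word maps to
# semantically related words that the sampler may substitute for it.
ONTOLOGY = {
    "good": ["decent", "fine"],
    "movie": ["film"],
    "plot": ["story"],
}

def perturb(tokens, anchor, n=200, rng=random):
    """Sample neighbouring texts: words outside the anchor may be swapped for
    ontology-related words; anchored word positions are held fixed."""
    samples = []
    for _ in range(n):
        sample = []
        for i, t in enumerate(tokens):
            if i in anchor or t not in ONTOLOGY:
                sample.append(t)
            else:
                sample.append(rng.choice([t] + ONTOLOGY[t]))
        samples.append(sample)
    return samples

def anchor_precision(tokens, anchor, n=200):
    """Fraction of perturbed samples on which the prediction is unchanged."""
    target = predict(tokens)
    return sum(predict(s) == target for s in perturb(tokens, anchor, n)) / n

def find_anchor(tokens, threshold=0.95, max_size=2):
    """Greedy search (not the paper's learnable variant) for a smallest
    set of word positions whose fixation keeps the prediction stable."""
    for size in range(1, max_size + 1):
        for anchor in itertools.combinations(range(len(tokens)), size):
            if anchor_precision(tokens, set(anchor)) >= threshold:
                return [tokens[i] for i in anchor]
    return list(tokens)  # fall back to the full text

print(find_anchor("good movie plot".split()))  # the word driving the prediction
```

Here the anchor search recovers "good", because ontology-guided swaps of the other words never flip the toy model's prediction, while swapping "good" for a neutral related word can. The paper replaces this brute-force search with a learnable anchor component and adds regulations for combining anchors with learned interpretable representations.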
Related papers
- Interpreting Inflammation Prediction Model via Tag-based Cohort Explanation [5.356481722174994]
We propose a novel framework for identifying cohorts within a dataset based on local feature importance scores.
We evaluate our framework on a food-based inflammation prediction model and demonstrate that the framework can generate reliable explanations that match domain knowledge.
arXiv Detail & Related papers (2024-10-17T23:22:59Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate such limitations by leveraging improved explanations for Transformers.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Unsupervised Interpretable Basis Extraction for Concept-Based Visual Explanations [53.973055975918655]
We show that intermediate-layer representations become more interpretable when transformed into the bases extracted with our method.
We compare the bases extracted with our method against those derived with a supervised approach and find that, in one respect, the proposed unsupervised approach has a strength where the supervised one is limited; we also give potential directions for future research.
arXiv Detail & Related papers (2023-03-19T00:37:19Z)
- Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond [49.93153180169685]
We introduce and clarify two basic concepts, interpretations and interpretability, that are often confused with each other.
We elaborate on the design of several recent interpretation algorithms from different perspectives by proposing a new taxonomy.
We summarize the existing work in evaluating models' interpretability using "trustworthy" interpretation algorithms.
arXiv Detail & Related papers (2021-03-19T08:40:30Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Probably Approximately Correct Explanations of Machine Learning Models via Syntax-Guided Synthesis [6.624726878647541]
We propose a novel approach to understanding the decision-making of complex machine learning models (e.g., deep neural networks) using a combination of probably approximately correct (PAC) learning and a logic inference methodology called syntax-guided synthesis (SyGuS).
We prove that our framework produces explanations that, with high probability, make only a few errors, and show empirically that it is effective in generating small, human-interpretable explanations.
arXiv Detail & Related papers (2020-09-18T12:10:49Z)
- How Far are We from Effective Context Modeling? An Exploratory Study on Semantic Parsing in Context [59.13515950353125]
We present a grammar-based decoding semantic parser and adapt typical context modeling methods on top of it.
We evaluate 13 context modeling methods on two large cross-domain datasets, and our best model achieves state-of-the-art performance.
arXiv Detail & Related papers (2020-02-03T11:28:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.