Interpretability is not Explainability: New Quantitative XAI Approach
with a focus on Recommender Systems in Education
- URL: http://arxiv.org/abs/2311.02078v1
- Date: Mon, 18 Sep 2023 11:59:02 GMT
- Title: Interpretability is not Explainability: New Quantitative XAI Approach
with a focus on Recommender Systems in Education
- Authors: Riccardo Porcedda
- Abstract summary: We propose a novel taxonomy that provides a clear and unambiguous understanding of the key concepts and relationships in XAI.
Our approach is rooted in a systematic analysis of existing definitions and frameworks.
This comprehensive taxonomy aims to establish a shared vocabulary for future research.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of eXplainable Artificial Intelligence faces challenges due to the
absence of a widely accepted taxonomy that facilitates the quantitative
evaluation of explainability in Machine Learning algorithms. In this paper, we
propose a novel taxonomy that addresses the current gap in the literature by
providing a clear and unambiguous understanding of the key concepts and
relationships in XAI. Our approach is rooted in a systematic analysis of
existing definitions and frameworks, with a focus on transparency,
interpretability, completeness, complexity and understandability as essential
dimensions of explainability. This comprehensive taxonomy aims to establish a
shared vocabulary for future research. To demonstrate the utility of our
proposed taxonomy, we examine a case study of a Recommender System designed to
curate and recommend the most suitable online resources from MERLOT. By
employing the SHAP package, we quantify and enhance the explainability of the
RS within the context of our newly developed taxonomy.
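The abstract names the SHAP package but not the underlying model, so the following is a minimal sketch of how SHAP values could quantify feature attributions for a recommender's relevance model. The tree model, feature names, and synthetic data are hypothetical stand-ins, not details from the paper.

```python
# Minimal sketch of SHAP-based explainability for a recommender's
# relevance model; features and data are hypothetical stand-ins.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical training data: resource features -> relevance score.
rng = np.random.default_rng(0)
X = pd.DataFrame({
    "peer_rating": rng.uniform(1, 5, 500),
    "difficulty": rng.uniform(0, 1, 500),
    "topic_match": rng.uniform(0, 1, 500),
})
y = 2.0 * X["topic_match"] + X["peer_rating"] - X["difficulty"] + rng.normal(0, 0.1, 500)

model = GradientBoostingRegressor().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean |SHAP| per feature: one simple quantitative handle on attribution.
importance = np.abs(shap_values).mean(axis=0)
for name, imp in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```

Aggregate statistics such as the mean absolute SHAP value per feature are one plausible way to feed quantitative dimensions like complexity and completeness, though the paper's exact measures may differ.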
Related papers
- GIVE: Structured Reasoning with Knowledge Graph Inspired Veracity Extrapolation [108.2008975785364]
Graph Inspired Veracity Extrapolation (GIVE) is a novel reasoning framework that integrates the parametric and non-parametric memories.
Our method facilitates a more logical and step-wise reasoning approach akin to experts' problem-solving, rather than gold answer retrieval.
arXiv Detail & Related papers (2024-10-11T03:05:06Z)
- SCENE: Evaluating Explainable AI Techniques Using Soft Counterfactuals [0.0]
This paper introduces SCENE (Soft Counterfactual Evaluation for Natural language Explainability), a novel evaluation method.
By focusing on token-based substitutions, SCENE creates contextually appropriate and semantically meaningful Soft Counterfactuals.
SCENE provides valuable insights into the strengths and limitations of various XAI techniques; a toy token-substitution sketch follows this entry.
arXiv Detail & Related papers (2024-08-08T16:36:24Z)
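The SCENE entry above hinges on token-based substitutions. As a rough illustration (not the authors' implementation), masked-language-model infilling can generate fluent single-token substitutes; the model choice and example sentence below are arbitrary.

```python
# Rough illustration of token-substitution counterfactuals via masked-LM
# infilling; approximates the idea, not SCENE's actual implementation.
from transformers import pipeline

fill = pipeline("fill-mask", model="distilroberta-base")

sentence = "The movie was absolutely wonderful."
target = "wonderful"
masked = sentence.replace(target, fill.tokenizer.mask_token, 1)

# Each candidate keeps the sentence fluent while changing one token,
# yielding "soft" counterfactual inputs for probing an explainer.
for cand in fill(masked, top_k=5):
    print(f"{cand['score']:.3f}  {cand['sequence']}")
```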
- FecTek: Enhancing Term Weight in Lexicon-Based Retrieval with Feature Context and Term-level Knowledge [54.61068946420894]
We introduce FecTek, which adds FEature Context and TErm-level Knowledge modules to lexicon-based retrieval.
The Feature Context Module (FCM) enriches the feature-context representation of each term weight.
We also develop a Term-level Knowledge Guidance Module (TKGM) that uses term-level knowledge to guide the modeling of term weights.
arXiv Detail & Related papers (2024-04-18T12:58:36Z)
- Sparsity-Guided Holistic Explanation for LLMs with Interpretable Inference-Time Intervention [53.896974148579346]
Large Language Models (LLMs) have achieved unprecedented breakthroughs in various natural language processing domains.
The enigmatic "black-box" nature of LLMs remains a significant challenge for interpretability, hampering transparent and accountable applications.
We propose a novel methodology anchored in sparsity-guided techniques, aiming to provide a holistic interpretation of LLMs; a generic intervention sketch follows this entry.
arXiv Detail & Related papers (2023-12-22T19:55:58Z)
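The entry above mentions sparsity-guided, inference-time intervention. The sketch below is a generic stand-in, assuming a simple top-k activation mask applied via a PyTorch forward hook; the layer, k, and model are illustrative, not the paper's procedure.

```python
# Generic sketch of a sparsity-guided inference-time intervention:
# a forward hook that keeps only the top-k activations of one layer.
# Illustrative only; not the paper's actual method.
import torch
import torch.nn as nn

def topk_sparsify_hook(k: int):
    def hook(module, inputs, output):
        flat = output.abs().flatten(1)
        # Threshold = k-th largest |activation| per sample.
        thresh = flat.topk(k, dim=1).values[:, -1, None]
        mask = (flat >= thresh).view_as(output)
        return output * mask  # returned value replaces the layer output
    return hook

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
handle = model[1].register_forward_hook(topk_sparsify_hook(k=8))

x = torch.randn(2, 16)
print(model(x))  # predictions with only 8 active hidden units per sample
handle.remove()
```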
- Concept-based Explainable Artificial Intelligence: A Survey [16.580100294489508]
Several recent works have disputed the use of raw features to provide explanations.
A unified categorization and precise field definition are still missing.
This paper fills the gap by offering a thorough review of C-XAI approaches.
arXiv Detail & Related papers (2023-12-20T11:27:21Z)
- AS-XAI: Self-supervised Automatic Semantic Interpretation for CNN [5.42467030980398]
We propose a self-supervised automatic semantic interpretable artificial intelligence (AS-XAI) framework.
It utilizes transparent embedding semantic extraction spaces and row-centered principal component analysis (PCA) for global semantic interpretation of model decisions.
The proposed approach offers broad, fine-grained practical applications, including shared semantic interpretation under out-of-distribution categories; a toy row-centered PCA sketch follows this entry.
arXiv Detail & Related papers (2023-12-02T10:06:54Z)
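AS-XAI's row-centered PCA can be illustrated on a toy feature matrix: center each row (sample) to zero mean instead of each column before the SVD. The matrix size and component count below are hypothetical.

```python
# Toy illustration of row-centered PCA: each row (sample) is centered,
# unlike the column-wise centering of standard PCA.
import numpy as np

rng = np.random.default_rng(0)
F = rng.normal(size=(100, 64))             # hypothetical CNN feature matrix

F_rc = F - F.mean(axis=1, keepdims=True)   # row-centering

# PCA via SVD of the row-centered matrix.
U, S, Vt = np.linalg.svd(F_rc, full_matrices=False)
components = Vt[:5]                        # top-5 semantic directions
scores = F_rc @ components.T               # per-sample coordinates

explained = (S[:5] ** 2) / (S ** 2).sum()
print("variance captured by top-5 components:", explained.round(3))
```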
- Towards Verifiable Generation: A Benchmark for Knowledge-aware Language Model Attribution [48.86322922826514]
This paper defines a new task of Knowledge-aware Language Model Attribution (KaLMA).
First, we extend attribution source from unstructured texts to Knowledge Graph (KG), whose rich structures benefit both the attribution performance and working scenarios.
Second, we propose a new "Conscious Incompetence" setting considering the incomplete knowledge repository.
Third, we propose a comprehensive automatic evaluation metric encompassing text quality, citation quality, and text citation alignment.
arXiv Detail & Related papers (2023-10-09T11:45:59Z)
- Robust Saliency-Aware Distillation for Few-shot Fine-grained Visual Recognition [57.08108545219043]
Recognizing novel sub-categories with scarce samples is an essential and challenging research topic in computer vision.
Existing literature addresses this challenge by employing local-based representation approaches.
This article proposes a novel model, Robust Saliency-aware Distillation (RSaD), for few-shot fine-grained visual recognition.
arXiv Detail & Related papers (2023-05-12T00:13:17Z)
- Towards the Linear Algebra Based Taxonomy of XAI Explanations [0.0]
Methods of Explainable Artificial Intelligence (XAI) were developed to answer the question of why a certain prediction or estimation was made.
XAI taxonomies proposed in the literature mainly concentrate on distinguishing explanations with respect to how they involve the human agent.
This paper proposes a simple linear algebra-based taxonomy for local explanations; a toy gradient-based instance follows this entry.
arXiv Detail & Related papers (2023-01-30T18:21:27Z)
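As a toy instance of viewing local explanations through linear algebra (an illustration, not the paper's taxonomy), the gradient at a point is the coefficient vector of the model's best local linear approximation; the model below is a toy stand-in.

```python
# Toy linear-algebra view of a local explanation: the gradient g at x
# gives the local linear approximation f(x + d) ~ f(x) + g.d,
# and g serves as the explanation vector.
import torch

def f(x):  # toy black-box model
    return torch.tanh(x[0] * x[1]) + x[2] ** 2

x = torch.tensor([0.5, -1.0, 2.0], requires_grad=True)
y = f(x)
y.backward()
g = x.grad  # local explanation vector

d = torch.tensor([0.01, 0.02, -0.01])
linear_pred = y.item() + (g @ d).item()
print("true f(x+d):  ", f((x + d).detach()).item())
print("linear approx:", linear_pred)
```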
- Integrating Prior Knowledge in Post-hoc Explanations [3.6066164404432883]
Post-hoc interpretability methods aim to explain to a user the predictions of a trained decision model.
We propose to define a cost function that explicitly integrates prior knowledge into the interpretability objectives.
We propose a new interpretability method called Knowledge Integration in Counterfactual Explanation (KICE) to optimize it; a generic sketch of such an objective follows this entry.
arXiv Detail & Related papers (2022-04-25T13:09:53Z)
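The KICE entry above describes a cost function that folds prior knowledge into the counterfactual objective. The sketch below is a generic version: distance to the original instance, a target-prediction term, and a hypothetical "immutable feature" penalty standing in for prior knowledge; the weights and model are arbitrary, not the paper's exact objective.

```python
# Generic counterfactual search with a prior-knowledge penalty, in the
# spirit of objectives like KICE's; terms and weights are illustrative.
import torch

w = torch.tensor([1.5, -2.0, 0.5])           # toy linear classifier
def prob_positive(x):
    return torch.sigmoid(w @ x)

x0 = torch.tensor([0.2, 0.8, -0.1])          # instance to explain
immutable = torch.tensor([0.0, 1.0, 0.0])    # prior: feature 1 must not move

x_cf = x0.clone().requires_grad_(True)
opt = torch.optim.Adam([x_cf], lr=0.05)
for _ in range(300):
    opt.zero_grad()
    loss = ((x_cf - x0) ** 2).sum()                              # stay close
    loss = loss + 5.0 * (1.0 - prob_positive(x_cf)) ** 2         # flip prediction
    loss = loss + 10.0 * ((x_cf - x0) * immutable).pow(2).sum()  # prior knowledge
    loss.backward()
    opt.step()

print("counterfactual:", x_cf.detach(), " p+ =", prob_positive(x_cf).item())
```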
- Knowledge Graph Augmented Network Towards Multiview Representation Learning for Aspect-based Sentiment Analysis [96.53859361560505]
We propose a knowledge graph augmented network (KGAN) that incorporates external knowledge together with explicit syntactic and contextual information.
KGAN captures the sentiment feature representations from multiple perspectives, i.e., context-, syntax- and knowledge-based.
Experiments on three popular ABSA benchmarks demonstrate the effectiveness and robustness of our KGAN.
arXiv Detail & Related papers (2022-01-13T08:25:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.