A Survey of Explainable Knowledge Tracing
- URL: http://arxiv.org/abs/2403.07279v1
- Date: Tue, 12 Mar 2024 03:17:59 GMT
- Title: A Survey of Explainable Knowledge Tracing
- Authors: Yanhong Bai, Jiabao Zhao, Tingjiang Wei, Qing Cai, Liang He
- Abstract summary: This paper thoroughly analyzes the interpretability of KT algorithms.
Current evaluation methods for explainable knowledge tracing are lacking.
This paper offers some insights into evaluation methods from the perspective of educational stakeholders.
- Score: 14.472784840283099
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the long-term accumulation of high-quality educational data, artificial
intelligence has shown excellent performance in knowledge tracing. However, the lack of
interpretability and transparency in some algorithms reduces stakeholder trust and the
acceptance of intelligent decisions. Therefore, algorithms need to achieve high accuracy
while allowing users to understand their internal operating mechanisms, and they must
provide reliable explanations for their decisions. This paper thoroughly analyzes the
interpretability of knowledge tracing (KT) algorithms. First, the concepts and common
methods of explainable artificial intelligence and knowledge tracing are introduced.
Next, explainable knowledge tracing models are classified into two categories: transparent
models and black-box models. Then, the interpretable methods used are reviewed from three
perspectives: ante-hoc interpretable methods, post-hoc interpretable methods, and other
dimensions. Notably, current evaluation methods for explainable knowledge tracing are
lacking. Hence, contrast and deletion experiments are conducted to explain the prediction
results of the deep knowledge tracing model on the ASSISTment2009 dataset using three XAI
methods. Moreover, this paper offers some insights into evaluation methods from the
perspective of educational stakeholders. This paper provides a detailed and comprehensive
review of research on explainable knowledge tracing, aiming to offer a basis and
inspiration for researchers interested in the interpretability of knowledge tracing.
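To make the deletion experiment mentioned in the abstract concrete, the following is a minimal sketch, not the authors' implementation: it assumes a hypothetical predict_proba function wrapping a trained DKT model and a list of per-interaction importance scores produced by some XAI method, and it records how the predicted probability of a correct next answer changes as the most important past interactions are removed. Under these assumptions, a faithful explanation should produce a steep early drop in the resulting curve.

```python
# Minimal sketch of a deletion experiment for a knowledge tracing model.
# Assumptions (hypothetical, not from the paper): `predict_proba` wraps a
# trained DKT model and returns P(correct) for the next question given a
# list of past (question_id, correct) interactions; `importance` holds one
# attribution score per interaction, produced by some XAI method.
from typing import Callable, List, Tuple

Interaction = Tuple[int, int]  # (question_id, correct_flag)

def deletion_curve(
    predict_proba: Callable[[List[Interaction]], float],
    sequence: List[Interaction],
    importance: List[float],
) -> List[float]:
    """Delete interactions from most to least important and record the
    model's predicted probability for the next response after each step."""
    order = sorted(range(len(sequence)), key=lambda i: importance[i], reverse=True)
    kept = set(range(len(sequence)))
    probs = [predict_proba([sequence[i] for i in sorted(kept)])]
    for idx in order:
        kept.remove(idx)
        probs.append(predict_proba([sequence[i] for i in sorted(kept)]))
    return probs

if __name__ == "__main__":
    # Dummy stand-in for a trained DKT model: the predicted probability of a
    # correct next answer grows with the number of correct past responses.
    def dummy_model(seq: List[Interaction]) -> float:
        return 0.5 + 0.5 * sum(c for _, c in seq) / (len(seq) + 1)

    history = [(12, 1), (7, 0), (12, 1), (3, 1)]
    scores = [0.9, 0.1, 0.7, 0.2]  # e.g., attributions from a saliency-style method
    print(deletion_curve(dummy_model, history, scores))
```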
Related papers
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Evaluation Metrics for Symbolic Knowledge Extracted from Machine Learning Black Boxes: A Discussion Paper [0.0]
How to assess the level of readability of the extracted knowledge quantitatively is still an open issue.
Finding such a metric would be the key, for instance, to enable automatic comparison between a set of different knowledge representations.
arXiv Detail & Related papers (2022-11-01T03:04:25Z)
- A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z)
- Developing a Fidelity Evaluation Approach for Interpretable Machine Learning [2.2448567386846916]
Explainable AI (XAI) methods are used to improve the interpretability of complex models.
In particular, methods to evaluate the fidelity of the explanation to the underlying black box require further development.
Our evaluations suggest that the internal mechanism of the underlying predictive model, the internal mechanism of the explainable method used, and model and data complexity all affect explanation fidelity.
arXiv Detail & Related papers (2021-06-16T00:21:16Z)
- On the Objective Evaluation of Post Hoc Explainers [10.981508361941335]
Modern trends in machine learning research have led to algorithms that are increasingly intricate to the degree that they are considered to be black boxes.
In an effort to reduce the opacity of decisions, methods have been proposed to construe the inner workings of such models in a human-comprehensible manner.
We propose a framework for the evaluation of post hoc explainers on ground truth that is directly derived from the additive structure of a model.
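As a rough illustration of this idea (a sketch of the general approach, not the framework proposed in the paper), one can use a linear model whose additive structure yields exact per-feature contributions and then score a post hoc explainer by how well its attributions agree with them; the linear ground truth, the noisy stand-in explainer output, and the rank-agreement metric below are all assumptions made for the example.

```python
# Sketch: ground-truth attributions from an additive (here, linear) model,
# compared against the scores returned by some post hoc explainer.
# The stand-in explainer output and the ranking-based agreement metric are
# illustrative assumptions, not the evaluation framework from the paper.
import numpy as np

def ground_truth_attributions(weights: np.ndarray, x: np.ndarray, baseline: np.ndarray) -> np.ndarray:
    # For an additive (linear) model f(x) = sum_i w_i * x_i, the exact
    # contribution of feature i relative to a baseline is w_i * (x_i - b_i).
    return weights * (x - baseline)

def rank_agreement(truth: np.ndarray, estimate: np.ndarray) -> float:
    # Spearman-style correlation between the importance rankings implied by
    # the ground-truth and estimated attributions.
    t = np.argsort(np.argsort(-np.abs(truth)))
    e = np.argsort(np.argsort(-np.abs(estimate)))
    return float(np.corrcoef(t, e)[0, 1])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w, x, b = rng.normal(size=5), rng.normal(size=5), np.zeros(5)
    truth = ground_truth_attributions(w, x, b)
    estimate = truth + rng.normal(scale=0.1, size=5)  # stand-in explainer output
    print("rank agreement:", rank_agreement(truth, estimate))
```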
arXiv Detail & Related papers (2021-06-15T19:06:51Z)
- Evaluating Explainable Artificial Intelligence Methods for Multi-label Deep Learning Classification Tasks in Remote Sensing [0.0]
We develop deep learning models with state-of-the-art performance on benchmark datasets.
Ten XAI methods were employed to understand and interpret the models' predictions.
Occlusion, Grad-CAM and Lime were the most interpretable and reliable XAI methods.
arXiv Detail & Related papers (2021-04-03T11:13:14Z)
- Interpretable Deep Learning: Interpretations, Interpretability, Trustworthiness, and Beyond [49.93153180169685]
We introduce and clarify two basic concepts, interpretations and interpretability, that people often confuse.
We elaborate on the design of several recent interpretation algorithms from different perspectives by proposing a new taxonomy.
We summarize the existing work in evaluating models' interpretability using "trustworthy" interpretation algorithms.
arXiv Detail & Related papers (2021-03-19T08:40:30Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions to find relations between a model's performance and the agreement of its rationales with human ones.
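A minimal sketch of how such saliency-versus-human agreement could be computed is shown below; the top-k thresholding of saliency scores and the token-level F1 against a binary human rationale mask are illustrative choices, not necessarily the diagnostic properties used in the paper.

```python
# Sketch: agreement between a model's saliency scores and a human rationale.
# Thresholding the saliency scores to the top-k tokens and computing token-level
# F1 against the binary human mask is an illustrative agreement measure.
from typing import List

def saliency_f1(saliency: List[float], human_mask: List[int], top_k: int) -> float:
    """F1 between the top-k most salient tokens and the tokens humans marked."""
    top = sorted(range(len(saliency)), key=lambda i: saliency[i], reverse=True)[:top_k]
    predicted = set(top)
    gold = {i for i, m in enumerate(human_mask) if m == 1}
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(predicted), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

# Example: saliency from some explainability technique vs. a human annotation.
print(saliency_f1([0.9, 0.2, 0.7, 0.1, 0.4], [1, 0, 1, 0, 0], top_k=2))  # -> 1.0
```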
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works aimed at attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- SIDU: Similarity Difference and Uniqueness Method for Explainable AI [21.94600656231124]
This paper presents a novel visual explanation method for deep learning networks in the form of a saliency map.
The proposed method produces promising visual explanations that can gain greater trust from human experts.
arXiv Detail & Related papers (2020-06-04T20:33:40Z)