Dynamic Knowledge embedding and tracing
- URL: http://arxiv.org/abs/2005.09109v1
- Date: Mon, 18 May 2020 21:56:42 GMT
- Title: Dynamic Knowledge embedding and tracing
- Authors: Liangbei Xu, Mark A. Davenport
- Abstract summary: We propose a novel approach to knowledge tracing that combines techniques from matrix factorization with recent progress in recurrent neural networks (RNNs)
The proposed DynEmb framework enables the tracking of student knowledge even without the concept/skill tag information.
- Score: 18.717482292051788
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The goal of knowledge tracing is to track the state of a student's knowledge
as it evolves over time. This plays a fundamental role in understanding the
learning process and is a key task in the development of an intelligent
tutoring system. In this paper we propose a novel approach to knowledge tracing
that combines techniques from matrix factorization with recent progress in
recurrent neural networks (RNNs) to effectively track the state of a student's
knowledge. The proposed \emph{DynEmb} framework enables the tracking of student
knowledge even without the concept/skill tag information that other knowledge
tracing models require while simultaneously achieving superior performance. We
provide experimental evaluations demonstrating that DynEmb achieves improved
performance compared to baselines and illustrating the robustness and
effectiveness of the proposed framework. We also evaluate our approach using
several real-world datasets showing that the proposed model outperforms the
previous state-of-the-art. These results suggest that combining embedding
models with sequential models such as RNNs is a promising new direction for
knowledge tracing.
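To make the core idea concrete, the sketch below shows one plausible way to combine a matrix-factorization-style question embedding with an RNN that tracks a latent knowledge state, as the abstract describes. The class name DynEmbSketch, the PyTorch framing, and all dimensions are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal, hypothetical sketch of the embedding + RNN idea from the abstract.
# This is NOT the authors' code; names and sizes are assumptions for illustration.
import torch
import torch.nn as nn

class DynEmbSketch(nn.Module):
    def __init__(self, num_questions, emb_dim=64, hidden_dim=128):
        super().__init__()
        # Question embeddings play the role of factorized item factors,
        # so no concept/skill tags are required.
        self.q_emb = nn.Embedding(num_questions, emb_dim)
        # Each time step is the question embedding concatenated with the
        # binary correctness of the student's response at that step.
        self.rnn = nn.LSTM(emb_dim + 1, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim + emb_dim, 1)

    def forward(self, past_q, past_correct, next_q):
        # past_q: (batch, seq) question ids; past_correct: (batch, seq) in {0, 1}
        # next_q: (batch,) id of the question whose outcome we want to predict
        x = torch.cat([self.q_emb(past_q),
                       past_correct.unsqueeze(-1).float()], dim=-1)
        _, (h, _) = self.rnn(x)            # final hidden state = latent knowledge state
        state = h.squeeze(0)               # (batch, hidden_dim)
        logits = self.out(torch.cat([state, self.q_emb(next_q)], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)   # P(correct answer on next_q)

# Usage sketch:
# model = DynEmbSketch(num_questions=1000)
# p = model(past_q, past_correct, next_q)  # train with binary cross-entropy
```

Training such a model would minimize binary cross-entropy between the predicted probability and the observed correctness, which is the standard knowledge-tracing objective.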
Related papers
- Deep Learning Through A Telescoping Lens: A Simple Model Provides Empirical Insights On Grokking, Gradient Boosting & Beyond [61.18736646013446]
In pursuit of a deeper understanding of deep learning's surprising behaviors, we investigate the utility of a simple yet accurate model of a trained neural network.
Across three case studies, we illustrate how it can be applied to derive new empirical insights on a diverse range of prominent phenomena.
arXiv Detail & Related papers (2024-10-31T22:54:34Z) - Temporal Graph Memory Networks For Knowledge Tracing [0.40964539027092906]
We propose a novel method that jointly models the relational and temporal dynamics of the knowledge state using a deep temporal graph memory network.
We also propose a generic technique for representing a student's forgetting behavior using temporal decay constraints on the graph memory module.
arXiv Detail & Related papers (2024-09-23T07:47:02Z) - Leveraging Pedagogical Theories to Understand Student Learning Process with Graph-based Reasonable Knowledge Tracing [11.082908318943248]
We introduce GRKT, a graph-based reasonable knowledge tracing method to address these issues.
We propose a fine-grained, psychologically grounded three-stage modeling process: knowledge retrieval, memory strengthening, and knowledge learning/forgetting.
arXiv Detail & Related papers (2024-06-07T10:14:30Z) - A Novel Neural-symbolic System under Statistical Relational Learning [50.747658038910565]
We propose a general bi-level probabilistic graphical reasoning framework called GBPGR.
In GBPGR, the results of symbolic reasoning are utilized to refine and correct the predictions made by the deep learning models.
Our approach achieves high performance and exhibits effective generalization in both transductive and inductive tasks.
arXiv Detail & Related papers (2023-09-16T09:15:37Z) - Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z) - A Unified Continuous Learning Framework for Multi-modal Knowledge Discovery and Pre-training [73.7507857547549]
We propose to unify knowledge discovery and multi-modal pre-training in a continuous learning framework.
For knowledge discovery, a pre-trained model is used to identify cross-modal links on a graph.
For model pre-training, the knowledge graph is used as the external knowledge to guide the model updating.
arXiv Detail & Related papers (2022-06-11T16:05:06Z) - Reinforcement Learning based Path Exploration for Sequential Explainable Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL)
TMER-RL uses reinforced item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolution on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z) - Mixture-of-Variational-Experts for Continual Learning [0.0]
We propose an optimality principle that facilitates a trade-off between learning and forgetting.
We propose a neural network layer for continual learning, called Mixture-of-Variational-Experts (MoVE)
Our experiments on variants of the MNIST and CIFAR10 datasets demonstrate the competitive performance of MoVE layers.
arXiv Detail & Related papers (2021-10-25T06:32:06Z) - Deep Graph Memory Networks for Forgetting-Robust Knowledge Tracing [5.648636668261282]
We propose a novel knowledge tracing model, namely Deep Graph Memory Network (DGMN)
In this model, we incorporate a forget gating mechanism into an attention memory structure in order to capture forgetting behaviours.
This model has the capability of learning relationships between latent concepts from a dynamic latent concept graph.
arXiv Detail & Related papers (2021-08-18T12:04:10Z) - Deep Knowledge Tracing with Learning Curves [0.9088303226909278]
We propose a Convolution-Augmented Knowledge Tracing (CAKT) model in this paper.
The model employs three-dimensional convolutional neural networks to explicitly learn a student's recent experience in applying the same knowledge concept as the one in the next question.
CAKT achieves new state-of-the-art performance in predicting students' responses compared with existing models.
arXiv Detail & Related papers (2020-07-26T15:24:51Z) - A Dependency Syntactic Knowledge Augmented Interactive Architecture for End-to-End Aspect-based Sentiment Analysis [73.74885246830611]
We propose a novel dependency syntactic knowledge augmented interactive architecture with multi-task learning for end-to-end ABSA.
This model is capable of fully exploiting the syntactic knowledge (dependency relations and types) by leveraging a well-designed Dependency Relation Embedded Graph Convolutional Network (DreGcn)
Extensive experimental results on three benchmark datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2020-04-04T14:59:32Z)