Modeling Knowledge Acquisition from Multiple Learning Resource Types
- URL: http://arxiv.org/abs/2006.13390v2
- Date: Tue, 30 Jun 2020 21:36:50 GMT
- Title: Modeling Knowledge Acquisition from Multiple Learning Resource Types
- Authors: Siqian Zhao, Chunpai Wang, Shaghayegh Sahebi
- Abstract summary: Students acquire knowledge as they interact with a variety of learning materials, such as video lectures, problems, and discussions.
Current student knowledge modeling techniques mostly rely on one type of learning material, mainly problems, to model student knowledge growth.
We propose a student knowledge model that can capture knowledge growth as a result of learning from a diverse set of learning resource types.
- Score: 2.9778695679660188
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Students acquire knowledge as they interact with a variety of learning
materials, such as video lectures, problems, and discussions. Modeling student
knowledge at each point during their learning period and understanding the
contribution of each learning material to student knowledge are essential for
detecting students' knowledge gaps and recommending learning materials to them.
Current student knowledge modeling techniques mostly rely on one type of
learning material, mainly problems, to model student knowledge growth. These
approaches ignore the fact that students also learn from other types of
material. In this paper, we propose a student knowledge model that can capture
knowledge growth as a result of learning from a diverse set of learning
resource types while unveiling the association between the learning materials
of different types. Our multi-view knowledge model (MVKM) incorporates a
flexible knowledge increase objective on top of a multi-view tensor
factorization to capture occasional forgetting while representing student
knowledge and learning material concepts in a lower-dimensional latent space.
We evaluate our model in different experiments to show that it can accurately
predict students' future performance, differentiate between knowledge gain in
different student groups and concepts, and unveil hidden similarities across
learning materials of different types.
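The factorization idea behind MVKM can be sketched in a much-simplified, single-view form: a low-rank fit of a student-by-attempt-by-material performance tensor, with a soft hinge penalty on knowledge decrease between consecutive attempts so that knowledge mostly grows while occasional forgetting is still permitted. Everything below (tensor shapes, learning rate, penalty weight, synthetic data) is an illustrative assumption, not the paper's actual model or hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_times, n_items, n_concepts = 20, 5, 15, 3

# Hypothetical synthetic performance tensor (students x attempts x problems),
# entries in [0, 1]; in the multi-view setting there is one such view per
# learning resource type, sharing the student-knowledge factors.
Y = rng.random((n_students, n_times, n_items))

# Latent factors: K[s, t, c] is student knowledge over time,
# Q[i, c] maps each learning material to latent concepts.
K = rng.random((n_students, n_times, n_concepts)) * 0.1
Q = rng.random((n_items, n_concepts)) * 0.1

lr, lam = 0.2, 0.05  # illustrative step size and forgetting-penalty weight

for _ in range(500):
    err = K @ Q.T - Y                    # prediction error, (students, times, items)
    grad_K = (err @ Q) / n_items         # mean reconstruction gradient w.r.t. K
    grad_Q = np.einsum('sti,stc->ic', err, K) / (n_students * n_times)
    # Soft hinge on knowledge *decreases* between consecutive attempts:
    # it discourages, but does not forbid, occasional forgetting.
    drop = (K[:, :-1, :] - K[:, 1:, :] > 0).astype(float)
    grad_K[:, :-1, :] += lam * drop
    grad_K[:, 1:, :] -= lam * drop
    K -= lr * grad_K
    Q -= lr * grad_Q

rmse = float(np.sqrt(np.mean((K @ Q.T - Y) ** 2)))
print(f"reconstruction RMSE: {rmse:.3f}")
```

A rank-3 fit of random data will not reconstruct perfectly; the point of the sketch is only that the hinge term encodes "knowledge should mostly grow" directly in the objective, alongside the usual reconstruction loss.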
Related papers
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Recognizing Unseen Objects via Multimodal Intensive Knowledge Graph Propagation [68.13453771001522]
We propose a multimodal intensive ZSL framework that matches regions of images with corresponding semantic embeddings.
We conduct extensive experiments and evaluate our model on large-scale real-world data.
arXiv Detail & Related papers (2023-06-14T13:07:48Z)
- Transition-Aware Multi-Activity Knowledge Tracing [2.9778695679660188]
Knowledge tracing aims to model student knowledge state given the student's sequence of learning activities.
Current KT solutions are not suited to modeling student learning from non-assessed learning activities.
We propose Transition-Aware Multi-activity Knowledge Tracing (TAMKOT).
arXiv Detail & Related papers (2023-01-26T21:49:24Z)
- The KITMUS Test: Evaluating Knowledge Integration from Multiple Sources in Natural Language Understanding Systems [87.3207729953778]
We evaluate state-of-the-art coreference resolution models on our dataset.
Several models struggle to reason on-the-fly over knowledge observed both at pretrain time and at inference time.
Still, even the best performing models seem to have difficulties with reliably integrating knowledge presented only at inference time.
arXiv Detail & Related papers (2022-12-15T23:26:54Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Dynamic Diagnosis of the Progress and Shortcomings of Student Learning using Machine Learning based on Cognitive, Social, and Emotional Features [0.06999740786886534]
Student diversity can be challenging as it adds variability in the way in which students learn and progress over time.
A single teaching approach is likely to be ineffective and result in students not meeting their potential.
This paper discusses a novel methodology based on data analytics and Machine Learning to measure and causally diagnose the progress and shortcomings of student learning.
arXiv Detail & Related papers (2022-04-13T21:14:58Z)
- Learning Data Teaching Strategies Via Knowledge Tracing [5.648636668261282]
We propose a novel method, called Knowledge Augmented Data Teaching (KADT), to optimize a data teaching strategy for a student model.
The KADT method incorporates a knowledge tracing model to dynamically capture the knowledge progress of a student model in terms of latent learning concepts.
We have evaluated the performance of the KADT method on four different machine learning tasks including knowledge tracing, sentiment analysis, movie recommendation, and image classification.
arXiv Detail & Related papers (2021-11-13T10:10:48Z)
- A Network Science Perspective to Personalized Learning [0.0]
We examine how learning objectives can be achieved through a learning platform that offers content choices and multiple modalities of engagement to support self-paced learning.
This framework shifts the focus from teaching experiences to learning experiences by providing the learner with engagement and content choices supported by a network of knowledge.
arXiv Detail & Related papers (2021-11-02T01:50:01Z)
- Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
Experiments on text classification show promising results.
We import the knowledge from multiple models to the knowledge base, from which the fused knowledge is exported back to a single model.
arXiv Detail & Related papers (2020-12-25T12:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.