Leveraging Skill-to-Skill Supervision for Knowledge Tracing
- URL: http://arxiv.org/abs/2306.06841v1
- Date: Mon, 12 Jun 2023 03:23:22 GMT
- Title: Leveraging Skill-to-Skill Supervision for Knowledge Tracing
- Authors: Hyeondey Kim, Jinwoo Nam, Minjae Lee, Yun Jegal, Kyungwoo Song
- Abstract summary: Knowledge tracing plays a pivotal role in intelligent tutoring systems.
Recent advances in knowledge tracing models have enabled better exploitation of problem solving history.
Knowledge tracing algorithms that incorporate knowledge directly are important in settings with limited data or cold starts.
- Score: 13.753990664747265
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Knowledge tracing plays a pivotal role in intelligent tutoring systems. This task aims to predict the probability that students will answer specific questions correctly. To do so, knowledge tracing systems should trace the knowledge state of students by utilizing their problem-solving history and knowledge about the problems. Recent advances in knowledge tracing models have enabled better exploitation of problem-solving history. However, knowledge about the problems themselves has not been studied as thoroughly as students' answering histories. Knowledge tracing algorithms that incorporate such knowledge directly are important in settings with limited data or cold starts. Therefore, we consider the problem of utilizing skill-to-skill relations in knowledge tracing. In this work, we introduce expert-labeled skill-to-skill relationships. Moreover, we provide novel methods for constructing a knowledge-tracing model that leverages human experts' insight into the relationships between skills. The results of an extensive experimental analysis show that our method outperformed a baseline Transformer model. Furthermore, we found that the extent of our model's superiority was greater in situations with limited data, which enables a smooth cold start for our model.
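The paper's core idea, exploiting expert-labeled skill-to-skill relations inside a Transformer-style knowledge tracing model, can be illustrated with a small sketch. The snippet below is an assumption-laden illustration rather than the authors' implementation: it biases self-attention over a student's interaction history with a fixed skill-relation matrix, and all names (SkillRelationAttention, relation_matrix, the toy dimensions) are hypothetical.

```python
# Minimal sketch: self-attention knowledge tracing with an expert
# skill-to-skill relation matrix added as an attention bias.
# Hypothetical names and sizes; not the paper's actual architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SkillRelationAttention(nn.Module):
    """Toy knowledge-tracing layer with an expert skill-to-skill attention bias."""

    def __init__(self, num_skills: int, dim: int, relation_matrix: torch.Tensor):
        super().__init__()
        self.skill_emb = nn.Embedding(num_skills, dim)   # skill id -> vector
        self.resp_emb = nn.Embedding(2, dim)              # 0 = incorrect, 1 = correct
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(dim, 1)                       # logit for P(correct)
        # Fixed expert-labeled relation matrix, shape (num_skills, num_skills).
        self.register_buffer("relation", relation_matrix)

    def forward(self, skills: torch.Tensor, responses: torch.Tensor) -> torch.Tensor:
        # skills, responses: (batch, seq_len) integer tensors of past interactions.
        x = self.skill_emb(skills) + self.resp_emb(responses)
        q, k, v = self.q_proj(x), self.k_proj(x), self.v_proj(x)
        scores = q @ k.transpose(-2, -1) / x.size(-1) ** 0.5          # (B, T, T)
        # Bias each score by how related the queried skill is to the past skill.
        rel_bias = self.relation[skills.unsqueeze(-1), skills.unsqueeze(-2)]
        scores = scores + rel_bias
        # Causal mask: each step attends only to itself and earlier steps.
        # (A real KT model would also shift inputs so the current response is hidden.)
        t = skills.size(1)
        mask = torch.triu(torch.ones(t, t, device=skills.device), diagonal=1).bool()
        scores = scores.masked_fill(mask, float("-inf"))
        h = F.softmax(scores, dim=-1) @ v
        return torch.sigmoid(self.out(h)).squeeze(-1)                 # P(correct), (B, T)


# Toy usage: 5 skills, random relation matrix, two sequences of length 4.
relation_matrix = torch.rand(5, 5)
model = SkillRelationAttention(num_skills=5, dim=16, relation_matrix=relation_matrix)
skills = torch.randint(0, 5, (2, 4))
responses = torch.randint(0, 2, (2, 4))
p_correct = model(skills, responses)    # tensor of shape (2, 4)
```

Adding the relation matrix as an additive attention bias is only one plausible way to inject expert knowledge; the paper's actual method of combining skill relations with the Transformer may differ.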
Related papers
- Leveraging Pedagogical Theories to Understand Student Learning Process with Graph-based Reasonable Knowledge Tracing [11.082908318943248]
We introduce GRKT, a graph-based reasonable knowledge tracing method to address these issues.
We propose a fine-grained and psychological three-stage modeling process as knowledge retrieval, memory strengthening, and knowledge learning/forgetting.
arXiv Detail & Related papers (2024-06-07T10:14:30Z) - Explainable Few-shot Knowledge Tracing [48.877979333221326]
We propose a cognition-guided framework that can track the student knowledge from a few student records while providing natural language explanations.
Experimental results from three widely used datasets show that LLMs can perform comparable or superior to competitive deep knowledge tracing methods.
arXiv Detail & Related papers (2024-05-23T10:07:21Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Worth of knowledge in deep learning [3.132595571344153]
We present a framework inspired by interpretable machine learning to evaluate the worth of knowledge.
Our findings elucidate the complex relationship between data and knowledge, including dependence, synergy, and substitution effects.
Our model-agnostic framework can be applied to a variety of common network architectures, providing a comprehensive understanding of the role of prior knowledge in deep learning models.
arXiv Detail & Related papers (2023-07-03T02:25:19Z) - Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z) - A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on the multi-modal implicit knowledge from vision-language pre-training models to mine its potential in knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z) - Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method, to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z) - Knowledge-driven Data Construction for Zero-shot Evaluation in
Commonsense Question Answering [80.60605604261416]
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
arXiv Detail & Related papers (2020-11-07T22:52:21Z)