GPT-based Open-Ended Knowledge Tracing
- URL: http://arxiv.org/abs/2203.03716v4
- Date: Mon, 20 Mar 2023 19:59:30 GMT
- Title: GPT-based Open-Ended Knowledge Tracing
- Authors: Naiming Liu, Zichao Wang, Richard G. Baraniuk, Andrew Lan
- Abstract summary: We study the new task of predicting students' exact open-ended responses to questions.
Our work is grounded in the domain of computer science education with programming questions.
We develop an initial solution to the OKT problem, a student knowledge-guided code generation approach.
- Score: 24.822739021636455
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In education applications, knowledge tracing refers to the problem of
estimating students' time-varying concept/skill mastery level from their past
responses to questions and predicting their future performance. One key
limitation of most existing knowledge tracing methods is that they treat
student responses to questions as binary-valued, i.e., whether they are correct
or incorrect. Response correctness analysis/prediction ignores important
information on student knowledge contained in the exact content of the
responses, especially for open-ended questions. In this paper, we conduct the
first exploration into open-ended knowledge tracing (OKT) by studying the new
task of predicting students' exact open-ended responses to questions. Our work
is grounded in the domain of computer science education with programming
questions. We develop an initial solution to the OKT problem, a student
knowledge-guided code generation approach, that combines program synthesis
methods using language models with student knowledge tracing methods. We also
conduct a series of quantitative and qualitative experiments on a real-world
student code dataset to validate OKT and demonstrate its promise in educational
applications.
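
A minimal sketch of the knowledge-guided generation idea described in the abstract, assuming a simple GRU-based knowledge tracer and a toy autoregressive decoder. The module names, dimensions, and the way the knowledge state is injected are illustrative assumptions, not the authors' implementation:

```python
import torch
import torch.nn as nn

class KnowledgeTracer(nn.Module):
    """Summarizes a student's past (question, response) interactions into a knowledge state."""
    def __init__(self, input_dim=64, state_dim=128):
        super().__init__()
        self.rnn = nn.GRU(input_dim, state_dim, batch_first=True)

    def forward(self, interactions):        # interactions: (batch, time, input_dim)
        _, h = self.rnn(interactions)       # h: (num_layers=1, batch, state_dim)
        return h.squeeze(0)                 # (batch, state_dim)

class KnowledgeGuidedDecoder(nn.Module):
    """Toy autoregressive decoder whose token embeddings are conditioned on the knowledge state."""
    def __init__(self, vocab_size=1000, embed_dim=128, state_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.condition = nn.Linear(state_dim, embed_dim)   # project knowledge into embedding space
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.lm_head = nn.Linear(embed_dim, vocab_size)

    def forward(self, token_ids, knowledge_state):
        x = self.embed(token_ids)                              # (batch, seq, embed_dim)
        x = x + self.condition(knowledge_state).unsqueeze(1)   # add the knowledge state at every step
        out, _ = self.rnn(x)
        return self.lm_head(out)                               # next-token logits

# Toy forward pass: 2 students, 5 past interactions each, a 10-token response prefix.
tracer, decoder = KnowledgeTracer(), KnowledgeGuidedDecoder()
knowledge = tracer(torch.randn(2, 5, 64))
logits = decoder(torch.randint(0, 1000, (2, 10)), knowledge)
print(logits.shape)  # torch.Size([2, 10, 1000])
```

In the paper's setting the decoder would be a pretrained GPT-style language model over code tokens; the toy decoder above only keeps the sketch self-contained and runnable.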
Related papers
- Knowledge Tagging System on Math Questions via LLMs with Flexible Demonstration Retriever [48.5585921817745]
Large Language Models (LLMs) are used to automate the knowledge tagging task.
We show strong zero- and few-shot performance on math-question knowledge tagging tasks.
By proposing a reinforcement learning-based demonstration retriever, we successfully exploit the great potential of different-sized LLMs.
arXiv Detail & Related papers (2024-06-19T23:30:01Z) - Explainable Few-shot Knowledge Tracing [48.877979333221326]
We propose a cognition-guided framework that can track student knowledge from a few student records while providing natural language explanations.
Experimental results from three widely used datasets show that LLMs can perform comparable or superior to competitive deep knowledge tracing methods.
arXiv Detail & Related papers (2024-05-23T10:07:21Z) - Knowledge Tracing Challenge: Optimal Activity Sequencing for Students [0.9814642627359286]
Knowledge tracing is a method used in education to assess and track the acquisition of knowledge by individual learners.
We will present the results of the implementation of two Knowledge Tracing algorithms on a newly released dataset as part of the AAAI2023 Global Knowledge Tracing Challenge.
arXiv Detail & Related papers (2023-11-13T16:28:34Z) - Beyond Factuality: A Comprehensive Evaluation of Large Language Models as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z) - Leveraging Skill-to-Skill Supervision for Knowledge Tracing [13.753990664747265]
Knowledge tracing plays a pivotal role in intelligent tutoring systems.
Recent advances in knowledge tracing models have enabled better exploitation of problem-solving history.
Knowledge tracing algorithms that incorporate such knowledge directly are important in settings with limited data or cold starts.
arXiv Detail & Related papers (2023-06-12T03:23:22Z) - A Unified End-to-End Retriever-Reader Framework for Knowledge-based VQA [67.75989848202343]
This paper presents a unified end-to-end retriever-reader framework towards knowledge-based VQA.
We shed light on multi-modal implicit knowledge from vision-language pre-training models and mine its potential for knowledge reasoning.
Our scheme not only provides guidance for knowledge retrieval, but also drops instances that are potentially error-prone for question answering.
arXiv Detail & Related papers (2022-06-30T02:35:04Z) - Masked Deep Q-Recommender for Effective Question Scheduling [0.4129225533930965]
Our proposed method first evaluates students' concept-level knowledge using a knowledge tracing (KT) model.
Given the predicted student knowledge, an RL-based recommender predicts the benefit of each question.
With curriculum range restriction and duplicate penalty, the recommender selects questions sequentially until it reaches the predefined number of questions.
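
Purely as an illustration of the selection step described above (not the paper's code), the following sketch greedily picks questions from recommender-predicted benefit scores while masking questions outside the curriculum range and never repicking a question; the function name, fixed scores, and one-shot scoring are assumptions:

```python
import numpy as np

def select_questions(benefits, allowed_mask, n_select):
    """Greedy sequential selection from recommender scores.
    benefits:     (num_questions,) predicted benefit of asking each question.
    allowed_mask: boolean array, True for questions inside the curriculum range.
    """
    scores = np.where(allowed_mask, benefits.astype(float), -np.inf)
    selected = []
    for _ in range(n_select):
        q = int(np.argmax(scores))
        if not np.isfinite(scores[q]):
            break                      # nothing selectable is left
        selected.append(q)
        scores[q] = -np.inf            # duplicate penalty taken to the limit: never repick
    return selected

benefits = np.array([0.2, 0.9, 0.5, 0.7])
print(select_questions(benefits, np.array([True, True, False, True]), n_select=2))  # [1, 3]
```

In the full method the recommender would re-score the remaining questions after each pick; the fixed scores here are only for brevity.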
arXiv Detail & Related papers (2021-12-19T11:36:01Z) - Option Tracing: Beyond Correctness Analysis in Knowledge Tracing [3.1798318618973362]
We extend existing knowledge tracing methods to predict the exact option students select in multiple choice questions.
We quantitatively evaluate the performance of our option tracing methods on two large-scale student response datasets.
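
As a rough sketch of that idea (an assumption about the general setup, not the paper's architecture), the binary-correctness output head of a sequence KT model can be replaced with a softmax over the answer options of each multiple-choice question:

```python
import torch
import torch.nn as nn

class OptionTracer(nn.Module):
    """Sequence KT model that predicts the exact chosen option instead of correct/incorrect."""
    def __init__(self, num_questions, num_options=4, hidden_dim=64):
        super().__init__()
        self.interaction_embed = nn.Embedding(num_questions * num_options, hidden_dim)
        self.rnn = nn.LSTM(hidden_dim, hidden_dim, batch_first=True)
        self.next_question_embed = nn.Embedding(num_questions, hidden_dim)
        self.option_head = nn.Linear(2 * hidden_dim, num_options)  # logits over options

    def forward(self, past_interactions, next_questions):
        # past_interactions: (batch, time) ids encoding (question, chosen option) pairs
        # next_questions:    (batch, time) ids of the questions to predict
        h, _ = self.rnn(self.interaction_embed(past_interactions))
        q = self.next_question_embed(next_questions)
        return self.option_head(torch.cat([h, q], dim=-1))  # (batch, time, num_options)

model = OptionTracer(num_questions=50)
logits = model(torch.randint(0, 200, (2, 7)), torch.randint(0, 50, (2, 7)))
print(logits.shape)  # torch.Size([2, 7, 4])
```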
arXiv Detail & Related papers (2021-04-19T04:28:34Z) - Incremental Knowledge Based Question Answering [52.041815783025186]
We propose a new incremental KBQA learning framework that can progressively expand learning capacity as humans do.
Specifically, it comprises a margin-distilled loss and a collaborative selection method to overcome the catastrophic forgetting problem.
The comprehensive experiments demonstrate its effectiveness and efficiency when working with the evolving knowledge base.
arXiv Detail & Related papers (2021-01-18T09:03:38Z) - KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA [107.7091094498848]
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge, the setting when the knowledge required to answer a question is not given/annotated, neither at training nor test time.
We tap into two types of knowledge representations and reasoning: first, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models; second, explicit symbolic knowledge encoded in knowledge bases.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.