UKTF: Unified Knowledge Tracing Framework for Subjective and Objective Assessments
- URL: http://arxiv.org/abs/2411.05325v1
- Date: Fri, 08 Nov 2024 04:58:19 GMT
- Authors: Zhifeng Wang, Jiaqin Wan, Yang Yang, Chunyan Zeng, Jialiang Shen,
- Abstract summary: Knowledge tracing technology can establish knowledge state models based on learners' historical answer data.
This study proposes a unified knowledge tracing model that integrates both objective and subjective test questions.
- Score: 3.378008889662775
- Abstract: With the continuous deepening and development of the concept of smart education, learners' comprehensive development and individual needs have received increasing attention. However, traditional educational evaluation systems tend to assess learners' cognitive abilities solely through general test scores, failing to comprehensively consider their actual knowledge states. Knowledge tracing technology can establish knowledge state models based on learners' historical answer data, thereby enabling personalized assessment of learners. Nevertheless, current classical knowledge tracing models are primarily suited to objective test questions, while subjective test questions still face challenges such as complex data representation, imperfect modeling, and the intricate and dynamic nature of knowledge states. Drawing on the application of knowledge tracing technology in education, this study aims to fully utilize examination data and proposes a unified knowledge tracing model that integrates both objective and subjective test questions. Recognizing the differences in question structure, assessment methods, and data characteristics between objective and subjective test questions, the model employs the same backbone network for training on both question types. It achieves knowledge tracing for subjective test questions by modifying the training approach of the baseline model, adding branch networks, and optimizing the question-encoding method. This study conducted multiple experiments on real datasets, and the results consistently demonstrate that the model effectively addresses knowledge tracing in both objective and subjective test question scenarios.
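The abstract describes a shared backbone network with added branch networks for the two question types, but the paper's code is not reproduced here. The following is a minimal pure-Python sketch of that general idea only: a shared encoder feeds two heads, one producing a correctness probability for objective questions and one a continuous score for subjective questions. All layer sizes, class and function names, and the regression-style subjective head are hypothetical assumptions for illustration, not the authors' implementation.

```python
import math
import random

random.seed(0)

def linear(x, W, b):
    # y = W x + b, with W as a list of rows
    return [sum(w * v for w, v in zip(row, x)) + bi for row, bi in zip(W, b)]

def relu(v):
    return [max(0.0, x) for x in v]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

class UnifiedKTSketch:
    """Hypothetical unified knowledge-tracing sketch: one shared backbone,
    two branch heads (objective / subjective)."""

    def __init__(self, d_in=8, d_hid=16):
        rnd = lambda m, n: [[random.uniform(-0.1, 0.1) for _ in range(n)]
                            for _ in range(m)]
        self.Wb, self.bb = rnd(d_hid, d_in), [0.0] * d_hid  # shared backbone
        self.Wo, self.bo = rnd(1, d_hid), [0.0]             # objective branch
        self.Ws, self.bs = rnd(1, d_hid), [0.0]             # subjective branch

    def forward(self, q_enc, q_type):
        # Shared knowledge-state features for both question types
        h = relu(linear(q_enc, self.Wb, self.bb))
        if q_type == "objective":
            # Binary-outcome head: probability the answer is correct
            return sigmoid(linear(h, self.Wo, self.bo)[0])
        # Subjective head: continuous (partial-credit) score
        return linear(h, self.Ws, self.bs)[0]

model = UnifiedKTSketch()
q = [random.random() for _ in range(8)]   # stand-in question encoding
p = model.forward(q, "objective")         # value in (0, 1)
s = model.forward(q, "subjective")        # unbounded real score
```

The design choice being illustrated is only the routing: both branches consume the same backbone features, so the two assessment formats can be trained through one network while their outputs are scored differently.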
Related papers
- Heterogeneous Contrastive Learning for Foundation Models and Beyond [73.74745053250619]
In the era of big data and Artificial Intelligence, an emerging paradigm is to utilize contrastive self-supervised learning to model large-scale heterogeneous data.
This survey critically evaluates the current landscape of heterogeneous contrastive learning for foundation models.
arXiv Detail & Related papers (2024-03-30T02:55:49Z)
- MoMA: Momentum Contrastive Learning with Multi-head Attention-based Knowledge Distillation for Histopathology Image Analysis [5.396167537615578]
A lack of quality data is a common issue when it comes to a specific task in computational pathology.
We propose to exploit knowledge distillation, i.e., utilize the existing model to learn a new, target model.
We employ a student-teacher framework to learn a target model from a pre-trained, teacher model without direct access to source data.
arXiv Detail & Related papers (2023-08-31T08:54:59Z)
- Quiz-based Knowledge Tracing [61.9152637457605]
Knowledge tracing aims to assess individuals' evolving knowledge states according to their learning interactions.
QKT achieves state-of-the-art performance compared to existing methods.
arXiv Detail & Related papers (2023-04-05T12:48:42Z)
- Knowledge-augmented Deep Learning and Its Applications: A Survey [60.221292040710885]
Knowledge-augmented deep learning (KADL) aims to identify domain knowledge and integrate it into deep models for data-efficient, generalizable, and interpretable deep learning.
This survey subsumes existing works and offers a bird's-eye view of research in the general area of knowledge-augmented deep learning.
arXiv Detail & Related papers (2022-11-30T03:44:15Z)
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It can achieve comparable performance with state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z)
- Quality meets Diversity: A Model-Agnostic Framework for Computerized Adaptive Testing [60.38182654847399]
Computerized Adaptive Testing (CAT) is emerging as a promising testing application in many scenarios.
We propose a novel framework, namely Model-Agnostic Adaptive Testing (MAAT) for CAT solution.
arXiv Detail & Related papers (2021-01-15T06:48:50Z)
- Knowledge-driven Data Construction for Zero-shot Evaluation in Commonsense Question Answering [80.60605604261416]
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
arXiv Detail & Related papers (2020-11-07T22:52:21Z)
- Zero-Resource Knowledge-Grounded Dialogue Generation [29.357221039484568]
We propose representing the knowledge that bridges a context and a response and the way that the knowledge is expressed as latent variables.
We show that our model can achieve comparable performance with state-of-the-art methods that rely on knowledge-grounded dialogues for training.
arXiv Detail & Related papers (2020-08-29T05:48:32Z)
- Assessment Modeling: Fundamental Pre-training Tasks for Interactive Educational Systems [3.269851859258154]
A common way of circumventing label-scarce problems is pre-training a model to learn representations of the contents of learning items.
We propose Assessment Modeling, a class of fundamental pre-training tasks for general interactive educational systems.
arXiv Detail & Related papers (2020-01-01T02:00:07Z) - What Does My QA Model Know? Devising Controlled Probes using Expert
Knowledge [36.13528043657398]
We investigate whether state-of-the-art QA models have general knowledge about word definitions and general taxonomic reasoning.
We use a methodology for automatically building datasets from various types of expert knowledge.
Our evaluation confirms that transformer-based QA models are already predisposed to recognize certain types of structural lexical knowledge.
arXiv Detail & Related papers (2019-12-31T15:05:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.