Faster, Cheaper, More Accurate: Specialised Knowledge Tracing Models Outperform LLMs
- URL: http://arxiv.org/abs/2603.02830v1
- Date: Tue, 03 Mar 2026 10:25:52 GMT
- Title: Faster, Cheaper, More Accurate: Specialised Knowledge Tracing Models Outperform LLMs
- Authors: Prarthana Bhattacharyya, Joshua Mitton, Ralph Abboud, Simon Woodhead
- Abstract summary: Knowledge tracing (KT) models are small, domain-specific, temporal models trained on student question-response data. We show that KT models outperform Large Language Models (LLMs) with respect to accuracy and F1 scores on this domain-specific task. This highlights the importance of domain-specific models for education prediction tasks.
- Score: 3.8834950760134657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Predicting future student responses to questions is particularly valuable for educational learning platforms where it enables effective interventions. One of the key approaches to do this has been through the use of knowledge tracing (KT) models. These are small, domain-specific, temporal models trained on student question-response data. KT models are optimised for high accuracy on specific educational domains and have fast inference and scalable deployments. The rise of Large Language Models (LLMs) motivates us to ask the following questions: (1) How well can LLMs perform at predicting students' future responses to questions? (2) Are LLMs scalable for this domain? (3) How do LLMs compare to KT models on this domain-specific task? In this paper, we compare multiple LLMs and KT models across predictive performance, deployment cost, and inference speed to answer the above questions. We show that KT models outperform LLMs with respect to accuracy and F1 scores on this domain-specific task. Further, we demonstrate that LLMs are orders of magnitude slower than KT models and cost orders of magnitude more to deploy. This highlights the importance of domain-specific models for education prediction tasks and the fact that current closed source LLMs should not be used as a universal solution for all tasks.
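The abstract does not name the specific KT models benchmarked, but a minimal, hypothetical sketch of one classic member of the family, Bayesian Knowledge Tracing (BKT), illustrates why such models are small and fast: each skill is tracked by a single mastery probability updated in closed form per response. The parameter values (`guess`, `slip`, `learn`) below are illustrative defaults, not values from the paper.

```python
def bkt_predict(p_know, guess=0.2, slip=0.1):
    """Probability the student answers the next question correctly,
    given the current estimate of mastery p_know."""
    return p_know * (1 - slip) + (1 - p_know) * guess

def bkt_update(p_know, correct, guess=0.2, slip=0.1, learn=0.15):
    """Bayesian posterior over mastery after observing one response,
    followed by a fixed-probability learning transition."""
    if correct:
        num = p_know * (1 - slip)
        den = p_know * (1 - slip) + (1 - p_know) * guess
    else:
        num = p_know * slip
        den = p_know * slip + (1 - p_know) * (1 - guess)
    posterior = num / den
    # Chance the student acquires the skill between opportunities.
    return posterior + (1 - posterior) * learn

# Trace a short response sequence (1 = correct, 0 = incorrect).
p = 0.3
for correct in [1, 1, 0, 1]:
    p = bkt_update(p, bool(correct))
```

Because inference is a handful of multiplications per response, deployment cost and latency are negligible next to an LLM forward pass, which is the scalability gap the paper quantifies.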
Related papers
- Can LLM Annotations Replace User Clicks for Learning to Rank? [112.2254432364736]
Large-scale supervised data is essential for training modern ranking models, but obtaining high-quality human annotations is costly. Click data has been widely used as a low-cost alternative, and with recent advances in large language models (LLMs), LLM-based relevance annotation has emerged as another promising alternative. Experiments on both a public dataset, TianGong-ST, and an industrial dataset, Baidu-Click, show that click-supervised models perform better on high-frequency queries. We explore two training strategies -- data scheduling and frequency-aware multi-objective learning -- that integrate both supervision signals.
arXiv Detail & Related papers (2025-11-10T02:26:14Z)
- Does Model Size Matter? A Comparison of Small and Large Language Models for Requirements Classification [4.681300232651754]
Large language models (LLMs) show notable results in natural language processing (NLP) tasks for requirements engineering (RE). In contrast, small language models (SLMs) offer a lightweight, locally deployable alternative.
arXiv Detail & Related papers (2025-10-24T13:20:30Z)
- Efficient Knowledge Probing of Large Language Models by Adapting Pre-trained Embeddings [27.08405655200845]
Large language models (LLMs) acquire knowledge across diverse domains such as science, history, and geography. Existing probing methods require making forward passes through the underlying model to probe the LLM's knowledge about a specific fact. We propose embedding models that encode factual knowledge from text or graphs as proxies for LLMs.
arXiv Detail & Related papers (2025-08-08T05:32:31Z)
- An Empirical Study of Many-to-Many Summarization with Large Language Models [82.10000188179168]
Large language models (LLMs) have shown strong multi-lingual abilities, giving them the potential to perform many-to-many summarization (M2MS) in real applications. This work presents a systematic empirical study on LLMs' M2MS ability.
arXiv Detail & Related papers (2025-05-19T11:18:54Z)
- Small or Large? Zero-Shot or Finetuned? Guiding Language Model Choice for Specialized Applications in Healthcare [0.6880206021209538]
Finetuning significantly improved SLM performance across all scenarios compared to their zero-shot results. Domain-adjacent SLMs generally performed better than the generic SLM after finetuning, especially on harder tasks. Further domain-specific pretraining yielded modest gains on easier tasks but significant improvements on the complex, data-scarce task.
arXiv Detail & Related papers (2025-04-29T21:50:06Z)
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Small Models, Big Insights: Leveraging Slim Proxy Models To Decide When and What to Retrieve for LLMs [60.40396361115776]
This paper introduces a novel collaborative approach, namely SlimPLM, that detects missing knowledge in large language models (LLMs) with a slim proxy model.
We employ a proxy model which has far fewer parameters, and take its answers to the question as heuristic answers.
Heuristic answers are then utilized to predict the knowledge required to answer the user question, as well as the known and unknown knowledge within the LLM.
arXiv Detail & Related papers (2024-02-19T11:11:08Z)
- Supervised Knowledge Makes Large Language Models Better In-context Learners [94.89301696512776]
Large Language Models (LLMs) exhibit emergent in-context learning abilities through prompt engineering.
The challenge of improving the generalizability and factuality of LLMs in natural language understanding and question answering remains under-explored.
We propose a framework that enhances the reliability of LLMs as it: 1) generalizes out-of-distribution data, 2) elucidates how LLMs benefit from discriminative models, and 3) minimizes hallucinations in generative tasks.
arXiv Detail & Related papers (2023-12-26T07:24:46Z)
- LLM-Pruner: On the Structural Pruning of Large Language Models [65.02607075556742]
Large language models (LLMs) have shown remarkable capabilities in language understanding and generation.
We tackle the compression of LLMs within the bound of two constraints: being task-agnostic and minimizing the reliance on the original training dataset.
Our method, named LLM-Pruner, adopts structural pruning that selectively removes non-critical coupled structures.
arXiv Detail & Related papers (2023-05-19T12:10:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.