Enabling Multi-Agent Systems as Learning Designers: Applying Learning Sciences to AI Instructional Design
- URL: http://arxiv.org/abs/2508.16659v1
- Date: Wed, 20 Aug 2025 14:44:00 GMT
- Title: Enabling Multi-Agent Systems as Learning Designers: Applying Learning Sciences to AI Instructional Design
- Authors: Jiayi Wang, Ruiwei Xiao, Xinying Hou, John Stamper
- Abstract summary: This study shifts pedagogical expertise from the user's prompt to the LLM's internal architecture. We tested three systems for generating secondary Math and Science learning activities.
- Score: 6.080614844688028
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: K-12 educators are increasingly using Large Language Models (LLMs) to create instructional materials. These systems excel at producing fluent, coherent content but often lack support for high-quality teaching. The reason is twofold: first, commercial LLMs such as ChatGPT and Gemini, which are among the most widely accessible to teachers, do not come preloaded with the depth of pedagogical theory needed to design truly effective activities; second, although sophisticated prompt engineering can bridge this gap, most teachers lack the time or expertise to encode such pedagogical nuance into their requests. This study shifts pedagogical expertise from the user's prompt to the LLM's internal architecture. We embed the well-established Knowledge-Learning-Instruction (KLI) framework into a Multi-Agent System (MAS) that acts as a sophisticated instructional designer. We tested three systems for generating secondary Math and Science learning activities: a Single-Agent baseline simulating typical teacher prompts; a role-based MAS in which agents work sequentially; and a collaborative MAS-CMD in which agents co-construct activities through conquer-and-merge discussion. The generated materials were evaluated by 20 practicing teachers and a complementary LLM-as-a-judge system using the Quality Matters (QM) K-12 standards. While the rubric scores showed only small, often statistically insignificant differences between the systems, the qualitative feedback from educators painted a clear and compelling picture. Teachers strongly preferred the activities from the collaborative MAS-CMD, describing them as significantly more creative, contextually relevant, and classroom-ready. Our findings show that embedding pedagogical principles into LLM systems offers a scalable path for creating high-quality educational content.
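For readers who want to experiment with the pipeline shapes the abstract describes, the sketch below outlines the three configurations (Single-Agent baseline, sequential role-based MAS, and collaborative MAS-CMD) plus a rubric-style LLM-as-a-judge pass. This is a minimal illustration under stated assumptions: `call_llm`, the role prompts, the merge loop, and the judging prompt are hypothetical stand-ins, not the authors' implementation, and the QM K-12 rubric items must be supplied by the user.

```python
# Minimal sketch of the three pipeline shapes compared in the paper.
# ASSUMPTIONS: `call_llm` is a placeholder for any chat-completion client,
# and the role prompts / discussion loop are illustrative, not the
# authors' actual prompts or protocol.

def call_llm(system_prompt: str, user_prompt: str) -> str:
    """Stand-in for a chat-completion API call (e.g., OpenAI or Gemini)."""
    raise NotImplementedError("wire up your LLM client here")

def single_agent(task: str) -> str:
    # Baseline: one generic prompt, mimicking a typical teacher request.
    return call_llm("You are a helpful teaching assistant.", task)

def sequential_mas(task: str) -> str:
    # Role-based MAS: each agent refines the previous agent's draft in turn.
    roles = [
        "instructional designer applying the KLI framework",
        "subject-matter expert checking content accuracy",
        "classroom teacher polishing for usability",
    ]
    draft = task
    for role in roles:
        draft = call_llm(f"You are a {role}.", f"Improve this activity:\n{draft}")
    return draft

def mas_cmd(task: str, rounds: int = 2) -> str:
    # Collaborative MAS-CMD: agents draft independently ("conquer"),
    # then iteratively discuss and merge the drafts into one activity.
    roles = ["KLI learning scientist", "Math/Science content expert", "K-12 teacher"]
    drafts = [call_llm(f"You are a {role}.", task) for role in roles]
    merged = "\n---\n".join(drafts)
    for _ in range(rounds):
        merged = call_llm(
            "You moderate a panel of learning designers.",
            f"Discuss and merge these drafts into one coherent activity:\n{merged}",
        )
    return merged

def llm_judge(activity: str, rubric_items: list[str]) -> dict[str, int]:
    # Complementary LLM-as-a-judge pass: score each rubric item on a 1-5 scale.
    # The prompt format is an assumption; the QM K-12 items are user-supplied.
    return {
        item: int(call_llm(
            "You are an evaluator applying Quality Matters K-12 standards.",
            f"Rate 1-5 (reply with a single digit).\nCriterion: {item}\n\nActivity:\n{activity}",
        ))
        for item in rubric_items
    }
```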
Related papers
- From Solver to Tutor: Evaluating the Pedagogical Intelligence of LLMs with KMP-Bench [56.66490747967379]
We introduce KMP-Bench, a comprehensive K-8 Mathematical Pedagogical Benchmark designed to assess Large Language Models (LLMs). The first module, KMP-Dialogue, evaluates holistic pedagogical capabilities against six core principles. The second module, KMP-Skills, provides a granular assessment of foundational tutoring abilities, including multi-turn problem-solving, error detection and correction, and problem generation.
arXiv Detail & Related papers (2026-03-03T09:14:57Z)
- EduDial: Constructing a Large-scale Multi-turn Teacher-Student Dialogue Corpus [59.693733170193944]
We present EduDial, a comprehensive multi-turn teacher-student dialogue dataset. EduDial covers 345 core knowledge points and consists of 34,250 dialogue sessions generated through interactions between teacher and student agents.
arXiv Detail & Related papers (2025-10-14T18:18:43Z)
- Instructional Agents: LLM Agents on Automated Course Material Generation for Teaching Faculties [3.045939700894802]
We present Instructional Agents, a framework designed to automate end-to-end course material generation. The framework simulates role-based collaboration among educational agents to produce cohesive and pedagogically aligned content. It produces high-quality instructional materials while significantly reducing development time and human workload.
arXiv Detail & Related papers (2025-08-27T06:45:06Z)
- Improving Student-AI Interaction Through Pedagogical Prompting: An Example in Computer Science Education [1.1517315048749441]
Large language model (LLM) applications have sparked both excitement and concern. Recent studies consistently highlight that students' (mis)use of LLMs can hinder learning outcomes. This work aims to teach students how to effectively prompt LLMs to improve their learning.
arXiv Detail & Related papers (2025-06-23T20:39:17Z)
- From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning [76.09281171131941]
Large language models (LLMs) can transform education, but their optimization for direct question-answering often undermines effective pedagogy. We propose an online reinforcement learning (RL)-based alignment framework that can quickly adapt LLMs into effective tutors.
arXiv Detail & Related papers (2025-05-21T15:00:07Z)
- LLM Agents for Education: Advances and Applications [49.3663528354802]
Large Language Model (LLM) agents have demonstrated remarkable capabilities in automating tasks and driving innovation across diverse educational applications. This survey aims to provide a comprehensive technological overview of LLM agents for education, fostering further research and collaboration to enhance their impact for the greater good of learners and educators alike.
arXiv Detail & Related papers (2025-03-14T11:53:44Z)
- Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods to the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Simulating Classroom Education with LLM-Empowered Agents [48.26286735827104]
Large language models (LLMs) have been applied across various intelligent educational tasks to assist teaching. We propose SimClass, a multi-agent classroom simulation teaching framework. We recognize representative class roles and introduce a novel class control mechanism for automatic classroom teaching.
arXiv Detail & Related papers (2024-06-27T14:51:07Z)
- Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions [34.760230622675365]
Intelligent tutoring systems (ITSs) imitate human tutors and aim to provide customized instructions or feedback to learners.
With the emergence of generative artificial intelligence, large language models (LLMs) enable these systems to hold complex and coherent conversational interactions.
We investigate how pedagogical instructions facilitate scaffolding in ITSs by conducting a case study on guiding children to describe images for language learning.
arXiv Detail & Related papers (2024-04-04T13:22:28Z)
- AutoTutor meets Large Language Models: A Language Model Tutor with Rich Pedagogy and Guardrails [43.19453208130667]
Large Language Models (LLMs) have found several use cases in education, ranging from automatic question generation to essay evaluation.
In this paper, we explore the potential of using Large Language Models (LLMs) to author Intelligent Tutoring Systems.
We create a sample end-to-end tutoring system named MWPTutor, which uses LLMs to fill in the state space of a pre-defined finite state transducer.
arXiv Detail & Related papers (2024-02-14T14:53:56Z)
- Mastering Robot Manipulation with Multimodal Prompts through Pretraining and Multi-task Fine-tuning [49.92517970237088]
We tackle the problem of training a robot to understand multimodal prompts.
This type of task poses a major challenge to robots' capability to understand the interconnection and complementarity between vision and language signals.
We introduce an effective framework that learns a policy to perform robot manipulation with multimodal prompts.
arXiv Detail & Related papers (2023-10-14T22:24:58Z)
- One Teacher is Enough? Pre-trained Language Model Distillation from Multiple Teachers [54.146208195806636]
We propose a multi-teacher knowledge distillation framework named MT-BERT for pre-trained language model compression.
We show that MT-BERT can train a high-quality student model from multiple teacher PLMs.
Experiments on three benchmark datasets validate the effectiveness of MT-BERT in compressing PLMs.
arXiv Detail & Related papers (2021-06-02T08:42:33Z)