EduPlanner: LLM-Based Multi-Agent Systems for Customized and Intelligent Instructional Design
- URL: http://arxiv.org/abs/2504.05370v1
- Date: Mon, 07 Apr 2025 17:49:12 GMT
- Title: EduPlanner: LLM-Based Multi-Agent Systems for Customized and Intelligent Instructional Design
- Authors: Xueqiao Zhang, Chao Zhang, Jianwen Sun, Jun Xiao, Yi Yang, Yawei Luo
- Abstract summary: Large Language Models (LLMs) have significantly advanced smart education in the Artificial General Intelligence (AGI) era. EduPlanner is an LLM-based multi-agent system comprising an evaluator agent, an optimizer agent, and a question analyst. EduPlanner generates customized and intelligent instructional design for curriculum and learning activities.
- Score: 31.595008625025134
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) have significantly advanced smart education in the Artificial General Intelligence (AGI) era. A promising application lies in the automatic generation of instructional design for curriculum and learning activities, focusing on two key aspects: (1) Customized Generation: generating niche-targeted teaching content based on students' varying learning abilities and states, and (2) Intelligent Optimization: iteratively optimizing content based on feedback from learning effectiveness or test scores. Currently, a single LLM cannot effectively manage the entire process, posing a challenge for designing intelligent teaching plans. To address these issues, we developed EduPlanner, an LLM-based multi-agent system comprising an evaluator agent, an optimizer agent, and a question analyst, working in adversarial collaboration to generate customized and intelligent instructional design for curriculum and learning activities. Taking mathematics lessons as our example, EduPlanner employs a novel Skill-Tree structure to accurately model the background mathematics knowledge of student groups, personalizing instructional design for curriculum and learning activities according to students' knowledge levels and learning abilities. Additionally, we introduce CIDDP, an LLM-based five-dimensional evaluation module encompassing Clarity, Integrity, Depth, Practicality, and Pertinence, to comprehensively assess mathematics lesson plan quality and bootstrap intelligent optimization. Experiments conducted on the GSM8K and Algebra datasets demonstrate that EduPlanner excels in evaluating and optimizing instructional design for curriculum and learning activities. Ablation studies further validate the significance and effectiveness of each component within the framework. Our code is publicly available at https://github.com/Zc0812/Edu_Planner
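The abstract describes the evaluator-optimizer loop and the CIDDP scoring module only at a high level; the authors' actual implementation lives in the linked repository. As a reading aid, here is a minimal Python sketch of what such an adversarial-collaboration loop could look like, where `call_llm` is a hypothetical wrapper around any chat-completion API and the prompts, scales, and thresholds are illustrative assumptions, not the paper's code:

```python
# Minimal sketch of an evaluator/optimizer loop in the spirit of EduPlanner.
# `call_llm` is a hypothetical helper (e.g., a wrapper around a chat API);
# prompts, score scales, and thresholds are illustrative assumptions only.

CIDDP_DIMENSIONS = ["Clarity", "Integrity", "Depth", "Practicality", "Pertinence"]

def evaluate_plan(call_llm, plan: str) -> dict[str, float]:
    """Evaluator agent: score a lesson plan on each CIDDP dimension (0-10)."""
    scores = {}
    for dim in CIDDP_DIMENSIONS:
        reply = call_llm(
            f"Rate the following math lesson plan for {dim} on a 0-10 scale. "
            f"Answer with a single number.\n\n{plan}"
        )
        scores[dim] = float(reply.strip())  # assumes the model obeys the format
    return scores

def optimize_plan(call_llm, plan: str, scores: dict[str, float], skill_tree: str) -> str:
    """Optimizer agent: revise the plan, targeting its weakest dimension."""
    weakest = min(scores, key=scores.get)
    return call_llm(
        f"Student background (skill tree): {skill_tree}\n"
        f"Current lesson plan:\n{plan}\n"
        f"Its weakest CIDDP dimension is {weakest} ({scores[weakest]}/10). "
        f"Rewrite the plan to improve that dimension."
    )

def eduplanner_loop(call_llm, draft: str, skill_tree: str,
                    rounds: int = 3, target: float = 8.0) -> str:
    """Adversarial collaboration: evaluate, then optimize, until good enough."""
    plan = draft
    for _ in range(rounds):
        scores = evaluate_plan(call_llm, plan)
        if min(scores.values()) >= target:
            break
        plan = optimize_plan(call_llm, plan, scores, skill_tree)
    return plan
```

The question-analyst agent is omitted from this sketch, as the abstract does not detail its interface; it would sit alongside the evaluator and optimizer in the full system.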
Related papers
- Fine-Tuning Large Language Models for Educational Support: Leveraging Gagne's Nine Events of Instruction for Lesson Planning [5.022835754140817]
This study investigates how large language models (LLMs) can enhance teacher preparation by incorporating them with Gagne's Nine Events of Instruction.
The research starts with creating a comprehensive dataset based on math curriculum standards and Gagne's instructional events.
The second method uses specialized datasets to fine-tune open-source models, enhancing their educational content generation and analysis capabilities.
arXiv Detail & Related papers (2025-03-12T11:22:13Z)
- LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITS.
It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset.
GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
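The entry above describes an optimization loop in which one LM writes instructional materials and a second LM's judgment acts as the reward. A minimal best-of-N rendering of that idea is sketched below; `generate_lm` and `judge_lm` stand in for two separately prompted models, and the prompts are invented for illustration:

```python
# Sketch of LM-judged instruction optimization: one model proposes
# instructional materials, and a second model's rating acts as the reward.
# Both callables are hypothetical wrappers around LLM APIs.

def judge_reward(judge_lm, material: str, topic: str) -> float:
    """Reward function: the judge LM predicts how well students would learn."""
    reply = judge_lm(
        f"As an education expert, rate from 0 to 10 how effectively this "
        f"worksheet teaches '{topic}'. Answer with a single number.\n\n{material}"
    )
    return float(reply.strip())

def optimize_material(generate_lm, judge_lm, topic: str, n_candidates: int = 8) -> str:
    """Best-of-N search: sample candidates, keep the one the judge scores highest."""
    candidates = [
        generate_lm(f"Write a short worksheet teaching '{topic}'.")
        for _ in range(n_candidates)
    ]
    return max(candidates, key=lambda m: judge_reward(judge_lm, m, topic))
```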
- Synthetic Data (Almost) from Scratch: Generalized Instruction Tuning for Language Models [153.14575887549088]
We introduce Generalized Instruction Tuning (called GLAN), a general and scalable method for instruction tuning of Large Language Models (LLMs).
GLAN exclusively utilizes a pre-curated taxonomy of human knowledge and capabilities as input and generates large-scale synthetic instruction data across all disciplines.
With the fine-grained key concepts detailed in every class session of the syllabus, we are able to generate diverse instructions with a broad coverage across the entire spectrum of human knowledge and skills.
arXiv Detail & Related papers (2024-02-20T15:00:35Z)
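GLAN's pipeline, as summarized above, goes from a pre-curated taxonomy to syllabi to per-session key concepts to instructions. The toy traversal below conveys the shape of that generation process; the taxonomy contents and prompt wording are invented, and `call_llm` is again a hypothetical LLM wrapper:

```python
# Toy sketch of GLAN-style synthetic instruction generation: walk a
# taxonomy of knowledge, treat each key concept as a class session,
# and emit one instruction per concept. All data here is invented.

TAXONOMY = {
    "Mathematics": {
        "Algebra": ["linear equations", "factoring polynomials"],
        "Geometry": ["triangle congruence", "circle theorems"],
    }
}

def generate_instructions(call_llm) -> list[str]:
    instructions = []
    for field, subjects in TAXONOMY.items():
        for subject, key_concepts in subjects.items():
            for concept in key_concepts:  # one "class session" per concept
                instructions.append(call_llm(
                    f"Write a challenging exercise for a {field} course on "
                    f"{subject}, testing the key concept: {concept}."
                ))
    return instructions
```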
- Towards Goal-oriented Intelligent Tutoring Systems in Online Education [69.06930979754627]
We propose a new task, named Goal-oriented Intelligent Tutoring Systems (GITS).
GITS aims to enable the student's mastery of a designated concept by strategically planning a customized sequence of exercises and assessment.
We propose a novel graph-based reinforcement learning framework, named Planning-Assessment-Interaction (PAI).
arXiv Detail & Related papers (2023-12-03T12:37:16Z)
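PAI, per the summary above, learns an exercise-sequencing policy with graph-based reinforcement learning. As a much simpler stand-in that conveys only the planning half of the task, the sketch below schedules exercises greedily over a hand-written prerequisite graph; the graph and mastery sets are invented for illustration:

```python
# Greedy stand-in for GITS-style exercise planning: given a prerequisite
# graph and a target concept, schedule unmastered prerequisites first.
# The real PAI framework learns this policy; this is only a toy planner.

PREREQS = {
    "quadratic equations": ["linear equations", "factoring"],
    "factoring": ["multiplication"],
    "linear equations": ["arithmetic"],
}

def plan_exercises(target: str, mastered: set[str]) -> list[str]:
    """Post-order walk: practice prerequisites before the target concept."""
    plan = []
    def visit(concept: str):
        if concept in mastered or concept in plan:
            return
        for pre in PREREQS.get(concept, []):
            visit(pre)
        plan.append(concept)
    visit(target)
    return plan

print(plan_exercises("quadratic equations", mastered={"arithmetic", "multiplication"}))
# -> ['linear equations', 'factoring', 'quadratic equations']
```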
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is organized into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Scaling Evidence-based Instructional Design Expertise through Large Language Models [0.0]
This paper explores leveraging Large Language Models (LLMs), specifically GPT-4, in the field of instructional design.
With a focus on scaling evidence-based instructional design expertise, our research aims to bridge the gap between theoretical educational studies and practical implementation.
We discuss the benefits and limitations of AI-driven content generation, emphasizing the necessity of human oversight in ensuring the quality of educational materials.
arXiv Detail & Related papers (2023-05-31T17:54:07Z)
- CREATOR: Tool Creation for Disentangling Abstract and Concrete Reasoning of Large Language Models [74.22729793816451]
Large Language Models (LLMs) have made significant progress in utilizing tools, but their ability is limited by API availability.
We propose CREATOR, a novel framework that enables LLMs to create their own tools using documentation and code realization.
We evaluate CREATOR on the MATH and TabMWP benchmarks, which consist of challenging math competition problems and tabular math word problems, respectively.
arXiv Detail & Related papers (2023-05-23T17:51:52Z)
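CREATOR's central move is having the LLM write a tool as code and then executing that tool on the concrete problem. A stripped-down version of the create-then-use cycle might look like this; `call_llm` is a hypothetical LLM wrapper, and executing model-written code is shown bare here only for brevity (a real system would sandbox it):

```python
# Stripped-down CREATOR-style cycle: the LLM writes a reusable tool as
# Python code, we execute the code to materialize the tool, then apply
# it to the concrete problem instance. `call_llm` is a hypothetical
# LLM wrapper; running model-written code must be sandboxed in practice.

def create_and_use_tool(call_llm, problem: str) -> str:
    tool_code = call_llm(
        "Write a Python function `solve(problem: str)` that solves problems "
        f"like the following. Return only code.\n\nProblem: {problem}"
    )
    namespace: dict = {}
    exec(tool_code, namespace)          # tool creation (unsafe outside a sandbox)
    return namespace["solve"](problem)  # tool use on the concrete instance
```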
- Automated Graph Self-supervised Learning via Multi-teacher Knowledge Distillation [43.903582264697974]
This paper studies the problem of how to automatically, adaptively, and dynamically learn instance-level self-supervised learning strategies for each node.
We propose a novel multi-teacher knowledge distillation framework for Automated Graph Self-Supervised Learning (AGSSL).
Experiments on eight datasets show that AGSSL can benefit from multiple pretext tasks, outperforming the corresponding individual tasks.
arXiv Detail & Related papers (2022-10-05T08:39:13Z)
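Multi-teacher distillation of the kind AGSSL performs can be pictured as a student matching a weighted mixture of teacher soft labels, one mixture per node. The PyTorch sketch below shows that mixture loss; note that AGSSL learns the instance-level weights automatically, whereas here the per-node weights are simply passed in as an argument:

```python
# Illustrative multi-teacher distillation loss: the student matches a
# weighted mixture of teacher soft labels. AGSSL learns instance-level
# weights; here the per-node weights are just given as a tensor.
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits: torch.Tensor,
                          teacher_logits: list[torch.Tensor],
                          weights: torch.Tensor,
                          tau: float = 2.0) -> torch.Tensor:
    """student_logits: [N, C]; teacher_logits: K tensors of shape [N, C];
    weights: [N, K] per-node teacher weights, each row summing to 1."""
    teacher_probs = torch.stack(
        [F.softmax(t / tau, dim=-1) for t in teacher_logits], dim=1)  # [N, K, C]
    target = (weights.unsqueeze(-1) * teacher_probs).sum(dim=1)       # [N, C]
    log_student = F.log_softmax(student_logits / tau, dim=-1)
    return F.kl_div(log_student, target, reduction="batchmean") * tau**2
```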
- Learning Multi-Objective Curricula for Deep Reinforcement Learning [55.27879754113767]
Various automatic curriculum learning (ACL) methods have been proposed to improve the sample efficiency and final performance of deep reinforcement learning (DRL).
In this paper, we propose a unified automatic curriculum learning framework to create multi-objective but coherent curricula.
In addition to existing hand-designed curricula paradigms, we further design a flexible memory mechanism to learn an abstract curriculum.
arXiv Detail & Related papers (2021-10-06T19:30:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.