AgentTutor: Empowering Personalized Learning with Multi-Turn Interactive Teaching in Intelligent Education Systems
- URL: http://arxiv.org/abs/2601.04219v1
- Date: Wed, 24 Dec 2025 12:26:28 GMT
- Title: AgentTutor: Empowering Personalized Learning with Multi-Turn Interactive Teaching in Intelligent Education Systems
- Authors: Yuxin Liu, Zeqing Song, Jiong Lou, Chentao Wu, Jie Li
- Abstract summary: AgentTutor is a multi-turn interactive intelligent education system to empower personalized learning. It features an LLM-powered generative multi-agent system and a learner-specific personalized learning profile environment. It includes five key modules: curriculum decomposition, learner assessment, dynamic strategy, teaching reflection, and knowledge & experience memory.
- Score: 11.202091624300062
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of large language models (LLMs) has shown their potential to transform intelligent education systems (IESs) through automated teaching and learning support applications. However, current IESs often rely on single-turn static question-answering, which fails to assess learners' cognitive levels, cannot adjust teaching strategies based on real-time feedback, and is limited to providing simple one-off responses. To address these issues, we introduce AgentTutor, a multi-turn interactive intelligent education system that empowers personalized learning. It features an LLM-powered generative multi-agent system and a learner-specific personalized learning profile environment that dynamically optimizes and delivers teaching strategies based on learners' learning status, personalized goals, learning preferences, and multimodal study materials. It includes five key modules: curriculum decomposition, learner assessment, dynamic strategy, teaching reflection, and knowledge & experience memory. We conducted extensive experiments on multiple benchmark datasets; AgentTutor significantly enhances learners' performance while demonstrating strong effectiveness in multi-turn interactions and competitive teaching quality relative to other baselines.
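The abstract names five modules but does not detail how they interact. The sketch below is one plausible way to wire them into a multi-turn loop; all class and function names are invented for illustration and do not come from the paper, and the LLM teaching turn is replaced by a fixed stub.

```python
# Hypothetical sketch of a five-module tutoring loop (curriculum decomposition,
# learner assessment, dynamic strategy, teaching reflection, memory), based only
# on the module names in the abstract. All names here are illustrative.
from dataclasses import dataclass, field


@dataclass
class LearnerProfile:
    """Learner-specific personalized learning profile environment."""
    goal: str
    mastery: dict = field(default_factory=dict)       # concept -> score in [0, 1]
    preferences: list = field(default_factory=list)


def decompose_curriculum(goal):
    """Curriculum decomposition: split a learning goal into ordered concepts (stub)."""
    return [f"{goal}: step {i}" for i in range(1, 4)]


def assess_learner(profile, concept):
    """Learner assessment: estimate current mastery of a concept (stub)."""
    return profile.mastery.get(concept, 0.0)


def choose_strategy(mastery):
    """Dynamic strategy: pick a teaching move from the current mastery level."""
    return "scaffolded hints" if mastery < 0.5 else "practice problems"


def reflect(strategy, gain):
    """Teaching reflection: record whether the chosen strategy helped."""
    return {"strategy": strategy, "gain": gain}


def tutor_session(profile, memory):
    """One multi-turn pass over the decomposed curriculum."""
    for concept in decompose_curriculum(profile.goal):
        before = assess_learner(profile, concept)
        strategy = choose_strategy(before)
        # An LLM-driven teaching turn would happen here; we simulate a fixed gain.
        profile.mastery[concept] = min(1.0, before + 0.6)
        memory.append(reflect(strategy, profile.mastery[concept] - before))
    return memory


memory = []  # knowledge & experience memory, persisted across sessions
tutor_session(LearnerProfile(goal="fractions"), memory)
```

In this toy loop, reflection entries accumulated in `memory` are what a real system could mine to revise future strategy choices; here they are only recorded.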
Related papers
- PATS: Personality-Aware Teaching Strategies with Large Language Model Tutors [66.56586559631516]
Large language models (LLMs) have potential as educational tutors, but different tutoring strategies benefit different student personalities. Despite this, current LLM tutoring systems do not take student personality traits into account.
arXiv Detail & Related papers (2026-01-13T10:17:26Z) - UCO: A Multi-Turn Interactive Reinforcement Learning Method for Adaptive Teaching with Large Language Models [59.693733170193944]
Large language models (LLMs) are shifting from answer providers to intelligent tutors in educational settings. Recent reinforcement learning approaches address this limitation but face two critical challenges. We propose the Unidirectional Cognitive Optimization (UCO) method to address these challenges.
arXiv Detail & Related papers (2025-11-12T01:27:02Z) - Unified Reinforcement and Imitation Learning for Vision-Language Models [84.84277196012907]
Vision-Language Models (VLMs) have achieved remarkable progress, yet their large scale often renders them impractical for resource-constrained environments. This paper introduces Unified Reinforcement and Imitation Learning (RIL), a novel and efficient training algorithm designed to create powerful, lightweight VLMs.
arXiv Detail & Related papers (2025-10-22T07:12:14Z) - Adaptive Learning Systems: Personalized Curriculum Design Using LLM-Powered Analytics [14.157213827899342]
Large language models (LLMs) are revolutionizing the field of education by enabling personalized learning experiences tailored to individual student needs. This paper introduces a framework for Adaptive Learning Systems that leverages LLM-powered analytics for personalized curriculum design.
arXiv Detail & Related papers (2025-07-25T04:36:17Z) - AI-Powered Math Tutoring: Platform for Personalized and Adaptive Education [0.0]
We introduce a novel multi-agent AI tutoring platform that combines adaptive and personalized feedback, structured course generation, and textbook knowledge retrieval. This system allows students to learn new topics while identifying and targeting their weaknesses, revise for exams effectively, and practice on an unlimited number of personalized exercises.
arXiv Detail & Related papers (2025-07-14T20:35:16Z) - Investigating Pedagogical Teacher and Student LLM Agents: Genetic Adaptation Meets Retrieval Augmented Generation Across Learning Style [16.985943868964394]
Effective teaching requires adapting instructional strategies to accommodate the diverse cognitive and behavioral profiles of students. This paper introduces a novel simulation framework that integrates heterogeneous student agents with a self-optimizing teacher agent. Our results highlight the potential of LLM-driven simulations to inform adaptive teaching practices and provide a testbed for training human educators in data-driven environments.
arXiv Detail & Related papers (2025-05-25T14:45:35Z) - From Problem-Solving to Teaching Problem-Solving: Aligning LLMs with Pedagogy using Reinforcement Learning [82.50157695987558]
Large language models (LLMs) can transform education, but their optimization for direct question-answering often undermines effective pedagogy. We propose an online reinforcement learning (RL)-based alignment framework that can quickly adapt LLMs into effective tutors.
arXiv Detail & Related papers (2025-05-21T15:00:07Z) - Enhancing tutoring systems by leveraging tailored promptings and domain knowledge with Large Language Models [2.5362697136900563]
AI-driven tools like ChatGPT and Intelligent Tutoring Systems (ITSs) have enhanced learning experiences through personalisation and flexibility. ITSs can adapt to individual learning needs and provide customised feedback based on a student's performance, cognitive state, and learning path. Our research aims to address these gaps by integrating skill-aligned feedback via Retrieval Augmented Generation (RAG) into prompt engineering for Large Language Models (LLMs).
arXiv Detail & Related papers (2025-05-02T02:30:39Z) - Training a Generally Curious Agent [77.61142660542599]
Paprika is a fine-tuning approach that enables language models to develop general decision-making capabilities. Paprika teaches models to explore and adapt their behavior on a new task based on in-context environment feedback, without further gradient updates. Results suggest a promising path towards AI systems that can autonomously solve sequential decision-making problems.
arXiv Detail & Related papers (2025-02-24T18:56:58Z) - LLM-powered Multi-agent Framework for Goal-oriented Learning in Intelligent Tutoring System [54.71619734800526]
GenMentor is a multi-agent framework designed to deliver goal-oriented, personalized learning within ITSs. It maps learners' goals to required skills using a fine-tuned LLM trained on a custom goal-to-skill dataset. GenMentor tailors learning content with an exploration-drafting-integration mechanism to align with individual learner needs.
arXiv Detail & Related papers (2025-01-27T03:29:44Z) - Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs). The system is divided into three interconnected core processes: interaction, reflection, and reaction. Each process is implemented by chaining LLM-powered tools together with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.