Simulating Classroom Education with LLM-Empowered Agents
- URL: http://arxiv.org/abs/2406.19226v2
- Date: Wed, 27 Nov 2024 08:50:24 GMT
- Title: Simulating Classroom Education with LLM-Empowered Agents
- Authors: Zheyuan Zhang, Daniel Zhang-Li, Jifan Yu, Linlu Gong, Jinchang Zhou, Zhanxin Hao, Jianxiao Jiang, Jie Cao, Huiqin Liu, Zhiyuan Liu, Lei Hou, Juanzi Li
- Abstract summary: Large language models (LLMs) have been applied across various intelligent educational tasks to assist teaching.
We propose SimClass, a multi-agent classroom simulation teaching framework.
We recognize representative class roles and introduce a novel class control mechanism for automatic classroom teaching.
- Score: 48.26286735827104
- Abstract: Large language models (LLMs) have been applied across various intelligent educational tasks to assist teaching. While preliminary studies have focused on task-specific, independent LLM-empowered agents, the potential of LLMs within a multi-agent collaborative framework for classroom simulation with real user participation remains unexplored. In this work, we propose SimClass, a multi-agent classroom simulation teaching framework. We recognize representative class roles, introduce a novel class control mechanism for automatic classroom teaching, and conduct user experiments in two real-world courses. Using the Flanders Interactive Analysis System and Community of Inquiry theoretical frameworks from educational analysis, we demonstrate that LLMs can simulate a dynamic learning environment for users with active teacher-student and student-student interactions. We also observe group behaviors among agents in SimClass, where agents collaborate to create enlivening classroom interactions that improve the user's learning process. We hope this work pioneers the application of LLM-empowered multi-agent systems in virtual classroom teaching.
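The abstract's core idea, multiple role-playing agents coordinated by a class control mechanism, can be illustrated with a minimal sketch. The role names, the teacher-first turn rule, and the `respond` stub below are illustrative assumptions, not the paper's actual mechanism; in SimClass each `respond` call would be backed by an LLM.

```python
# Hypothetical sketch of a SimClass-style multi-agent classroom loop.
# Role names and the turn-control rule are assumptions for illustration.

from dataclasses import dataclass, field


@dataclass
class Agent:
    name: str
    role: str  # "teacher" or "student"

    def respond(self, topic: str) -> str:
        # Stand-in for an LLM call that generates an utterance for this role.
        verb = "explains" if self.role == "teacher" else "asks about"
        return f"{self.name} {verb} {topic}"


@dataclass
class ClassController:
    """Toy class control mechanism: decides who speaks, in what order."""
    agents: list
    transcript: list = field(default_factory=list)

    def step(self, topic: str) -> None:
        # Simple control rule: the teacher speaks first, then each student
        # reacts, producing teacher-student and student-student turns.
        ordered = sorted(self.agents, key=lambda a: a.role != "teacher")
        for agent in ordered:
            self.transcript.append(agent.respond(topic))


controller = ClassController([
    Agent("Ms. Lin", "teacher"),
    Agent("Alex", "student"),
    Agent("Sam", "student"),
])
controller.step("binary search")
for line in controller.transcript:
    print(line)
```

A real system would replace the fixed ordering with a learned or prompted controller that also decides *whether* an agent should interject at all.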
Related papers
- MALT: Improving Reasoning with Multi-Agent LLM Training [64.13803241218886]
We present a first step toward "Multi-agent LLM training" (MALT) on reasoning problems.
Our approach employs a sequential multi-agent setup with heterogeneous LLMs assigned specialized roles.
We evaluate our approach across MATH, GSM8k, and CQA, where MALT on Llama 3.1 8B models achieves relative improvements of 14.14%, 7.12%, and 9.40% respectively.
arXiv Detail & Related papers (2024-12-02T19:30:36Z)
- Students Rather Than Experts: A New AI For Education Pipeline To Model More Human-Like And Personalised Early Adolescences [11.576679362717478]
This study focuses on language learning as a context for modeling virtual student agents.
By curating a dataset of personalized teacher-student interactions with various personality traits, we conduct multi-dimensional evaluation experiments.
arXiv Detail & Related papers (2024-10-21T07:18:24Z)
- ConML: A Universal Meta-Learning Framework with Task-Level Contrastive Learning [49.447777286862994]
ConML is a universal meta-learning framework that can be applied to various meta-learning algorithms.
We demonstrate that ConML integrates seamlessly with optimization-based, metric-based, and amortization-based meta-learning algorithms.
arXiv Detail & Related papers (2024-10-08T12:22:10Z)
- Synergistic Simulations: Multi-Agent Problem Solving with Large Language Models [36.571597246832326]
Large Language Models (LLMs) have increasingly demonstrated the ability to facilitate the development of multi-agent systems.
This paper aims to integrate agent and world interaction into a single simulation where multiple agents can work together to solve a problem.
We implement two simulations: a physical studio apartment with two roommates, and another where agents collaborate to complete a programming task.
arXiv Detail & Related papers (2024-09-14T21:53:35Z)
- MathVC: An LLM-Simulated Multi-Character Virtual Classroom for Mathematics Education [18.449515431619837]
Large language models (LLMs) have recently demonstrated strong capability in both modeling mathematical problems and simulating characters.
We present MATHVC, the very first LLM-powered virtual classroom containing multiple LLM-simulated student characters.
We propose three innovations: integrating MM domain knowledge into the simulation, defining a symbolic schema as the ground for character simulation, and designing a meta planner at the platform level to drive the conversational procedure.
arXiv Detail & Related papers (2024-04-10T03:35:51Z)
- ST-LLM: Large Language Models Are Effective Temporal Learners [58.79456373423189]
Large Language Models (LLMs) have showcased impressive capabilities in text comprehension and generation.
How to effectively encode and understand videos in video-based dialogue systems remains to be solved.
We propose ST-LLM, an effective video-LLM baseline with spatial-temporal sequence modeling inside LLM.
arXiv Detail & Related papers (2024-03-30T10:11:26Z)
- Experiential Co-Learning of Software-Developing Agents [83.34027623428096]
Large language models (LLMs) have brought significant changes to various domains, especially in software development.
We introduce Experiential Co-Learning, a novel LLM-agent learning framework.
Experiments demonstrate that the framework enables agents to tackle unseen software-developing tasks more effectively.
arXiv Detail & Related papers (2023-12-28T13:50:42Z)
- CGMI: Configurable General Multi-Agent Interaction Framework [0.0]
The Configurable General Multi-Agent Interaction (CGMI) framework is designed to replicate human interactions in real-world scenarios.
We propose a tree-structured methodology for the assignment, detection, and maintenance of agent personality.
We have also integrated general agents to augment the virtual environment's realism.
arXiv Detail & Related papers (2023-08-24T02:03:29Z)
- AgentBench: Evaluating LLMs as Agents [88.45506148281379]
Large Language Models (LLMs) are becoming increasingly smart and autonomous, targeting real-world pragmatic missions beyond traditional NLP tasks.
We present AgentBench, a benchmark that currently consists of 8 distinct environments to assess LLM-as-Agent's reasoning and decision-making abilities.
arXiv Detail & Related papers (2023-08-07T16:08:11Z)
- Parallel Knowledge Transfer in Multi-Agent Reinforcement Learning [0.2538209532048867]
This paper proposes a novel knowledge transfer framework in MARL, PAT (Parallel Attentional Transfer).
We design two acting modes in PAT, student mode and self-learning mode.
When agents are unfamiliar with the environment, the shared attention mechanism in student mode effectively selects learning knowledge from other agents to decide agents' actions.
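The student-mode idea described above, an agent attending over peers and blending their advice, can be sketched as attention-weighted action selection. The dot-product scoring, the vector shapes, and the function names below are assumptions for demonstration, not PAT's exact architecture.

```python
# Illustrative sketch of PAT-style student-mode knowledge selection.
# The attention scoring rule and vector shapes are assumptions, not
# the paper's exact architecture.

import math


def softmax(scores):
    """Numerically stable softmax over a list of scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]


def student_mode_action(obs, peers):
    """Blend peer-advised actions by attention weight.

    obs   : the learning agent's observation vector
    peers : list of (key_vector, advised_action_vector) from other agents
    """
    # Attention score = dot product between the observation and each
    # peer's key; higher score means that peer's experience is more
    # relevant to the current situation.
    scores = [sum(o * k for o, k in zip(obs, key)) for key, _ in peers]
    weights = softmax(scores)
    # Weighted sum of the peers' advised actions.
    dim = len(peers[0][1])
    return [sum(w * act[i] for w, (_, act) in zip(weights, peers))
            for i in range(dim)]


advice = [([1.0, 0.0], [1.0, 0.0]),   # peer aligned with the observation
          ([0.0, 1.0], [0.0, 1.0])]   # peer focused on a different state
action = student_mode_action([2.0, 0.0], advice)
# The aligned peer receives most of the attention mass, so the blended
# action leans toward its advice.
```

In self-learning mode the agent would instead ignore `peers` and act from its own policy; the shared attention lets each agent decide how much to rely on others as it becomes familiar with the environment.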
arXiv Detail & Related papers (2020-03-29T17:42:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.