Personality-aware Student Simulation for Conversational Intelligent Tutoring Systems
- URL: http://arxiv.org/abs/2404.06762v1
- Date: Wed, 10 Apr 2024 06:03:13 GMT
- Title: Personality-aware Student Simulation for Conversational Intelligent Tutoring Systems
- Authors: Zhengyuan Liu, Stella Xin Yin, Geyu Lin, Nancy F. Chen
- Abstract summary: Intelligent Tutoring Systems (ITSs) can provide personalized and self-paced learning experiences.
The emergence of large language models (LLMs) further enables better human-machine interaction.
LLMs can produce diverse student responses according to the given language ability and personality traits.
- Score: 34.760230622675365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent Tutoring Systems (ITSs) can provide personalized and self-paced learning experiences. The emergence of large language models (LLMs) further enables better human-machine interaction, and facilitates the development of conversational ITSs in various disciplines such as math and language learning. In dialogic teaching, recognizing and adapting to individual characteristics can significantly enhance student engagement and learning efficiency. However, characterizing and simulating students' personas remain challenging in training and evaluating conversational ITSs. In this work, we propose a framework to construct profiles of different student groups by refining and integrating both cognitive and non-cognitive aspects, and leverage LLMs for personality-aware student simulation in a language learning scenario. We further enhance the framework with multi-aspect validation, and conduct extensive analysis from both teacher and student perspectives. Our experimental results show that state-of-the-art LLMs can produce diverse student responses according to the given language ability and personality traits, and trigger teachers' adaptive scaffolding strategies.
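The framework described in the abstract conditions an LLM on a student profile that combines a cognitive aspect (language ability) with non-cognitive personality traits. The sketch below illustrates the general idea only; the `StudentProfile` fields and the prompt wording are illustrative assumptions, not the paper's actual schema or prompts.

```python
from dataclasses import dataclass


@dataclass
class StudentProfile:
    """Hypothetical student profile combining cognitive and
    non-cognitive aspects, in the spirit of the abstract."""
    language_ability: str  # cognitive aspect, e.g. "beginner"
    personality: dict      # non-cognitive traits, e.g. Big Five levels


def build_simulation_prompt(profile: StudentProfile, tutor_utterance: str) -> str:
    """Compose an LLM prompt that conditions the simulated student
    on the given profile (prompt wording is a placeholder)."""
    traits = ", ".join(f"{k}: {v}" for k, v in profile.personality.items())
    return (
        f"You are a student with {profile.language_ability} language ability.\n"
        f"Personality traits -> {traits}.\n"
        f"Respond to the tutor in character.\n"
        f"Tutor: {tutor_utterance}\nStudent:"
    )


profile = StudentProfile(
    language_ability="beginner",
    personality={"openness": "high", "conscientiousness": "low"},
)
prompt = build_simulation_prompt(profile, "Can you describe this picture?")
print(prompt)
```

In a real system the returned prompt would be sent to an LLM, and the multi-aspect validation step would check the generated response against the profile.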
Related papers
- Beyond Profile: From Surface-Level Facts to Deep Persona Simulation in LLMs [50.0874045899661]
We introduce CharacterBot, a model designed to replicate both the linguistic patterns and distinctive thought processes of a character.
Using Lu Xun as a case study, we propose four training tasks derived from his 17 essay collections.
These include a pre-training task focused on mastering external linguistic structures and knowledge, as well as three fine-tuning tasks.
We evaluate CharacterBot on three tasks for linguistic accuracy and opinion comprehension, demonstrating that it significantly outperforms the baselines on our adapted metrics.
arXiv Detail & Related papers (2025-02-18T16:11:54Z)
- One Size doesn't Fit All: A Personalized Conversational Tutoring Agent for Mathematics Instruction [23.0134120158482]
We propose a PersonAlized Conversational tutoring agEnt (PACE) for mathematics instruction.
PACE simulates students' learning styles based on the Felder and Silverman learning style model, aligning with each student's persona.
To further enhance students' comprehension, PACE employs the Socratic teaching method to provide instant feedback and encourage deep thinking.
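PACE simulates learning styles using the Felder-Silverman model, which characterizes learners along four dimensions. The sketch below shows one plausible way to map style values to instructional adaptations; the dimension names follow the Felder-Silverman model, but the adaptation strings are illustrative placeholders, not PACE's actual strategies.

```python
# Felder-Silverman learning-style dimensions (the model PACE builds on).
FSLSM_DIMENSIONS = {
    "processing": ("active", "reflective"),
    "perception": ("sensing", "intuitive"),
    "input": ("visual", "verbal"),
    "understanding": ("sequential", "global"),
}


def adapt_instruction(style: dict) -> list:
    """Pick one simple adaptation per dimension of a student's style.
    The adaptation texts are hypothetical examples."""
    adaptations = {
        "active": "use interactive problem-solving exercises",
        "reflective": "pause for summarising questions",
        "sensing": "ground concepts in concrete examples",
        "intuitive": "emphasise abstract principles",
        "visual": "include diagrams and plots",
        "verbal": "prefer spoken and written explanation",
        "sequential": "present material step by step",
        "global": "give the big picture first",
    }
    for dim, value in style.items():
        assert value in FSLSM_DIMENSIONS[dim], f"unknown style value: {value}"
    return [adaptations[v] for v in style.values()]


plan = adapt_instruction({"processing": "active", "input": "visual"})
print(plan)
```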
arXiv Detail & Related papers (2025-02-18T08:24:52Z)
- Students Rather Than Experts: A New AI For Education Pipeline To Model More Human-Like And Personalised Early Adolescences [11.576679362717478]
This study focuses on language learning as a context for modeling virtual student agents.
By curating a dataset of personalized teacher-student interactions with various personality traits, we conduct multi-dimensional evaluation experiments.
arXiv Detail & Related papers (2024-10-21T07:18:24Z)
- PersLLM: A Personified Training Approach for Large Language Models [66.16513246245401]
We propose PersLLM, integrating psychology-grounded principles of personality: social practice, consistency, and dynamic development.
We incorporate personality traits directly into the model parameters, enhancing the model's resistance to induction, promoting consistency, and supporting the dynamic evolution of personality.
arXiv Detail & Related papers (2024-07-17T08:13:22Z)
- SPL: A Socratic Playground for Learning Powered by Large Language Model [5.383689446227398]
Socratic Playground for Learning (SPL) is a dialogue-based ITS powered by the GPT-4 model.
SPL aims to enhance personalized and adaptive learning experiences tailored to individual needs.
arXiv Detail & Related papers (2024-06-20T01:18:52Z)
- Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions [34.760230622675365]
Intelligent tutoring systems (ITSs) imitate human tutors and aim to provide customized instructions or feedback to learners.
With the emergence of generative artificial intelligence, large language models (LLMs) enable the systems to engage in complex and coherent conversational interactions.
We investigate how pedagogical instructions facilitate the scaffolding in ITSs, by conducting a case study on guiding children to describe images for language learning.
arXiv Detail & Related papers (2024-04-04T13:22:28Z)
- Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
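The optimization described above uses one LM's judgments as a reward function for another LM's generated materials. A minimal best-of-N sketch of that loop is shown below; the `judge_score` heuristic is a deterministic stand-in for the judging LM (a real system would query a model), and the candidate instructions are invented examples.

```python
def judge_score(material: str) -> float:
    """Placeholder for the judging LM's reward. This toy heuristic
    rewards question-style, concise materials; a real system would
    obtain a rating from a language model instead."""
    score = 1.0 if material.endswith("?") else 0.0
    return score + 1.0 / (1 + len(material.split()))


def optimize_instruction(candidates: list) -> str:
    """Best-of-N instruction optimization: a generator LM would propose
    the candidates; the judge's score acts as the reward function."""
    return max(candidates, key=judge_score)


best = optimize_instruction([
    "Read the chapter.",
    "What pattern do you notice in these three examples?",
])
print(best)
```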
arXiv Detail & Related papers (2024-03-05T09:09:15Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is organized into three inter-connected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
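The chaining pattern described above can be sketched as three functions sharing a dynamically updated memory module. The process names come from the abstract; the function bodies and the `Memory` class are illustrative placeholders (each step would invoke an LLM-powered tool in the actual system).

```python
class Memory:
    """Dynamically updated memory module shared by the chained tools."""
    def __init__(self):
        self.events = []

    def update(self, note: str):
        self.events.append(note)


def interaction(memory: Memory, student_msg: str) -> str:
    """Handle a student turn (an LLM call in the real system)."""
    memory.update(f"student said: {student_msg}")
    return f"tutor replies to: {student_msg}"


def reflection(memory: Memory) -> str:
    """Summarise recent events back into memory."""
    summary = f"{len(memory.events)} events so far"
    memory.update(f"reflection: {summary}")
    return summary


def reaction(memory: Memory) -> str:
    """Adjust the teaching plan based on the latest memory entry."""
    return f"adjust plan based on {memory.events[-1]}"


mem = Memory()
interaction(mem, "I don't understand fractions")
reflection(mem)
print(reaction(mem))
```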
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Adaptive and Personalized Exercise Generation for Online Language Learning [39.28263461783446]
We study a novel task of adaptive and personalized exercise generation for online language learning.
We employ a knowledge tracing model that estimates each student's evolving knowledge states from their learning history.
We train and evaluate our model on real-world learner interaction data from Duolingo.
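Knowledge tracing estimates a probability of skill mastery that is updated after each observed answer. As a concrete illustration, here is the standard Bayesian Knowledge Tracing (BKT) update; this is the classic textbook formulation with example parameter values, not necessarily the model used in the paper above.

```python
def bkt_update(p_know: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.15) -> float:
    """One step of standard Bayesian Knowledge Tracing: compute the
    posterior over mastery given the observed answer, then apply the
    learning transition. Parameter values here are illustrative."""
    if correct:
        posterior = (p_know * (1 - p_slip)) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        posterior = (p_know * p_slip) / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # Learning transition: a non-mastered skill may be acquired this step.
    return posterior + (1 - posterior) * p_learn


p = 0.3  # prior probability the student has mastered the skill
for answer in [True, True, False]:
    p = bkt_update(p, answer)
print(round(p, 3))
```

Each correct answer raises the mastery estimate and each incorrect one lowers it, which is what lets an exercise generator adapt difficulty to the student's evolving state.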
arXiv Detail & Related papers (2023-06-04T20:18:40Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.