Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions
- URL: http://arxiv.org/abs/2404.03429v1
- Date: Thu, 4 Apr 2024 13:22:28 GMT
- Title: Scaffolding Language Learning via Multi-modal Tutoring Systems with Pedagogical Instructions
- Authors: Zhengyuan Liu, Stella Xin Yin, Carolyn Lee, Nancy F. Chen
- Abstract summary: Intelligent tutoring systems (ITSs) imitate human tutors and aim to provide customized instructions or feedback to learners.
With the emergence of generative artificial intelligence, large language models (LLMs) enable these systems to carry out complex and coherent conversational interactions.
We investigate how pedagogical instructions facilitate the scaffolding in ITSs, by conducting a case study on guiding children to describe images for language learning.
- Score: 34.760230622675365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Intelligent tutoring systems (ITSs) that imitate human tutors and aim to provide immediate and customized instructions or feedback to learners have shown their effectiveness in education. With the emergence of generative artificial intelligence, large language models (LLMs) further enable these systems to carry out complex and coherent conversational interactions. Such systems could be of great help in language education, which centers on developing communication skills yet has drawn relatively little attention. Additionally, because cognitive development at younger ages is complicated, more effort is needed before practical use. Scaffolding refers to a teaching technique in which teachers provide support and guidance to students as they learn and develop new concepts or skills. It is an effective way to support diverse learning needs, goals, processes, and outcomes. In this work, we investigate how pedagogical instructions facilitate scaffolding in ITSs by conducting a case study on guiding children to describe images for language learning. We construct different types of scaffolding tutoring systems grounded in four fundamental learning theories: knowledge construction, inquiry-based learning, dialogic teaching, and the zone of proximal development. For qualitative and quantitative analyses, we build and refine a seven-dimension rubric to evaluate the scaffolding process. In our experiment on GPT-4V, we observe that LLMs demonstrate strong potential to follow pedagogical instructions and achieve self-paced learning across different student groups. Moreover, we extend our evaluation framework from a manual to an automated approach, paving the way to benchmarking various conversational tutoring systems.
Related papers
- BIPED: Pedagogically Informed Tutoring System for ESL Education [11.209992106075788]
Large Language Models (LLMs) have great potential to serve as readily available and cost-efficient Conversational Intelligent Tutoring Systems (CITS).
Existing CITS are designed to teach only simple concepts or lack the pedagogical depth necessary to address diverse learning strategies.
We construct a BIlingual PEDagogically-informed Tutoring dataset of one-on-one, human-to-human English tutoring interactions.
arXiv Detail & Related papers (2024-06-05T17:49:24Z)
- Personality-aware Student Simulation for Conversational Intelligent Tutoring Systems [34.760230622675365]
Intelligent Tutoring Systems (ITSs) can provide personalized and self-paced learning experience.
The emergence of large language models (LLMs) further enables better human-machine interaction.
LLMs can produce diverse student responses according to the given language ability and personality traits.
arXiv Detail & Related papers (2024-04-10T06:03:13Z)
- Enhancing Instructional Quality: Leveraging Computer-Assisted Textual Analysis to Generate In-Depth Insights from Educational Artifacts [13.617709093240231]
We examine how artificial intelligence (AI) and machine learning (ML) methods can analyze educational content, teacher discourse, and student responses to foster instructional improvement.
We identify key areas where AI/ML integration offers significant advantages, including teacher coaching, student support, and content development.
This paper emphasizes the importance of aligning AI/ML technologies with pedagogical goals to realize their full potential in educational settings.
arXiv Detail & Related papers (2024-03-06T18:29:18Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data generated by YODA yields significant performance gains over standard SFT.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Empowering Private Tutoring by Chaining Large Language Models [87.76985829144834]
This work explores the development of a full-fledged intelligent tutoring system powered by state-of-the-art large language models (LLMs).
The system is structured into three interconnected core processes: interaction, reflection, and reaction.
Each process is implemented by chaining LLM-powered tools along with dynamically updated memory modules.
arXiv Detail & Related papers (2023-09-15T02:42:03Z)
- Thinking beyond chatbots' threat to education: Visualizations to elucidate the writing and coding process [0.0]
The landscape of educational practices for teaching and learning languages has been predominantly centered around outcome-driven approaches.
The recent accessibility of large language models has thoroughly disrupted these approaches.
This work presents a new set of visualization tools to summarize the inherent and taught capabilities of a learner's writing or programming process.
arXiv Detail & Related papers (2023-04-25T22:11:29Z)
- Strategize Before Teaching: A Conversational Tutoring System with Pedagogy Self-Distillation [35.11534904787774]
We propose a unified framework that combines teaching response generation and pedagogical strategy prediction.
Our experiments and analyses shed light on how teaching strategies affect dialog tutoring.
arXiv Detail & Related papers (2023-02-27T03:43:25Z)
- Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient optimization based teacher-aware learner who can incorporate teacher's cooperative intention into the likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z)
- Rethinking Supervised Learning and Reinforcement Learning in Task-Oriented Dialogue Systems [58.724629408229205]
We demonstrate how traditional supervised learning and a simulator-free adversarial learning method can be used to achieve performance comparable to state-of-the-art RL-based methods.
Our main goal is not to beat reinforcement learning with supervised learning, but to demonstrate the value of rethinking the role of reinforcement learning and supervised learning in optimizing task-oriented dialogue systems.
arXiv Detail & Related papers (2020-09-21T12:04:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.