Large Language Model-Driven Classroom Flipping: Empowering
Student-Centric Peer Questioning with Flipped Interaction
- URL: http://arxiv.org/abs/2311.14708v1
- Date: Tue, 14 Nov 2023 15:48:19 GMT
- Authors: Chee Wei Tan
- Abstract summary: This paper investigates a pedagogical approach to classroom flipping based on flipped interaction with large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow to integrate prompt engineering with clicker and JiTT quizzes via a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
- Score: 3.1473798197405953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reciprocal questioning is essential for effective teaching and learning,
fostering active engagement and deeper understanding through collaborative
interactions, especially in large classrooms. Can large language models (LLMs),
such as OpenAI's GPT (Generative Pre-trained Transformer) series, assist in
this? This paper investigates a pedagogical approach to classroom flipping
based on flipped interaction with LLMs. Flipped interaction involves using
language models to prioritize generating questions instead of answers to
prompts. We demonstrate how traditional classroom flipping techniques,
including Peer Instruction and Just-in-Time Teaching (JiTT), can be enhanced
through flipped interaction techniques, creating student-centric questions for
hybrid teaching. In particular, we propose a workflow that integrates prompt
engineering with clicker and JiTT quizzes via a poll-prompt-quiz routine and a
quiz-prompt-discuss routine to empower students to self-regulate their learning
capacity and enable teachers to swiftly personalize training pathways. We
develop an LLM-driven chatbot software that digitizes various elements of
classroom flipping and facilitates the assessment of students using these
routines to deliver peer-generated questions. We applied our LLM-driven
chatbot software in teaching both undergraduate and graduate students from
2020 to 2022, where it proved effective in bridging the gap between teachers
and students during remote teaching in the COVID-19 pandemic years.
LLM-driven classroom flipping can be particularly beneficial in large class
settings, optimizing teaching pace and enabling engaging classroom experiences.
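The flipped-interaction pattern and the poll-prompt-quiz routine described above can be sketched in code. The following Python snippet is a minimal illustration only: the prompt wording, function names, and the dictionary shape of the routine's output are assumptions for exposition, not the authors' actual implementation.

```python
# Minimal sketch of flipped interaction: instead of asking the LLM to answer,
# the prompt instructs it to generate questions for the student.
# All names and prompt text here are illustrative assumptions.

def flipped_interaction_prompt(topic: str, n_questions: int = 3) -> str:
    """Build a prompt that asks the model to generate questions, not answers."""
    return (
        f"You are a tutor. Do not answer or explain. "
        f"Instead, ask me {n_questions} probing questions about {topic}, "
        f"one at a time, waiting for my response before the next."
    )

def poll_prompt_quiz(poll_topic: str, student_responses: list) -> dict:
    """Sketch of the poll-prompt-quiz routine: poll the class, fold the
    responses into a flipped-interaction prompt, and hand the result to
    the quiz stage."""
    prompt = flipped_interaction_prompt(poll_topic)
    context = "\n".join(f"- {r}" for r in student_responses)
    return {
        "stage": "quiz",
        "llm_prompt": f"{prompt}\nClass poll responses:\n{context}",
    }

routine = poll_prompt_quiz("recursion", ["base case unclear", "stack depth?"])
print(routine["stage"])
```

In practice the `llm_prompt` string would be sent to a chat-completion endpoint; the sketch stops at prompt construction so the routine stays model-agnostic.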
Related papers
- INTERACT: Enabling Interactive, Question-Driven Learning in Large Language Models [15.825663946923289]
Large language models (LLMs) excel at answering questions but remain passive learners--absorbing static data without the ability to question and refine knowledge.
This paper explores how LLMs can transition to interactive, question-driven learning through student-teacher dialogues.
arXiv Detail & Related papers (2024-12-16T02:28:53Z) - Oversight in Action: Experiences with Instructor-Moderated LLM Responses in an Online Discussion Forum [2.86800540498016]
This paper presents the design, deployment, and evaluation of a 'bot' module that is controlled by the instructor.
The bot generates draft responses to student questions, which are reviewed, modified, and approved before release.
We report our experiences using this tool in a 12-week second-year software engineering course on object-oriented programming.
arXiv Detail & Related papers (2024-12-12T08:17:33Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can support open-ended dialogue tutoring.
We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
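Knowledge tracing, as used in this line of work, estimates a student's evolving mastery of a skill from the correctness of their responses. A minimal Bayesian Knowledge Tracing (BKT) update, a classical KT baseline rather than the paper's LLMKT method, might look like the sketch below; the parameter values are illustrative.

```python
# Classical Bayesian Knowledge Tracing (BKT) update -- a standard KT baseline,
# not the LLMKT method from the paper. Parameter values are illustrative.

def bkt_update(p_mastery: float, correct: bool,
               p_slip: float = 0.1, p_guess: float = 0.2,
               p_learn: float = 0.3) -> float:
    """Posterior mastery after one observed response, then a learning step."""
    if correct:
        num = p_mastery * (1 - p_slip)
        den = num + (1 - p_mastery) * p_guess
    else:
        num = p_mastery * p_slip
        den = num + (1 - p_mastery) * (1 - p_guess)
    posterior = num / den
    # Opportunity to learn the skill between responses.
    return posterior + (1 - posterior) * p_learn

# Track mastery across a short dialogue: two correct answers, then a miss.
p = 0.4
for outcome in [True, True, False]:
    p = bkt_update(p, outcome)
print(round(p, 3))  # -> 0.855
```

LLMKT replaces these hand-set slip/guess/learn parameters with LLM-derived estimates from the dialogue itself; the baseline above only shows what "tracking student knowledge levels over a dialogue" means mechanically.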
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - Awaking the Slides: A Tuning-free and Knowledge-regulated AI Tutoring System via Language Model Coordination [52.20542825755132]
We develop Slide2Lecture, a tuning-free and knowledge-regulated intelligent tutoring system.
It can effectively convert an input lecture slide into a structured teaching agenda consisting of a set of heterogeneous teaching actions.
For teachers and developers, Slide2Lecture enables customization to cater to personalized demands.
arXiv Detail & Related papers (2024-09-11T16:03:09Z) - How Do Students Interact with an LLM-powered Virtual Teaching Assistant in Different Educational Settings? [3.9134031118910264]
Jill Watson, a virtual teaching assistant powered by LLMs, answers student questions and engages them in extended conversations on courseware provided by the instructors.
In this paper, we analyze student interactions with Jill across multiple courses and colleges.
We find that, by supporting a wide range of cognitive demands, Jill encourages students to engage in sophisticated, higher-order cognitive questions.
arXiv Detail & Related papers (2024-07-15T01:22:50Z) - Investigation of the effectiveness of applying ChatGPT in Dialogic Teaching Using Electroencephalography [6.34494999013996]
Large language models (LLMs) possess the capability to interpret knowledge, answer questions, and consider context.
This research recruited 34 undergraduate students as participants, who were randomly divided into two groups.
The experimental group engaged in dialogic teaching using ChatGPT, while the control group interacted with human teachers.
arXiv Detail & Related papers (2024-03-25T12:23:12Z) - YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z) - Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient optimization based teacher-aware learner who can incorporate teacher's cooperative intention into the likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z) - What Would a Teacher Do? Predicting Future Talk Moves [19.952531500315757]
We introduce a new task, called future talk move prediction (FTMP)
It consists of predicting the next talk move given a conversation history with its corresponding talk moves.
We introduce a neural network model for this task, which outperforms multiple baselines by a large margin.
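The FTMP task is next-label prediction over a sequence of talk moves. As a purely illustrative setup (the paper itself uses a neural model), a toy bigram frequency predictor over talk-move sequences captures the task shape:

```python
# Toy next-talk-move predictor: a bigram frequency model over talk-move
# sequences. Illustrates the FTMP task setup only; the paper's model is neural.
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count which talk move follows which across training conversations."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for prev, nxt in zip(seq, seq[1:]):
            counts[prev][nxt] += 1
    return counts

def predict_next(counts, last_move):
    """Return the move most frequently observed after last_move."""
    if last_move not in counts:
        return None
    return counts[last_move].most_common(1)[0][0]

# Hypothetical talk-move histories from three short exchanges.
history = [
    ["question", "answer", "evaluation"],
    ["question", "answer", "elaboration"],
    ["question", "answer", "evaluation"],
]
model = train_bigram(history)
print(predict_next(model, "answer"))  # -> evaluation
```

A neural FTMP model conditions on the full conversation history rather than just the last move, which is what lets it beat frequency baselines by a large margin.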
arXiv Detail & Related papers (2021-06-09T17:45:16Z) - Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.