Large Language Model-Driven Classroom Flipping: Empowering
Student-Centric Peer Questioning with Flipped Interaction
- URL: http://arxiv.org/abs/2311.14708v1
- Date: Tue, 14 Nov 2023 15:48:19 GMT
- Title: Large Language Model-Driven Classroom Flipping: Empowering
Student-Centric Peer Questioning with Flipped Interaction
- Authors: Chee Wei Tan
- Abstract summary: This paper investigates a pedagogical approach to classroom flipping based on flipped interaction with large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow that integrates prompt engineering with clicker and JiTT quizzes via a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
- Score: 3.1473798197405953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reciprocal questioning is essential for effective teaching and learning,
fostering active engagement and deeper understanding through collaborative
interactions, especially in large classrooms. Can a large language model (LLM),
such as OpenAI's GPT (Generative Pre-trained Transformer) series, assist in
this? This paper investigates a pedagogical approach to classroom flipping
based on flipped interaction with LLMs. Flipped interaction involves using
language models to prioritize generating questions instead of answers to
prompts. We demonstrate how traditional classroom flipping techniques,
including Peer Instruction and Just-in-Time Teaching (JiTT), can be enhanced
through flipped interaction techniques, creating student-centric questions for
hybrid teaching. In particular, we propose a workflow to integrate prompt
engineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a
quiz-prompt-discuss routine to empower students to self-regulate their learning
capacity and enable teachers to swiftly personalize training pathways. We
develop an LLM-driven chatbot software that digitizes various elements of
classroom flipping and facilitates the assessment of students using these
routines to deliver peer-generated questions. We have applied our LLM-driven
chatbot software for teaching both undergraduate and graduate students from
2020 to 2022, which proved effective in bridging the gap between teachers and
students during remote teaching in the COVID-19 pandemic years. In particular,
LLM-driven classroom flipping can be beneficial in large-class settings,
optimizing teaching pace and enabling engaging classroom experiences.
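
The paper does not reproduce its chatbot implementation here, so the following is only a minimal sketch of the flipped-interaction pattern it describes: the LLM is prompted to ask questions rather than answer them, and the generated questions can then seed a poll-prompt-quiz or quiz-prompt-discuss routine. The model name, prompt wording, and helper function below are illustrative assumptions, not the authors' released code.

```python
# Sketch of "flipped interaction": instead of answering a student's prompt,
# the model is instructed to generate questions that peers answer in a
# poll-prompt-quiz or quiz-prompt-discuss routine. Model choice, prompt
# wording, and quiz format are assumptions for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

FLIPPED_SYSTEM_PROMPT = (
    "You are a teaching assistant running a flipped-interaction session. "
    "Do not answer or explain. Instead, ask the student short conceptual "
    "questions, one at a time, suitable for a classroom clicker poll."
)

def generate_peer_questions(topic: str, n_questions: int = 3) -> str:
    """Ask the LLM to produce peer-quiz questions on a topic (questions, not answers)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": FLIPPED_SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Topic: {topic}\n"
                f"Generate {n_questions} multiple-choice questions with four "
                "options each, but do not reveal which option is correct."
            )},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # Example: seed a poll-prompt-quiz routine with questions on a lecture topic.
    print(generate_peer_questions("Dijkstra's shortest-path algorithm"))
```

In a poll-prompt-quiz routine, questions generated this way would feed a clicker poll, and the aggregated student responses would in turn inform a JiTT quiz before the next class session.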
Related papers
- Exploring Knowledge Tracing in Tutor-Student Dialogues [53.52699766206808]
We present a first attempt at performing knowledge tracing (KT) in tutor-student dialogues.
We propose methods to identify the knowledge components/skills involved in each dialogue turn.
We then apply a range of KT methods on the resulting labeled data to track student knowledge levels over an entire dialogue.
arXiv Detail & Related papers (2024-09-24T22:31:39Z)
- Awaking the Slides: A Tuning-free and Knowledge-regulated AI Tutoring System via Language Model Coordination [52.20542825755132]
We develop Slide2Lecture, a tuning-free and knowledge-regulated intelligent tutoring system.
It can effectively convert an input lecture slide into a structured teaching agenda consisting of a set of heterogeneous teaching actions.
For teachers and developers, Slide2Lecture enables customization to cater to personalized demands.
arXiv Detail & Related papers (2024-09-11T16:03:09Z)
- How Do Students Interact with an LLM-powered Virtual Teaching Assistant in Different Educational Settings? [3.9134031118910264]
Jill Watson, a virtual teaching assistant powered by LLMs, answers student questions and engages them in extended conversations on courseware provided by the instructors.
In this paper, we analyze student interactions with Jill across multiple courses and colleges.
We find that, by supporting a wide range of cognitive demands, Jill encourages students to engage with sophisticated, higher-order cognitive questions.
arXiv Detail & Related papers (2024-07-15T01:22:50Z)
- Investigation of the effectiveness of applying ChatGPT in Dialogic Teaching Using Electroencephalography [6.34494999013996]
Large language models (LLMs) possess the capability to interpret knowledge, answer questions, and consider context.
This research recruited 34 undergraduate students as participants, who were randomly divided into two groups.
The experimental group engaged in dialogic teaching using ChatGPT, while the control group interacted with human teachers.
arXiv Detail & Related papers (2024-03-25T12:23:12Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Large Language Model-based System to Provide Immediate Feedback to Students in Flipped Classroom Preparation Learning [0.0]
This study aimed to solve challenges in the flipped classroom model, such as ensuring that students are emotionally engaged and motivated to learn.
Students often have questions about the content of lecture videos during flipped classroom preparation, but it is difficult for teachers to answer them immediately.
The proposed system was developed using the ChatGPT API on a video-watching support system for preparation learning that is being used in real practice.
arXiv Detail & Related papers (2023-07-21T06:59:53Z)
- UKP-SQuARE: An Interactive Tool for Teaching Question Answering [61.93372227117229]
The exponential growth of question answering (QA) has made it an indispensable topic in any Natural Language Processing (NLP) course.
We introduce UKP-SQuARE as a platform for QA education.
Students can run, compare, and analyze various QA models from different perspectives.
arXiv Detail & Related papers (2023-05-31T11:29:04Z)
- A Neuroscience Approach regarding Student Engagement in the Classes of Microcontrollers during the COVID19 Pandemic [0.0]
Arduino and Raspberry Pi boards are studied in the Microcontrollers course using online simulation environments.
The Emotiv Insight headset is used by the professor during the theoretical and practical hours of the Microcontrollers course.
The approaches used during teaching were inquiry-based learning, game-based learning and personalized learning.
arXiv Detail & Related papers (2021-11-15T16:41:29Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient-optimization-based teacher-aware learner that can incorporate the teacher's cooperative intention into the likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z)
- What Would a Teacher Do? Predicting Future Talk Moves [19.952531500315757]
We introduce a new task, called future talk move prediction (FTMP).
It consists of predicting the next talk move given a conversation history with its corresponding talk moves.
We introduce a neural network model for this task, which outperforms multiple baselines by a large margin.
arXiv Detail & Related papers (2021-06-09T17:45:16Z)
- Neural Multi-Task Learning for Teacher Question Detection in Online Classrooms [50.19997675066203]
We build an end-to-end neural framework that automatically detects questions from teachers' audio recordings.
By incorporating multi-task learning techniques, we are able to strengthen the understanding of semantic relations among different types of questions.
arXiv Detail & Related papers (2020-05-16T02:17:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.