Measuring Five Accountable Talk Moves to Improve Instruction at Scale
- URL: http://arxiv.org/abs/2311.10749v1
- Date: Thu, 2 Nov 2023 03:04:50 GMT
- Title: Measuring Five Accountable Talk Moves to Improve Instruction at Scale
- Authors: Ashlee Kupor, Candice Morgan, and Dorottya Demszky
- Abstract summary: We fine-tune models to identify five instructional talk moves inspired by accountable talk theory.
We correlate the instructors' use of each talk move with indicators of student engagement and satisfaction.
These results corroborate previous research on the effectiveness of accountable talk moves.
- Score: 1.4549461207028445
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing consistent, individualized feedback to teachers on their
instruction can improve student learning outcomes. Such feedback can especially
benefit novice instructors who teach on online platforms and have limited
access to instructional training. To build scalable measures of instruction, we
fine-tune RoBERTa and GPT models to identify five instructional talk moves
inspired by accountable talk theory: adding on, connecting, eliciting, probing
and revoicing students' ideas. We fine-tune these models on a newly annotated
dataset of 2500 instructor utterances derived from transcripts of small group
instruction in an online computer science course, Code in Place. Although we
find that GPT-3 consistently outperforms RoBERTa in terms of precision, its
recall varies significantly. We correlate the instructors' use of each talk
move with indicators of student engagement and satisfaction, including
students' section attendance, section ratings, and assignment completion rates.
We find that using talk moves generally correlates positively with student
outcomes, and connecting student ideas has the largest positive impact. These
results corroborate previous research on the effectiveness of accountable talk
moves and provide exciting avenues for using these models to provide
instructors with useful, scalable feedback.
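The second half of this pipeline — correlating each instructor's rate of talk-move use with outcomes such as section attendance — amounts to a plain Pearson correlation. Below is a minimal, self-contained sketch of that computation; the per-instructor numbers are purely hypothetical illustrations, not figures from the paper.

```python
def pearson(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-instructor data (NOT from the paper): fraction of an
# instructor's utterances classified as "connecting" moves, and the average
# attendance rate of that instructor's sections.
connecting_rate = [0.02, 0.05, 0.08, 0.11, 0.15]
attendance_rate = [0.60, 0.68, 0.71, 0.80, 0.83]

r = pearson(connecting_rate, attendance_rate)
print(f"correlation between connecting moves and attendance: r = {r:.2f}")
```

In practice the paper's analysis would run over model-predicted talk-move counts per instructor; a library routine such as `scipy.stats.pearsonr` would also report significance, which this sketch omits.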
Related papers
- Leveraging Large Language Models for Actionable Course Evaluation Student Feedback to Lecturers [6.161370712594005]
We analyze 742 student responses spanning 75 courses in a Computer Science department.
For each course, we synthesise a summary of the course evaluations and actionable items for the instructor.
Our work highlights the possibility of using generative AI to produce factual, actionable, and appropriate feedback for teachers in the classroom setting.
arXiv Detail & Related papers (2024-07-01T13:29:55Z)
- Representational Alignment Supports Effective Machine Teaching [81.19197059407121]
We integrate insights from machine teaching and pragmatic communication with the literature on representational alignment.
We design a supervised learning environment that disentangles representational alignment from teacher accuracy.
arXiv Detail & Related papers (2024-06-06T17:48:24Z)
- YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z)
- Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction [3.1473798197405953]
This paper investigates a pedagogical approach of classroom flipping based on flipped interaction in large language models.
Flipped interaction involves using language models to prioritize generating questions instead of answers to prompts.
We propose a workflow to integrate prompt engineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a quiz-prompt-discuss routine.
arXiv Detail & Related papers (2023-11-14T15:48:19Z)
- Instruction-following Evaluation through Verbalizer Manipulation [64.73188776428799]
We propose a novel instruction-following evaluation protocol called verbalizer manipulation.
It instructs the model to verbalize the task label with words aligning with model priors to different extents.
We observe that the instruction-following abilities of models, across different families and scales, are significantly distinguished by their performance on less natural verbalizers.
arXiv Detail & Related papers (2023-07-20T03:54:24Z)
- Can Language Models Teach Weaker Agents? Teacher Explanations Improve Students via Personalization [84.86241161706911]
We show that teacher LLMs can indeed intervene on student reasoning to improve their performance.
We also demonstrate that in multi-turn interactions, teacher explanations generalize: learning from explained data improves student performance on future unexplained data.
We verify that misaligned teachers can lower student performance to random chance by intentionally misleading them.
arXiv Detail & Related papers (2023-06-15T17:27:20Z)
- Is ChatGPT a Good Teacher Coach? Measuring Zero-Shot Performance For Scoring and Providing Actionable Insights on Classroom Instruction [5.948322127194399]
We investigate whether generative AI could become a cost-effective complement to expert feedback by serving as an automated teacher coach.
We propose three teacher coaching tasks for generative AI: (A) scoring transcript segments based on classroom observation instruments, (B) identifying highlights and missed opportunities for good instructional strategies, and (C) providing actionable suggestions for eliciting more student reasoning.
We recruit expert math teachers to evaluate the zero-shot performance of ChatGPT on each of these tasks for elementary classroom math transcripts.
arXiv Detail & Related papers (2023-06-05T17:59:21Z)
- MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z)
- The NCTE Transcripts: A Dataset of Elementary Math Classroom Transcripts [4.931378519409227]
We introduce the largest dataset of mathematics classroom transcripts available to researchers.
The dataset consists of 1,660 45-60 minute long 4th and 5th grade elementary mathematics observations.
The anonymized transcripts represent data from 317 teachers across 4 school districts that serve largely marginalized students.
arXiv Detail & Related papers (2022-11-21T19:00:01Z)
- The TalkMoves Dataset: K-12 Mathematics Lesson Transcripts Annotated for Teacher and Student Discursive Moves [8.090330715662962]
This paper describes the TalkMoves dataset, composed of 567 human-annotated K-12 mathematics lesson transcripts.
The dataset can be used by educators, policymakers, and researchers to understand the nature of teacher and student discourse in K-12 math classrooms.
arXiv Detail & Related papers (2022-04-06T18:12:30Z)
- Iterative Teacher-Aware Learning [136.05341445369265]
In human pedagogy, teachers and students can interact adaptively to maximize communication efficiency.
We propose a gradient-optimization-based teacher-aware learner that incorporates the teacher's cooperative intention into its likelihood function.
arXiv Detail & Related papers (2021-10-01T00:27:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.