Brief but Impactful: How Human Tutoring Interactions Shape Engagement in Online Learning
- URL: http://arxiv.org/abs/2601.09994v1
- Date: Thu, 15 Jan 2026 02:09:53 GMT
- Title: Brief but Impactful: How Human Tutoring Interactions Shape Engagement in Online Learning
- Authors: Conrad Borchers, Ashish Gurung, Qinyi Liu, Danielle R. Thomas, Mohammad Khalil, Kenneth R. Koedinger
- Abstract summary: We study brief human-tutor interactions on Zoom drawn from 2,075 hours of 191 middle school students' classroom math practice. Mixed-effects models reveal that engagement, measured as successful solution steps per minute, is higher during a human-tutor visit. We create analytics that identify which tutor-student dialogues raise engagement the most.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning analytics can guide human tutors to efficiently address motivational barriers to learning that AI systems struggle to support. Students become more engaged when they receive human attention. However, what occurs during short interventions, and when are they most effective? We align student-tutor dialogue transcripts with MATHia tutoring system log data to study brief human-tutor interactions on Zoom drawn from 2,075 hours of 191 middle school students' classroom math practice. Mixed-effects models reveal that engagement, measured as successful solution steps per minute, is higher during a human-tutor visit and remains elevated afterward. Visit length exhibits diminishing returns: engagement rises during and shortly after visits, irrespective of visit length. Timing also matters: later visits yield larger immediate lifts than earlier ones, though an early visit remains important to counteract engagement decline. We create analytics that identify which tutor-student dialogues raise engagement the most. Qualitative analysis reveals that interactions with concrete, stepwise scaffolding and explicit work organization elevate engagement most strongly. We discuss implications for resource-constrained tutoring, prioritizing several brief, well-timed check-ins by a human tutor while ensuring at least one early contact. Our analytics can guide the prioritization of students for support and surface effective tutor moves in real time.
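The abstract's core estimand, the engagement lift (in successful solution steps per minute) associated with a tutor visit, estimated with a per-student random intercept, can be sketched as follows. This is a hypothetical illustration on synthetic data: the variable names, effect size, and data-generating process are invented, and the full mixed-effects model from the paper is approximated here by within-student demeaning, which recovers the same fixed-effect slope for the visit indicator.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, obs_per_student = 40, 30
student = np.repeat(np.arange(n_students), obs_per_student)
during = rng.integers(0, 2, size=student.size).astype(float)  # 1 = tutor visiting
baseline = rng.normal(2.0, 0.5, n_students)                   # per-student baseline rate
steps_per_min = baseline[student] + 0.6 * during + rng.normal(0.0, 0.3, student.size)

# Within-student demeaning removes each student's baseline, so the remaining
# slope estimates the engagement lift associated with a tutor visit
# (the fixed-effects analogue of a random-intercept model).
def demean(x, groups):
    means = np.bincount(groups, weights=x) / np.bincount(groups)
    return x - means[groups]

y = demean(steps_per_min, student)
d = demean(during, student)
visit_effect = (d @ y) / (d @ d)  # OLS slope on the demeaned data
print(f"estimated lift: {visit_effect:.2f} steps/min")
```

With the simulated lift set to 0.6, the estimate lands close to 0.6 steps per minute; the paper's actual models additionally include timing and visit-length terms, which this sketch omits.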
Related papers
- Sticky Help, Bounded Effects: Session-by-Session Analytics of Teacher Interventions in K-12 Classrooms [4.863262234062219]
This study investigates how students' prior help history and their engagement states shape teachers' decisions. We analyzed 1.4 million student-system interactions from 339 students across 14 classes in the MATHia intelligent tutoring system. Help coincided with immediate learning within sessions, but did not predict skill acquisition in later sessions.
arXiv Detail & Related papers (2026-01-20T02:15:01Z) - Decoding Student Minds: Leveraging Conversational Agents for Psychological and Learning Analysis [0.15293427903448018]
This paper presents a psychologically-aware conversational agent designed to enhance both learning performance and emotional well-being in educational settings. The system combines Large Language Models (LLMs), a knowledge graph-enhanced BERT (KG-BERT), and a bidirectional Long Short-Term Memory (LSTM) with attention to classify students' cognitive and affective states in real time.
arXiv Detail & Related papers (2025-12-11T09:06:45Z) - Reducing Procrastination on Programming Assignments via Optional Early Feedback [1.1458853556386799]
We designed an intervention to combat academic procrastination on programming assignments. The intervention consisted of early deadlines that were not worth marks but provided additional automated feedback if students submitted their work early. Our results implied that starting early alone did not improve students' grades; however, starting early and receiving additional feedback improved their grades relative to the rest of the class.
arXiv Detail & Related papers (2025-10-16T19:22:12Z) - Ensembling Large Language Models to Characterize Affective Dynamics in Student-AI Tutor Dialogues [18.497635186707008]
This work introduces the first ensemble-LLM framework for large-scale affect sensing in tutoring dialogues. We analyzed two semesters' worth of 16,986 conversational turns exchanged between PyTutor, an AI tutor, and 261 undergraduate learners across three U.S. institutions.
arXiv Detail & Related papers (2025-10-13T04:43:56Z) - IntrEx: A Dataset for Modeling Engagement in Educational Conversations [7.526860155587907]
IntrEx is the first large dataset annotated for interestingness and expected interestingness in teacher-student interactions. We employ a rigorous annotation process with over 100 second-language learners. We investigate whether large language models (LLMs) can predict human interestingness judgments.
arXiv Detail & Related papers (2025-09-08T13:07:35Z) - Exploring LLMs for Predicting Tutor Strategy and Student Outcomes in Dialogues [48.99818550820575]
Recent studies have shown that strategies used by tutors can have significant effects on student outcomes. Few works have studied predicting tutor strategy in dialogues. We investigate the ability of modern LLMs, particularly Llama 3 and GPT-4o, to predict both future tutor moves and student outcomes in dialogues.
arXiv Detail & Related papers (2025-07-09T14:47:35Z) - Interactive Dialogue Agents via Reinforcement Learning on Hindsight Regenerations [58.65755268815283]
Many real dialogues are interactive, meaning an agent's utterances will influence their conversational partner, elicit information, or change their opinion.
We use this fact to rewrite and augment existing suboptimal data, and train via offline reinforcement learning (RL) an agent that outperforms both prompting and learning from unaltered human demonstrations.
Our results in a user study with real humans show that our approach greatly outperforms existing state-of-the-art dialogue agents.
arXiv Detail & Related papers (2024-11-07T21:37:51Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods on the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets, and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - X-TURING: Towards an Enhanced and Efficient Turing Test for Long-Term Dialogue Agents [56.64615470513102]
The Turing test examines whether AIs exhibit human-like behaviour in natural language conversations. The traditional setting limits each participant to one message at a time and requires constant human participation. This paper proposes X-Turing, which enhances the original test with a burst dialogue pattern.
arXiv Detail & Related papers (2024-08-19T09:57:28Z) - YODA: Teacher-Student Progressive Learning for Language Models [82.0172215948963]
This paper introduces YODA, a teacher-student progressive learning framework.
It emulates the teacher-student education process to improve the efficacy of model fine-tuning.
Experiments show that training LLaMA2 with data from YODA improves SFT with a significant performance gain.
arXiv Detail & Related papers (2024-01-28T14:32:15Z) - MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems [74.73881579517055]
We propose a framework to generate such dialogues by pairing human teachers with a Large Language Model prompted to represent common student errors.
We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues.
arXiv Detail & Related papers (2023-05-23T21:44:56Z) - Opportunities and Challenges in Neural Dialog Tutoring [54.07241332881601]
We rigorously analyze various generative language models on two dialog tutoring datasets for language learning.
We find that although current approaches can model tutoring in constrained learning scenarios, they perform poorly in less constrained scenarios.
Our human quality evaluation shows that both models and ground-truth annotations exhibit low performance in terms of equitable tutoring.
arXiv Detail & Related papers (2023-01-24T11:00:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.