Examining the Role of LLM-Driven Interactions on Attention and Cognitive Engagement in Virtual Classrooms
- URL: http://arxiv.org/abs/2505.07377v1
- Date: Mon, 12 May 2025 09:21:19 GMT
- Title: Examining the Role of LLM-Driven Interactions on Attention and Cognitive Engagement in Virtual Classrooms
- Authors: Suleyman Ozdel, Can Sarpkaya, Efe Bozkir, Hong Gao, Enkelejda Kasneci
- Abstract summary: We investigate how peer question-asking behaviors influenced student engagement, attention, cognitive load, and learning outcomes. Our results suggest that peer questions did not directly introduce extraneous cognitive load, as cognitive load was strongly correlated with increased attention to the learning material.
- Score: 9.241265477406078
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transforming educational technologies through the integration of large language models (LLMs) and virtual reality (VR) offers the potential for immersive and interactive learning experiences. However, the effects of LLMs on user engagement and attention in educational environments remain open questions. In this study, we used a fully LLM-driven virtual learning environment, in which both peers and teachers were LLM-driven, to examine how students behaved in such settings. Specifically, we investigated how peer question-asking behaviors influenced student engagement, attention, cognitive load, and learning outcomes. We found that, in conditions where LLM-driven peer learners asked questions, students exhibited more targeted visual scanpaths, with their attention directed toward the learning content, particularly in complex subjects. Our results suggest that peer questions did not directly introduce extraneous cognitive load, as cognitive load was strongly correlated with increased attention to the learning material. Based on these findings, we provide design recommendations for optimizing VR learning spaces.
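As a rough, hypothetical illustration of the kind of analysis the abstract alludes to (relating cognitive load to attention on the learning material), the Python sketch below correlates made-up per-participant cognitive load ratings with the fraction of gaze time spent on the learning content. The measures, the sample values, and the choice of a Pearson correlation are assumptions for illustration, not details taken from the paper.

```python
# Hypothetical sketch: correlating cognitive load with attention to the
# learning content. All values are invented; the paper's actual measures
# and statistics may differ.
import numpy as np
from scipy.stats import pearsonr

# One entry per (hypothetical) participant.
cognitive_load = np.array([42, 55, 61, 38, 70, 65, 48, 58])        # e.g., a NASA-TLX-style rating
attention_on_content = np.array([0.51, 0.63, 0.68, 0.47,
                                 0.74, 0.71, 0.55, 0.66])           # fraction of gaze time on content

r, p = pearsonr(cognitive_load, attention_on_content)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```

A positive correlation in such data would mirror the abstract's interpretation that the reported load reflects engagement with the material rather than extraneous processing.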
Related papers
- Evaluating the Impact of AI-Powered Audiovisual Personalization on Learner Emotion, Focus, and Learning Outcomes [5.753241925582828]
We introduce an AI-powered system that uses LLMs to generate personalized multisensory study environments. Our primary research question asks how combinations of personalized audiovisual elements affect learner cognitive load and engagement. The findings aim to advance emotionally responsive educational technologies and extend the application of multimodal LLMs into the sensory dimension of self-directed learning.
arXiv Detail & Related papers (2025-05-05T21:19:50Z) - Playpen: An Environment for Exploring Learning Through Conversational Interaction [81.67330926729015]
We investigate whether Dialogue Games can also serve as a source of feedback signals for learning. We introduce Playpen, an environment for off- and online learning through Dialogue Game self-play. We find that imitation learning through SFT improves performance on unseen instances, but negatively impacts other skills.
arXiv Detail & Related papers (2025-04-11T14:49:33Z) - The StudyChat Dataset: Student Dialogues With ChatGPT in an Artificial Intelligence Course [2.1485350418225244]
StudyChat is a publicly available dataset capturing real-world student interactions with an LLM-powered tutor. We deploy a web application that replicates ChatGPT's core functionalities and use it to log student interactions with the LLM. We analyze these interactions, highlight behavioral trends, and examine how specific usage patterns relate to course outcomes.
arXiv Detail & Related papers (2025-03-11T00:17:07Z) - INTERACT: Enabling Interactive, Question-Driven Learning in Large Language Models [15.825663946923289]
Large language models (LLMs) excel at answering questions but remain passive learners, absorbing static data without the ability to question and refine knowledge. This paper explores how LLMs can transition to interactive, question-driven learning through student-teacher dialogues.
arXiv Detail & Related papers (2024-12-16T02:28:53Z) - Exploring Knowledge Tracing in Tutor-Student Dialogues using LLMs [49.18567856499736]
We investigate whether large language models (LLMs) can be supportive of open-ended dialogue tutoring. We apply a range of knowledge tracing (KT) methods to the resulting labeled data to track student knowledge levels over an entire dialogue. We conduct experiments on two tutoring dialogue datasets and show that a novel yet simple LLM-based method, LLMKT, significantly outperforms existing KT methods in predicting student response correctness in dialogues.
arXiv Detail & Related papers (2024-09-24T22:31:39Z) - Exploring Engagement and Perceived Learning Outcomes in an Immersive Flipped Learning Context [0.195804735329484]
The aim of this study was to explore the benefits and challenges of the immersive flipped learning approach in relation to students' online engagement and perceived learning outcomes.
The study revealed high levels of student engagement and perceived learning outcomes, although it also identified areas needing improvement.
The findings of this study can serve as a valuable resource for educators seeking to design engaging and effective remote learning experiences.
arXiv Detail & Related papers (2024-09-19T11:38:48Z) - Emotion Based Prediction in the Context of Optimized Trajectory Planning for Immersive Learning [0.0]
In the virtual elements of immersive learning, the study examines the use of Google Expedition and touch-screen-based emotion. Pedagogical application, affordances, and cognitive load are the corresponding measures involved.
arXiv Detail & Related papers (2023-12-18T09:24:35Z) - Impact of Guidance and Interaction Strategies for LLM Use on Learner Performance and Perception [19.335003380399527]
Large language models (LLMs) offer a promising avenue, with increasing research exploring their educational utility.
Our work highlights the role that teachers can play in shaping LLM-supported learning environments.
arXiv Detail & Related papers (2023-10-13T01:21:52Z) - Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
arXiv Detail & Related papers (2021-07-10T19:02:27Z) - Seeing Differently, Acting Similarly: Imitation Learning with Heterogeneous Observations [126.78199124026398]
In many real-world imitation learning tasks, the demonstrator and the learner have to act in different but full observation spaces.
In this work, we model the above learning problem as Heterogeneous Observations Learning (HOIL).
We propose the Importance Weighting with REjection (IWRE) algorithm based on the techniques of importance-weighting, learning with rejection, and active querying to solve the key challenge of occupancy measure matching.
arXiv Detail & Related papers (2021-06-17T05:44:04Z) - Exploring Visual Engagement Signals for Representation Learning [56.962033268934015]
We present VisE, a weakly supervised learning approach, which maps social images to pseudo labels derived from clustered engagement signals.
We then study how models trained in this way benefit subjective downstream computer vision tasks such as emotion recognition or political bias detection.
arXiv Detail & Related papers (2021-04-15T20:50:40Z) - Reinforcement Learning with Videos: Combining Offline Observations with Interaction [151.73346150068866]
Reinforcement learning is a powerful framework for robots to acquire skills from experience.
Videos of humans are a readily available source of broad and interesting experiences.
We propose a framework for reinforcement learning with videos.
arXiv Detail & Related papers (2020-11-12T17:15:48Z) - Knowledge-guided Deep Reinforcement Learning for Interactive Recommendation [49.32287384774351]
Interactive recommendation aims to learn from dynamic interactions between items and users to achieve responsiveness and accuracy.
We propose Knowledge-Guided Deep Reinforcement Learning to harness the advantages of both reinforcement learning and knowledge graphs for interactive recommendation.
arXiv Detail & Related papers (2020-04-17T05:26:47Z)
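To make the general idea of the last entry above concrete, here is a minimal, hypothetical sketch (in Python) of how a knowledge graph can constrain the action space of a Q-learning recommender. It is not the paper's method, which uses deep reinforcement learning; the item graph, the simulated user, and the hyperparameters are all invented assumptions.

```python
# Toy sketch: knowledge-graph-constrained Q-learning for interactive
# recommendation. This is NOT the method proposed in the paper above;
# the item graph, the simulated user, and all hyperparameters are
# invented purely for illustration.
import random
from collections import defaultdict

# Tiny item knowledge graph: item -> related items (e.g., shared attributes).
KG = {
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def user_feedback(item):
    """Simulated user who enjoys items D and E."""
    return 1.0 if item in ("D", "E") else 0.0

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
Q = defaultdict(float)  # Q[(last_item, candidate_item)]

def recommend(last_item):
    """Epsilon-greedy choice among knowledge-graph neighbors of the last item."""
    candidates = KG[last_item]
    if random.random() < EPSILON:
        return random.choice(candidates)
    return max(candidates, key=lambda a: Q[(last_item, a)])

def train(episodes=500, steps=10):
    for _ in range(episodes):
        state = random.choice(list(KG))
        for _ in range(steps):
            action = recommend(state)
            reward = user_feedback(action)
            # Standard Q-learning update, with actions restricted to KG neighbors.
            best_next = max(Q[(action, a)] for a in KG[action])
            Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
            state = action

if __name__ == "__main__":
    random.seed(0)
    train()
    for item in sorted(KG):
        best = max(KG[item], key=lambda a: Q[(item, a)])
        print(f"after consuming {item}, recommend {best}")
```

Restricting candidates to graph neighbors is one simple way a knowledge graph can shape exploration in interactive recommendation; the paper itself operates in a deep RL setting rather than a tabular toy like this.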
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality or accuracy of the listed content (including all information) and is not responsible for any consequences of its use.