Peer attention enhances student learning
- URL: http://arxiv.org/abs/2312.02358v1
- Date: Mon, 4 Dec 2023 21:36:58 GMT
- Title: Peer attention enhances student learning
- Authors: Songlin Xu, Dongyin Hu, Ru Wang, and Xinyu Zhang
- Abstract summary: We show that displaying peer visual attention regions when students watch online course videos enhances their focus and engagement.
Students retain adaptability in following peer attention cues.
The findings also offer insights into designing adaptive online learning interventions that leverage peer attention modelling to optimize student attentiveness and success.
- Score: 12.375142583471678
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Human visual attention is susceptible to social influences. In education,
peer effects impact student learning, but their precise role in modulating
attention remains unclear. Our experiment (N=311) demonstrates that displaying
peer visual attention regions when students watch online course videos enhances
their focus and engagement. However, students retain adaptability in following
peer attention cues. Overall, guided peer attention improves learning
experiences and outcomes. These findings elucidate how peer visual attention
shapes students' gaze patterns, deepening understanding of peer influence on
learning. They also offer insights into designing adaptive online learning
interventions leveraging peer attention modelling to optimize student
attentiveness and success.
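The intervention described above amounts to rendering an aggregate of peers' gaze regions on top of the lecture video. The sketch below is a minimal, hypothetical illustration of that idea (not the authors' implementation; the frame, gaze coordinates, and Gaussian radius are made up):

```python
# Minimal sketch (not the paper's code): overlay an aggregate "peer attention"
# heatmap onto one video frame. Gaze points and blur radius are hypothetical.
import numpy as np

def peer_attention_heatmap(gaze_points, height, width, sigma=40.0):
    """Aggregate peer gaze fixations (x, y in pixels) into a normalized heatmap."""
    ys, xs = np.mgrid[0:height, 0:width]
    heat = np.zeros((height, width), dtype=np.float64)
    for gx, gy in gaze_points:
        heat += np.exp(-((xs - gx) ** 2 + (ys - gy) ** 2) / (2.0 * sigma ** 2))
    return heat / heat.max() if heat.max() > 0 else heat

def overlay(frame_rgb, heatmap, alpha=0.5):
    """Alpha-blend the heatmap (rendered in red) onto an RGB frame."""
    tint = np.zeros_like(frame_rgb, dtype=np.float64)
    tint[..., 0] = 255.0 * heatmap  # red channel carries the attention intensity
    blended = (1 - alpha * heatmap[..., None]) * frame_rgb + alpha * heatmap[..., None] * tint
    return blended.astype(np.uint8)

# Hypothetical usage: a gray 720p frame and three peers' fixations near the slide title.
frame = np.full((720, 1280, 3), 128, dtype=np.uint8)
peer_gaze = [(400, 200), (420, 210), (390, 195)]
shown = overlay(frame, peer_attention_heatmap(peer_gaze, 720, 1280))
print(shown.shape, shown.dtype)
```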
Related papers
- Shifting Focus with HCEye: Exploring the Dynamics of Visual Highlighting and Cognitive Load on User Attention and Saliency Prediction [3.2873782624127834]
This paper examines the joint impact of visual highlighting (permanent and dynamic) and dual-task-induced cognitive load on gaze behaviour.
We show that state-of-the-art saliency models perform better when they account for different cognitive loads.
arXiv Detail & Related papers (2024-04-22T14:45:30Z)
- A Message Passing Perspective on Learning Dynamics of Contrastive Learning [60.217972614379065]
We show that if we cast a contrastive objective equivalently into the feature space, then its learning dynamics admits an interpretable form.
This perspective also establishes an intriguing connection between contrastive learning and Message Passing Graph Neural Networks (MP-GNNs)
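As a small numerical illustration of the feature-space view (not the paper's derivation; dimensions and temperature are arbitrary), the gradient of an InfoNCE-style objective with respect to an anchor feature is an aggregation of softmax-weighted "messages" from the positive and negative samples:

```python
# Minimal numerical sketch: the feature-space gradient of an InfoNCE-style loss
# equals an aggregation of "messages" from positive and negative samples.
import torch

torch.manual_seed(0)
dim, num_neg, tau = 16, 8, 0.2

z = torch.randn(dim, requires_grad=True)            # anchor feature
pos = torch.randn(dim)                              # positive sample feature
negs = torch.randn(num_neg, dim)                    # negative sample features

cands = torch.cat([pos.unsqueeze(0), negs], dim=0)  # candidate set, positive first
sims = cands @ z / tau                              # similarity logits
loss = -torch.log_softmax(sims, dim=0)[0]           # InfoNCE: -log p(positive)
loss.backward()

# Hand-assembled "message passing" form of the same gradient:
# dL/dz = (1/tau) * (sum_k alpha_k * cand_k - pos), with alpha = softmax(sims)
alpha = torch.softmax(sims, dim=0).detach()
message = (alpha.unsqueeze(1) * cands).sum(dim=0)
grad_mp = (message - pos) / tau

print(torch.allclose(z.grad, grad_mp, atol=1e-5))   # True: gradient == aggregated messages
```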
arXiv Detail & Related papers (2023-03-08T08:27:31Z)
- StuArt: Individualized Classroom Observation of Students with Automatic Behavior Recognition and Tracking [22.850362142924975]
StuArt is a novel automatic system designed for individualized classroom observation.
It can recognize five representative student behaviors that are highly related to engagement and track how they vary during the course.
It adopts various user-friendly visualization designs to help instructors quickly understand individual and whole-class learning status.
arXiv Detail & Related papers (2022-11-06T14:08:04Z)
- Real-time Attention Span Tracking in Online Education [0.0]
This paper provides a mechanism that uses the camera feed and microphone input to monitor students' real-time attention levels during online classes.
We propose a system that uses five distinct non-verbal features to calculate a student's attention score during computer-based tasks and generate real-time feedback for both students and the organization.
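The summary does not name the five features, so the following sketch is entirely hypothetical; it only illustrates the general pattern of combining normalized non-verbal signals into a single attention score:

```python
# Entirely hypothetical sketch: combine normalized non-verbal signals into one
# attention score. The feature names and weights are NOT from the paper.
def attention_score(signals, weights=None):
    """signals: dict of feature name -> value in [0, 1]; returns a score in [0, 1]."""
    weights = weights or {name: 1.0 / len(signals) for name in signals}
    return sum(weights[name] * value for name, value in signals.items())

# Example with made-up feature readings for one time window.
window_signals = {
    "gaze_on_screen": 0.9,      # fraction of the window the gaze stayed on screen
    "eye_openness": 0.8,        # eye aspect ratio normalized to [0, 1]
    "head_facing_camera": 0.7,  # head-pose alignment with the display
    "blink_rate_ok": 0.6,       # 1.0 = normal blink rate, lower = drowsy
    "ambient_quiet": 1.0,       # microphone: low background noise
}
print(round(attention_score(window_signals), 3))  # 0.8 for these made-up values
```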
arXiv Detail & Related papers (2021-11-29T17:05:59Z)
- Attention Mechanisms in Computer Vision: A Survey [75.6074182122423]
We provide a comprehensive review of various attention mechanisms in computer vision.
We categorize them according to approach, such as channel attention, spatial attention, temporal attention and branch attention.
We suggest future directions for attention mechanism research.
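As a generic illustration of one surveyed category (not code from the survey itself), a squeeze-and-excitation style channel-attention block looks like this:

```python
# Generic channel-attention (squeeze-and-excitation style) block, shown as one
# example of the surveyed categories.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                     # x: (batch, channels, height, width)
        squeezed = x.mean(dim=(2, 3))         # squeeze: global average pool per channel
        weights = self.fc(squeezed)           # excitation: per-channel gate in (0, 1)
        return x * weights[:, :, None, None]  # reweight channels

feat = torch.randn(2, 32, 8, 8)
print(ChannelAttention(32)(feat).shape)       # torch.Size([2, 32, 8, 8])
```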
arXiv Detail & Related papers (2021-11-15T09:18:40Z)
- Counterfactual Attention Learning for Fine-Grained Visual Categorization and Re-identification [101.49122450005869]
We present a counterfactual attention learning method to learn more effective attention based on causal inference.
Specifically, we analyze the effect of the learned visual attention on network prediction.
We evaluate our method on a wide range of fine-grained recognition tasks.
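One common way to instantiate this counterfactual idea (a sketch under the usual formulation, not necessarily the authors' exact training code) is to compare the prediction made with the learned attention against one made with random "counterfactual" attention, and train both the main prediction and the gap against the labels:

```python
# Sketch of the counterfactual comparison: reward the learned attention for
# beating a random counterfactual one. Feature sizes are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionClassifier(nn.Module):
    def __init__(self, channels=32, num_classes=10):
        super().__init__()
        self.attn = nn.Conv2d(channels, 1, kernel_size=1)  # learned spatial attention
        self.head = nn.Linear(channels, num_classes)

    def pooled(self, feats, attn_map):
        weights = torch.softmax(attn_map.flatten(2), dim=-1)  # (B, 1, H*W)
        return (feats.flatten(2) * weights).sum(dim=-1)       # attention-weighted pooling -> (B, C)

    def forward(self, feats):
        attn = self.attn(feats)                    # factual (learned) attention
        cf_attn = torch.rand_like(attn)            # counterfactual: random attention
        logits = self.head(self.pooled(feats, attn))
        cf_logits = self.head(self.pooled(feats, cf_attn))
        return logits, logits - cf_logits          # "effect" of the learned attention

model = AttentionClassifier()
feats = torch.randn(4, 32, 7, 7)                   # hypothetical backbone features
labels = torch.randint(0, 10, (4,))
logits, effect = model(feats)
loss = F.cross_entropy(logits, labels) + F.cross_entropy(effect, labels)  # main + counterfactual term
loss.backward()
print(float(loss))
```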
arXiv Detail & Related papers (2021-08-19T14:53:40Z)
- The Wits Intelligent Teaching System: Detecting Student Engagement During Lectures Using Convolutional Neural Networks [0.30458514384586394]
The Wits Intelligent Teaching System (WITS) aims to assist lecturers with real-time feedback regarding student affect.
A CNN based on AlexNet is successfully trained and significantly outperforms a Support Vector Machine approach.
arXiv Detail & Related papers (2021-05-28T12:59:37Z)
- Exploring Visual Engagement Signals for Representation Learning [56.962033268934015]
We present VisE, a weakly supervised learning approach that maps social images to pseudo labels derived from clustered engagement signals.
We then study how models trained in this way benefit subjective downstream computer vision tasks such as emotion recognition or political bias detection.
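A rough sketch of the pseudo-labeling step described above (the engagement descriptors below are random stand-ins, not real social-media data):

```python
# Rough sketch: cluster per-image engagement signals and use cluster ids as
# weak training targets in place of human annotations.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Hypothetical engagement descriptor per image, e.g. normalized counts of
# reactions, shares, and comment statistics.
engagement = rng.random((1000, 6))

pseudo_labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(engagement)
print(pseudo_labels.shape, pseudo_labels.min(), pseudo_labels.max())
# These pseudo labels would then supervise an image encoder downstream.
```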
arXiv Detail & Related papers (2021-04-15T20:50:40Z)
- Deep Reinforced Attention Learning for Quality-Aware Visual Recognition [73.15276998621582]
We build upon the weakly-supervised generation mechanism of intermediate attention maps in any convolutional neural network.
We introduce a meta critic network to evaluate the quality of attention maps in the main network.
arXiv Detail & Related papers (2020-07-13T02:44:38Z)
- Does Visual Self-Supervision Improve Learning of Speech Representations for Emotion Recognition? [63.564385139097624]
This work investigates visual self-supervision via face reconstruction to guide the learning of audio representations.
We show that a multi-task combination of the proposed visual and audio self-supervision is beneficial for learning richer features.
We evaluate our learned audio representations for discrete emotion recognition, continuous affect recognition and automatic speech recognition.
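A schematic of such a multi-task combination (not the paper's architecture; the encoder, decoders, and loss weighting are placeholders) could look like this:

```python
# Schematic sketch: an audio encoder trained with a visual self-supervision
# term (reconstructing a face crop from audio) plus an audio self-supervision
# term, combined as a weighted multi-task loss. All components are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    def __init__(self, n_mels=40, dim=128):
        super().__init__()
        self.rnn = nn.GRU(n_mels, dim, batch_first=True)

    def forward(self, mel):            # mel: (B, T, n_mels)
        _, h = self.rnn(mel)
        return h.squeeze(0)            # utterance embedding (B, dim)

encoder = AudioEncoder()
face_decoder = nn.Linear(128, 32 * 32)   # audio embedding -> tiny face crop
audio_decoder = nn.Linear(128, 40)       # embedding -> mean mel frame (stand-in SSL task)

mel = torch.randn(8, 50, 40)             # hypothetical log-mel segments
face = torch.rand(8, 32 * 32)            # matching face crops (flattened)

emb = encoder(mel)
loss_visual = F.l1_loss(face_decoder(emb), face)              # visual self-supervision
loss_audio = F.mse_loss(audio_decoder(emb), mel.mean(dim=1))  # audio self-supervision (stand-in)
loss = loss_visual + 0.5 * loss_audio                         # hypothetical weighting
loss.backward()
print(float(loss))
```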
arXiv Detail & Related papers (2020-05-04T11:33:40Z)
This list is automatically generated from the titles and abstracts of the papers on this site.