Screen Matters: Cognitive and Behavioral Divergence Between Smartphone-Native and Computer-Native Youth
- URL: http://arxiv.org/abs/2508.03705v1
- Date: Sun, 20 Jul 2025 20:04:44 GMT
- Title: Screen Matters: Cognitive and Behavioral Divergence Between Smartphone-Native and Computer-Native Youth
- Authors: Kanan Eldarov
- Abstract summary: We analyzed data from a diverse sample of 824 students aged 11-17. Results suggest moderate but statistically significant differences in sustained attention, perceived frustration, and creative output.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: This study explores how different modes of digital interaction -- namely, computers versus smartphones -- affect attention, frustration, and creative performance in adolescents. Using a combination of digital task logs, webcam-based gaze estimation, and expert evaluation of task outcomes, we analyzed data from a diverse sample of 824 students aged 11-17. Participants were assigned to device groups in a randomized and stratified design to control for age, gender, and prior experience. Results suggest moderate but statistically significant differences in sustained attention, perceived frustration, and creative output. These findings indicate that the nature of digital interaction -- beyond mere screen time -- may influence cognitive and behavioral outcomes relevant to educational design. Practical implications for user interface development and learning environments are discussed.
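The randomized, stratified assignment described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual procedure; the participant fields and group labels are assumptions for the example.

```python
import random
from collections import defaultdict

def stratified_assign(participants, groups=("computer", "smartphone"), seed=42):
    """Shuffle within each (age, gender, experience) stratum, then
    alternate group labels so every stratum is split evenly."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for p in participants:
        strata[(p["age"], p["gender"], p["experience"])].append(p)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)  # random order inside the stratum
        for i, p in enumerate(members):
            assignment[p["id"]] = groups[i % len(groups)]
    return assignment

# Hypothetical participants; in the study each stratum would be much larger.
participants = [
    {"id": 1, "age": 12, "gender": "f", "experience": "high"},
    {"id": 2, "age": 12, "gender": "f", "experience": "high"},
    {"id": 3, "age": 15, "gender": "m", "experience": "low"},
    {"id": 4, "age": 15, "gender": "m", "experience": "low"},
]
assignment = stratified_assign(participants)
print(assignment)
```

Because assignment alternates inside each stratum, age, gender, and prior experience are balanced across device groups by construction, which is the point of stratified randomization.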
Related papers
- Faces of the Mind: Unveiling Mental Health States Through Facial Expressions in 11,427 Adolescents [6.403533696512409]
Mood disorders such as depression and anxiety often manifest through facial expressions. Existing machine learning algorithms designed to assess these disorders have been hindered by small datasets and limited real-world applicability.
arXiv Detail & Related papers (2024-05-30T14:02:40Z)
- Biometrics and Behavior Analysis for Detecting Distractions in e-Learning [12.49745170391342]
This article explores computer vision approaches to detect abnormal head pose during e-learning sessions.
We propose an approach designed to detect deviations in head posture from the average observed during a learner's session.
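A minimal version of this idea, flagging frames whose head yaw deviates from the session average under a z-score rule, might look like the sketch below. The signal values and threshold are illustrative assumptions, not the paper's method.

```python
import statistics

def flag_deviations(yaw_series, k=2.0):
    """Return indices of frames whose head yaw lies more than k
    standard deviations from the session mean (simple z-score rule)."""
    mean = statistics.fmean(yaw_series)
    sd = statistics.pstdev(yaw_series)
    if sd == 0:
        return []  # no variation in the session, nothing to flag
    return [i for i, yaw in enumerate(yaw_series) if abs(yaw - mean) / sd > k]

yaw = [0.1, -0.2, 0.0, 0.3, -0.1, 5.0, 0.2]  # hypothetical yaw angles; one look-away spike
flags = flag_deviations(yaw)
print(flags)
```

A production system would estimate yaw per frame from the webcam and update the session statistics online rather than in one batch.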
arXiv Detail & Related papers (2024-05-24T11:02:55Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- From Learning Management System to Affective Tutoring system: a preliminary study [0.0]
We analyzed data from two primary sources: digital traces extracted from the Learning Management System (LMS) and images captured by students' webcams.
We observed a correlation between positive emotional states and improved academic outcomes.
arXiv Detail & Related papers (2023-11-09T16:52:44Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- Co-Located Human-Human Interaction Analysis using Nonverbal Cues: A Survey [71.43956423427397]
We aim to identify the nonverbal cues and computational methodologies resulting in effective performance.
This survey differs from its counterparts by involving the widest spectrum of social phenomena and interaction settings.
Some major observations: the most frequently used nonverbal cue is speaking activity, the most common computational method is the support vector machine, and the typical interaction setting is a meeting of 3-4 persons equipped with microphones and cameras.
arXiv Detail & Related papers (2022-07-20T13:37:57Z)
- The world seems different in a social context: a neural network analysis of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
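The precision manipulation this summary refers to is commonly modeled as precision-weighted Gaussian fusion of a prior and a sensory signal. The sketch below is a generic illustration of that mechanism, with all numbers invented for the example; it is not code from the paper.

```python
def posterior_estimate(prior_mean, prior_prec, obs, obs_prec):
    """Precision-weighted fusion of a Gaussian prior and a Gaussian
    sensory observation (precision = 1 / variance)."""
    post_prec = prior_prec + obs_prec
    post_mean = (prior_prec * prior_mean + obs_prec * obs) / post_prec
    return post_mean, post_prec

# High sensory precision: the estimate follows the stimulus.
ind_mean, _ = posterior_estimate(0.0, 1.0, 10.0, 4.0)
# High prior precision (e.g. a social condition): the estimate is
# pulled back toward the prior expectation instead.
soc_mean, _ = posterior_estimate(0.0, 4.0, 10.0, 1.0)
print(ind_mean, soc_mean)
```

Shifting precision between the prior and the observation moves the posterior between expectation-driven and stimulus-driven behavior, which is the kind of manipulation the summary describes.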
arXiv Detail & Related papers (2022-03-03T17:19:12Z)
- Inferring User Facial Affect in Work-like Settings [5.630425653717262]
We aim to infer user facial affect when the user is engaged in multiple work-like tasks under varying difficulty levels.
We first design a study with different conditions and gather multimodal data from 12 subjects.
We then perform several experiments with various machine learning models and find that the display and prediction of facial affect vary from non-working to working settings.
arXiv Detail & Related papers (2021-11-22T01:23:46Z)
- Affect Analysis in-the-wild: Valence-Arousal, Expressions, Action Units and a Unified Framework [83.21732533130846]
The paper focuses on large in-the-wild databases, i.e., Aff-Wild and Aff-Wild2.
It presents the design of two classes of deep neural networks trained with these databases.
A novel multi-task and holistic framework is presented which is able to jointly learn and effectively generalize and perform affect recognition.
arXiv Detail & Related papers (2021-03-29T17:36:20Z)
- Understanding health and behavioral trends of successful students through machine learning models [11.615686353864374]
This study analyzes patterns of physical, mental, lifestyle, and personality factors in college students in different periods over the course of a semester.
The data analyzed was collected through smartphones and Fitbit.
arXiv Detail & Related papers (2021-01-23T17:18:17Z)
- A robot that counts like a child: a developmental model of counting and pointing [69.26619423111092]
A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items and at the same time points to them.
arXiv Detail & Related papers (2020-08-05T21:06:27Z)
- Student Engagement Detection Using Emotion Analysis, Eye Tracking and Head Movement with Machine Learning [0.0]
We present a system to detect the engagement level of the students.
It uses only information provided by the typical built-in web-camera present in a laptop computer.
arXiv Detail & Related papers (2019-09-18T15:46:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.