XR Design Framework for Early Childhood Education
- URL: http://arxiv.org/abs/2601.18979v1
- Date: Mon, 26 Jan 2026 21:32:35 GMT
- Title: XR Design Framework for Early Childhood Education
- Authors: Supriya Khadka, Sanchari Das
- Abstract summary: Extended Reality in early childhood education presents high-risk challenges due to children's rapid developmental changes. While augmented and virtual reality offer immersive pedagogical benefits, they often impose excessive cognitive load or sensory conflict. We introduce the Augmented Human Development framework to model these interactions through cognitive, sensory, environmental, and developmental parameters.
- Score: 9.133320151595084
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Extended Reality in early childhood education presents high-risk challenges due to children's rapid developmental changes. While augmented and virtual reality offer immersive pedagogical benefits, they often impose excessive cognitive load or sensory conflict. We introduce the Augmented Human Development (AHD) framework to model these interactions through cognitive, sensory, environmental, and developmental parameters. To ground this framework, we conducted a Systematization of Knowledge (SoK) of 111 peer-reviewed studies involving children aged 3-8. Our findings, interpreted through the AHD lens, reveal a critical "risk vs. attention gap," where high-impact safety and security risks remain under-researched compared to short-term pedagogical gains.
Related papers
- CASTLE: A Comprehensive Benchmark for Evaluating Student-Tailored Personalized Safety in Large Language Models [55.0103764229311]
We propose the concept of Student-Tailored Personalized Safety and construct CASTLE based on educational theories. This benchmark covers 15 educational safety risks and 14 student attributes, comprising 92,908 bilingual scenarios.
arXiv Detail & Related papers (2026-02-05T13:13:19Z)
- The Missing Half: Unveiling Training-time Implicit Safety Risks Beyond Deployment [148.80266237240713]
Implicit training-time safety risks are driven by a model's internal incentives and contextual background information. We present the first systematic study of this problem, introducing a taxonomy with five risk levels, ten fine-grained risk categories, and three incentive types. Our results identify an overlooked yet urgent safety challenge in training.
arXiv Detail & Related papers (2026-02-04T04:23:58Z)
- Responsible Diffusion: A Comprehensive Survey on Safety, Ethics, and Trust in Diffusion Models [69.22690439422531]
Diffusion models (DMs) have been investigated in various domains due to their ability to generate high-quality data. As with traditional deep learning systems, DMs face potential threats. This survey comprehensively elucidates their framework, threats, and countermeasures.
arXiv Detail & Related papers (2025-09-25T02:51:43Z)
- Deep Learning Based Approach to Enhanced Recognition of Emotions and Behavioral Patterns of Autistic Children [0.0]
This study aims to establish a baseline understanding of the unique needs and challenges faced by autistic students. By shifting the focus toward early identification of behavioral patterns, we aim to foster a more inclusive and supportive learning environment.
arXiv Detail & Related papers (2025-08-29T05:50:47Z)
- SproutBench: A Benchmark for Safe and Ethical Large Language Models for Youth [14.569766143989531]
The rapid proliferation of large language models (LLMs) in applications targeting children and adolescents necessitates a fundamental reassessment of prevailing AI safety frameworks. This paper highlights key deficiencies in existing LLM safety benchmarks, including their inadequate coverage of age-specific cognitive, emotional, and social risks. We introduce SproutBench, an innovative evaluation suite comprising 1,283 developmentally grounded adversarial prompts designed to probe risks such as emotional dependency, privacy violations, and imitation of hazardous behaviors.
arXiv Detail & Related papers (2025-08-14T18:21:39Z)
- Towards Trustworthy Retrieval Augmented Generation for Large Language Models: A Survey [92.36487127683053]
Retrieval-Augmented Generation (RAG) is an advanced technique designed to address the challenges of Artificial Intelligence-Generated Content (AIGC). RAG provides reliable and up-to-date external knowledge, reduces hallucinations, and ensures relevant context across a wide range of tasks. Despite RAG's success and potential, recent studies have shown that the RAG paradigm also introduces new risks, including privacy concerns, adversarial attacks, and accountability issues.
arXiv Detail & Related papers (2025-02-08T06:50:47Z)
- Enhancing Autism Spectrum Disorder Early Detection with the Parent-Child Dyads Block-Play Protocol and an Attention-enhanced GCN-xLSTM Hybrid Deep Learning Framework [6.785167067600156]
This work proposes a novel Parent-Child Dyads Block-Play (PCB) protocol to identify behavioral patterns distinguishing ASD from typically developing toddlers.
We have compiled a substantial video dataset, featuring 40 ASD and 89 TD toddlers engaged in block play with parents.
This dataset exceeds previous efforts on both the scale of participants and the length of individual sessions.
arXiv Detail & Related papers (2024-08-29T21:53:01Z)
- SYNCS: Synthetic Data and Contrastive Self-Supervised Training for Central Sulcus Segmentation [0.09208007322096533]
The Danish High Risk and Resilience Study (VIA) focuses on understanding early disease processes, particularly in children with familial high risk (FHR).
The central sulcus (CS) is a prominent brain landmark related to brain regions involved in motor and sensory processing.
This study introduces two novel approaches to improve CS segmentation: synthetic data generation to model CS variability and self-supervised pre-training with multi-task learning to adapt models to new cohorts.
arXiv Detail & Related papers (2024-03-22T11:24:31Z)
- Risks of AI Scientists: Prioritizing Safeguarding Over Autonomy [65.77763092833348]
This perspective examines vulnerabilities in AI scientists, shedding light on potential risks associated with their misuse. We take into account user intent, the specific scientific domain, and their potential impact on the external environment. We propose a triadic framework involving human regulation, agent alignment, and an understanding of environmental feedback.
arXiv Detail & Related papers (2024-02-06T18:54:07Z)
- Assessing the Spatial Structure of the Association between Attendance at Preschool and Children's Developmental Vulnerabilities in Queensland, Australia [0.0]
The research explores the influence of preschool attendance on the development of children during their first year of school.
Using data collected by the Australian Early Development Census, the findings show that areas with high proportions of preschool attendance tended to have lower proportions of children with at least one developmental vulnerability.
arXiv Detail & Related papers (2023-05-25T05:52:05Z)
- Towards Safer Generative Language Models: A Survey on Safety Risks, Evaluations, and Improvements [76.80453043969209]
This survey presents a framework for safety research pertaining to large models.
We begin by introducing safety issues of wide concern, then delve into safety evaluation methods for large models.
We explore the strategies for enhancing large model safety from training to deployment.
arXiv Detail & Related papers (2023-02-18T09:32:55Z)
- NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.