Validating the Effectiveness of a Large Language Model-based Approach for Identifying Children's Development across Various Free Play Settings in Kindergarten
- URL: http://arxiv.org/abs/2505.03369v1
- Date: Tue, 06 May 2025 09:40:47 GMT
- Title: Validating the Effectiveness of a Large Language Model-based Approach for Identifying Children's Development across Various Free Play Settings in Kindergarten
- Authors: Yuanyuan Yang, Yuan Shen, Tianchen Sun, Yangbin Xie
- Abstract summary: This study proposes an innovative approach combining Large Language Models (LLMs) with learning analytics to analyze children's self-narratives of their play experiences. We collected 2,224 play narratives from 29 children in a kindergarten, covering four distinct play areas over one semester.
- Score: 21.44110449320235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Free play is a fundamental aspect of early childhood education, supporting children's cognitive, social, emotional, and motor development. However, assessing children's development during free play poses significant challenges due to the unstructured and spontaneous nature of the activity. Traditional assessment methods often rely on direct observations by teachers, parents, or researchers, which may fail to capture comprehensive insights from free play and provide timely feedback to educators. This study proposes an innovative approach combining Large Language Models (LLMs) with learning analytics to analyze children's self-narratives of their play experiences. The LLM identifies developmental abilities, while performance scores across different play settings are calculated using learning analytics techniques. We collected 2,224 play narratives from 29 children in a kindergarten, covering four distinct play areas over one semester. According to the evaluation results from eight professionals, the LLM-based approach achieved high accuracy in identifying cognitive, motor, and social abilities, with accuracy exceeding 90% in most domains. Moreover, significant differences in developmental outcomes were observed across play settings, highlighting each area's unique contributions to specific abilities. These findings confirm that the proposed approach is effective in identifying children's development across various free play settings. This study demonstrates the potential of integrating LLMs and learning analytics to provide child-centered insights into developmental trajectories, offering educators valuable data to support personalized learning and enhance early childhood education practices.
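The pipeline the abstract describes has two stages: an LLM labels each narrative with the developmental abilities it evidences, and learning-analytics techniques then aggregate those labels into performance scores per play setting. A minimal sketch of that aggregation is below; the keyword-based `identify_abilities` stub stands in for the LLM call, and all names, keywords, and demo narratives are illustrative assumptions, not the paper's actual implementation.

```python
from collections import defaultdict

def identify_abilities(narrative):
    # Placeholder for the LLM step: the paper has an LLM tag each narrative
    # with the abilities it evidences (cognitive, motor, social). A keyword
    # stub stands in here purely for illustration.
    keywords = {
        "cognitive": ["count", "plan", "build"],
        "motor": ["run", "climb", "jump"],
        "social": ["friend", "share", "together"],
    }
    return {ability for ability, words in keywords.items()
            if any(w in narrative.lower() for w in words)}

def ability_scores(narratives_by_area):
    """Learning-analytics step (sketch): for each play area, the fraction
    of narratives in which each ability was identified."""
    scores = {}
    for area, narratives in narratives_by_area.items():
        counts = defaultdict(int)
        for text in narratives:
            for ability in identify_abilities(text):
                counts[ability] += 1
        n = len(narratives)
        scores[area] = {a: c / n for a, c in counts.items()}
    return scores

demo = {
    "block area": ["We planned a tall tower together.",
                   "I climbed on the big blocks."],
    "sand area": ["I shared the shovel with my friend."],
}
print(ability_scores(demo))
```

Comparing such per-area score profiles is one plausible way to surface the "significant differences in developmental outcomes across play settings" the abstract reports.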
Related papers
- Unveiling the Learning Mind of Language Models: A Cognitive Framework and Empirical Study [50.065744358362345]
Large language models (LLMs) have shown impressive capabilities across tasks such as mathematics, coding, and reasoning. Yet their learning ability, which is crucial for adapting to dynamic environments and acquiring new knowledge, remains underexplored.
arXiv Detail & Related papers (2025-06-16T13:24:50Z) - Applying LLM-Powered Virtual Humans to Child Interviews in Child-Centered Design [0.0]
This study establishes key design guidelines for LLM-powered virtual humans tailored to child interviews. Using ChatGPT-based prompt engineering, we developed three distinct Human-AI workflows (LLM-Auto, LLM-Interview, and LLM-Analyze). Results indicated that the LLM-Analyze workflow outperformed the others by eliciting longer responses.
arXiv Detail & Related papers (2025-04-28T17:35:46Z) - Exploring Natural Language-Based Strategies for Efficient Number Learning in Children through Reinforcement Learning [0.0]
This paper investigates how children learn numbers using the framework of reinforcement learning (RL).
By using state of the art deep reinforcement learning models, we simulate and analyze the effects of various forms of language instructions on number acquisition.
arXiv Detail & Related papers (2024-10-10T19:49:13Z) - KidLM: Advancing Language Models for Children -- Early Insights and Future Directions [7.839083566878183]
We introduce a novel user-centric data collection pipeline that involves gathering and validating a corpus specifically written for and sometimes by children.
We propose a new training objective, Stratified Masking, which dynamically adjusts masking probabilities based on our domain-specific child language data.
Experimental evaluations demonstrate that our model excels in understanding lower grade-level text, maintains safety by avoiding stereotypes, and captures children's unique preferences.
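The KidLM blurb describes Stratified Masking as dynamically adjusting masking probabilities based on domain-specific child language data. A hedged sketch of that idea follows: tokens attested in a child-language vocabulary are masked more aggressively than general tokens, biasing the masked-language-modeling objective toward domain terms. The function name, probabilities, and vocabulary are illustrative assumptions, not KidLM's actual implementation.

```python
import random

def stratified_mask(tokens, child_vocab, p_domain=0.3, p_base=0.1, seed=0):
    """Sketch of a stratified-masking-style objective: mask tokens found in
    a child-language vocabulary with probability p_domain, and all other
    tokens with the lower probability p_base."""
    rng = random.Random(seed)  # seeded for reproducibility
    masked = []
    for tok in tokens:
        p = p_domain if tok.lower() in child_vocab else p_base
        masked.append("[MASK]" if rng.random() < p else tok)
    return masked

# Degenerate probabilities make the stratification visible: only the
# domain-specific token is masked.
print(stratified_mask(["the", "dinosaur", "roared"], {"dinosaur"},
                      p_domain=1.0, p_base=0.0))
```

In a real pretraining setup the masked positions would feed a standard MLM loss; only the sampling of which positions to mask changes.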
arXiv Detail & Related papers (2024-10-04T19:35:44Z) - Evaluating and Optimizing Educational Content with Large Language Model Judgments [52.33701672559594]
We use Language Models (LMs) as educational experts to assess the impact of various instructions on learning outcomes.
We introduce an instruction optimization approach in which one LM generates instructional materials using the judgments of another LM as a reward function.
Human teachers' evaluations of these LM-generated worksheets show a significant alignment between the LM judgments and human teacher preferences.
arXiv Detail & Related papers (2024-03-05T09:09:15Z) - Exploring the Cognitive Knowledge Structure of Large Language Models: An Educational Diagnostic Assessment Approach [50.125704610228254]
Large Language Models (LLMs) have not only exhibited exceptional performance across various tasks, but also demonstrated sparks of intelligence.
Recent studies have focused on assessing their capabilities on human exams and revealed their impressive competence in different domains.
We conduct an evaluation using MoocRadar, a meticulously annotated human test dataset based on Bloom's taxonomy.
arXiv Detail & Related papers (2023-10-12T09:55:45Z) - Comparing Machines and Children: Using Developmental Psychology Experiments to Assess the Strengths and Weaknesses of LaMDA Responses [0.02999888908665658]
We adapt classical developmental experiments to evaluate the capabilities of LaMDA, a large language model from Google.
We find that LaMDA generates appropriate responses similar to those of children in experiments involving social understanding.
On the other hand, LaMDA's responses in early object and action understanding, theory of mind, and especially causal reasoning tasks are very different from those of young children.
arXiv Detail & Related papers (2023-05-18T18:15:43Z) - Responsible Active Learning via Human-in-the-loop Peer Study [88.01358655203441]
We propose a responsible active learning method, namely Peer Study Learning (PSL), to simultaneously preserve data privacy and improve model stability.
We first introduce a human-in-the-loop teacher-student architecture to isolate unlabelled data from the task learner (teacher) on the cloud-side.
During training, the task learner instructs the lightweight active learner, which then provides feedback on the active sampling criterion.
arXiv Detail & Related papers (2022-11-24T13:18:27Z) - Causal Imitation Learning with Unobserved Confounders [82.22545916247269]
We study imitation learning when sensory inputs of the learner and the expert differ.
We show that imitation could still be feasible by exploiting quantitative knowledge of the expert trajectories.
arXiv Detail & Related papers (2022-08-12T13:29:53Z) - ChildCI Framework: Analysis of Motor and Cognitive Development in Children-Computer Interaction for Age Detection [4.442846744776511]
This article presents a comprehensive analysis of the different tests proposed in the recent ChildCI framework.
We propose a set of over 100 global features related to motor and cognitive aspects of children's interaction with mobile devices.
Accuracy of over 93% is achieved using the publicly available ChildCIdb_v1 database.
arXiv Detail & Related papers (2022-04-08T18:03:39Z) - Personalized Education in the AI Era: What to Expect Next? [76.37000521334585]
The objective of personalized learning is to design an effective knowledge acquisition track that matches the learner's strengths and bypasses her weaknesses to meet her desired goal.
In recent years, the boost of artificial intelligence (AI) and machine learning (ML) has unfolded novel perspectives to enhance personalized education.
arXiv Detail & Related papers (2021-01-19T12:23:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.