Robots and Children that Learn Together: Improving Knowledge Retention by Teaching Peer-Like Interactive Robots
- URL: http://arxiv.org/abs/2506.18365v1
- Date: Mon, 23 Jun 2025 07:51:04 GMT
- Title: Robots and Children that Learn Together: Improving Knowledge Retention by Teaching Peer-Like Interactive Robots
- Authors: Imene Tarakli, Samuele Vinanzi, Richard Moore, Alessandro Di Nuovo
- Abstract summary: This study introduces Interactive Reinforcement Learning as a cognitive model for teachable social robots. Children in the LbT condition achieved significantly higher retention gains compared to those in the self-practice condition. This work makes two contributions: (1) it introduces Interactive RL as a pedagogically effective and scalable model for peer-robot learning, and (2) it demonstrates, for the first time, the feasibility of deploying multiple autonomous robots simultaneously in real classrooms.
- Score: 41.94295877935867
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite growing interest in Learning-by-Teaching (LbT), few studies have explored how this paradigm can be implemented with autonomous, peer-like social robots in real classrooms. Most prior work has relied on scripted or Wizard-of-Oz behaviors, limiting our understanding of how real-time, interactive learning can be supported by artificial agents. This study addresses this gap by introducing Interactive Reinforcement Learning (RL) as a cognitive model for teachable social robots. We conducted two between-subject experiments with 58 primary school children, who either taught a robot or practiced independently on a tablet while learning French vocabulary (memorization) and grammatical rules (inference). The robot, powered by Interactive RL, learned from the child's evaluative feedback. Children in the LbT condition achieved significantly higher retention gains compared to those in the self-practice condition, especially on the grammar task. Learners with lower prior knowledge benefited most from teaching the robot. Behavioural metrics revealed that children adapted their teaching strategies over time and engaged more deeply during inference tasks. This work makes two contributions: (1) it introduces Interactive RL as a pedagogically effective and scalable model for peer-robot learning, and (2) it demonstrates, for the first time, the feasibility of deploying multiple autonomous robots simultaneously in real classrooms. These findings extend theoretical understanding of LbT by showing that social robots can function not only as passive tutees but as adaptive partners that enhance meta-cognitive engagement and long-term learning outcomes.
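To make the Interactive RL setup described in the abstract concrete, below is a minimal illustrative sketch in which the robot's only reward signal is the child's evaluative feedback on its answers. The class, parameters, and vocabulary items are hypothetical assumptions for illustration, not the paper's actual implementation.

```python
import random
from collections import defaultdict

# Minimal sketch of Interactive RL: a tabular agent whose only reward signal
# is the human teacher's evaluative feedback (+1 / -1 / 0).
# All names and parameters are hypothetical, not taken from the paper.

class TeachableAgent:
    def __init__(self, actions, alpha=0.3, gamma=0.9, epsilon=0.2):
        self.q = defaultdict(float)      # Q-values keyed by (state, action)
        self.actions = actions           # e.g. candidate vocabulary answers
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        # Epsilon-greedy choice: mostly exploit what the child has taught so far.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, feedback, next_state):
        # `feedback` is the child's evaluation of the robot's answer (+1, -1, or 0),
        # used directly as the reward in a standard Q-learning update.
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = feedback + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])

# Usage: the robot proposes an answer, the child grades it, the agent updates.
agent = TeachableAgent(actions=["le chat", "la chat", "les chat"])
answer = agent.act(state="translate 'the cat'")
feedback = +1 if answer == "le chat" else -1   # stand-in for the child's button press
agent.learn("translate 'the cat'", answer, feedback, next_state="translate 'the cat'")
```

In this framing the child acts as the reward function, so the robot's visible progress depends entirely on the quality and consistency of the child's teaching, which is what the LbT paradigm exploits.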
Related papers
- SIME: Enhancing Policy Self-Improvement with Modal-level Exploration [49.86173021151849]
We identify the key to successful self-improvement: modal-level exploration and data selection. By incorporating a modal-level exploration mechanism during policy execution, the robot can produce more diverse and multi-modal interactions. We successfully demonstrate effective robot self-improvement on both simulation benchmarks and real-world experiments.
arXiv Detail & Related papers (2025-05-02T17:13:03Z)
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z)
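The SPIRE summary above combines imitation and reinforcement learning. As a rough illustration of that general recipe (not SPIRE's actual system), the sketch below warm-starts a small policy with behaviour cloning on demonstrations and then fine-tunes it with a REINFORCE-style update; all network sizes and data are toy assumptions.

```python
import torch
import torch.nn as nn

# Generic sketch of combining imitation and reinforcement learning:
# behaviour cloning first, then REINFORCE-style fine-tuning.
# Illustrative assumption only, not the cited paper's method.

policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))  # toy sizes
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def bc_step(states, expert_actions):
    # Imitation phase: supervised cross-entropy towards the demonstrated actions.
    loss = nn.functional.cross_entropy(policy(states), expert_actions)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def reinforce_step(states, actions, returns):
    # RL phase: increase the log-probability of actions in proportion to their return.
    log_probs = torch.log_softmax(policy(states), dim=-1)
    chosen = log_probs.gather(1, actions.unsqueeze(1)).squeeze(1)
    loss = -(chosen * returns).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# Usage with dummy data: imitate first, then fine-tune with environment returns.
s = torch.randn(8, 4); a = torch.randint(0, 2, (8,)); ret = torch.randn(8)
bc_step(s, a)
reinforce_step(s, a, ret)
```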
- Human-Robot Mutual Learning through Affective-Linguistic Interaction and Differential Outcomes Training [Pre-Print] [0.3811184252495269]
We test how affective-linguistic communication, in combination with differential outcomes training, affects mutual learning in a human-robot context.
Taking inspiration from child-caregiver dynamics, our human-robot interaction setup consists of a (simulated) robot attempting to learn how best to communicate internal, homeostatically-controlled needs.
arXiv Detail & Related papers (2024-07-01T13:35:08Z)
- Advancing Household Robotics: Deep Interactive Reinforcement Learning for Efficient Training and Enhanced Performance [0.0]
Reinforcement learning, or RL, has emerged as a key robotics technology that enables robots to interact with their environment.
We present a novel method to preserve and reuse information and advice via Deep Interactive Reinforcement Learning.
arXiv Detail & Related papers (2024-05-29T01:46:50Z)
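The entry above describes preserving and reusing human advice in interactive RL. One simple way to illustrate that general idea is a cache of trainer advice that the agent consults before falling back on its learned values, so advice given once does not have to be repeated; this is a hypothetical design, not the cited method.

```python
import random
from collections import defaultdict

# Illustrative sketch of preserving and reusing human advice in interactive RL.
# Hypothetical design for illustration, not the cited paper's system.

class AdviceReuseAgent:
    def __init__(self, actions, reuse_prob=0.8):
        self.q = defaultdict(float)      # learned state-action values
        self.advice = {}                 # state -> advised action, remembered across sessions
        self.actions = actions
        self.reuse_prob = reuse_prob     # how often stored advice overrides the policy

    def give_advice(self, state, action):
        self.advice[state] = action      # store the trainer's suggestion

    def act(self, state):
        if state in self.advice and random.random() < self.reuse_prob:
            return self.advice[state]    # reuse previously given advice
        return max(self.actions, key=lambda a: self.q[(state, a)])

# Usage: the trainer advises once; the agent keeps benefiting on later visits.
agent = AdviceReuseAgent(actions=["wave", "point", "speak"])
agent.give_advice("greeting", "wave")
print(agent.act("greeting"))
```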
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z)
- A Human-Robot Mutual Learning System with Affect-Grounded Language Acquisition and Differential Outcomes Training [0.1812164955222814]
The paper presents a novel human-robot interaction setup for identifying robot homeostatic needs.
We adopted a differential outcomes training (DOT) protocol whereby the robot provides feedback specific to its internal needs.
We found evidence that DOT can enhance the human's learning efficiency, which in turn enables more efficient robot language acquisition.
arXiv Detail & Related papers (2023-10-20T09:41:31Z)
- Mimicking the Maestro: Exploring the Efficacy of a Virtual AI Teacher in Fine Motor Skill Acquisition [3.07176124710244]
Motor skills, especially fine motor skills like handwriting, play an essential role in academic pursuits and everyday life.
Traditional methods to teach these skills, although effective, can be time-consuming and inconsistent.
We introduce an AI teacher model that captures the distinct characteristics of human instructors.
arXiv Detail & Related papers (2023-10-16T11:11:43Z)
- How Do Human Users Teach a Continual Learning Robot in Repeated Interactions? [7.193217430660011]
We take a human-centered approach to continual learning to understand how humans teach continual learning robots over the long term.
We conducted an in-person study with 40 participants who interacted with a continual learning robot in 200 sessions.
An extensive qualitative and quantitative analysis of the data collected in the study shows that there is significant variation among the teaching styles of individual users.
arXiv Detail & Related papers (2023-06-30T20:29:48Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices by learning to both do and undo the task, while simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm can autonomously discover, learn, and adapt interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of varying difficulty.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
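The final entry above relies on intrinsic motivation to drive affordance learning. A common way to realise this in general is a prediction-error ("surprise") bonus that shrinks as the agent's model of action effects improves, pushing exploration towards interactions it has not yet mastered. The sketch below illustrates that generic mechanism under toy assumptions; it is not the cited paper's algorithm.

```python
import random
from collections import defaultdict

# Generic sketch of intrinsic motivation: the agent rewards itself for
# prediction error about the effect of its actions. Illustrative only.

effect_model = defaultdict(lambda: 0.5)   # predicted success probability per (state, action)

def intrinsic_reward(state, action, observed_effect, lr=0.1):
    predicted = effect_model[(state, action)]
    surprise = abs(observed_effect - predicted)          # prediction error = curiosity bonus
    effect_model[(state, action)] += lr * (observed_effect - predicted)
    return surprise

# The bonus shrinks as the effect model improves, so exploration moves elsewhere.
for _ in range(5):
    outcome = 1.0 if random.random() < 0.7 else 0.0       # toy "push the object" outcome
    print(intrinsic_reward("near_object", "push", outcome))
```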
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.