ChildBot: Multi-Robot Perception and Interaction with Children
- URL: http://arxiv.org/abs/2008.12818v1
- Date: Fri, 28 Aug 2020 19:07:28 GMT
- Title: ChildBot: Multi-Robot Perception and Interaction with Children
- Authors: Niki Efthymiou, Panagiotis P. Filntisis, Petros Koutras, Antigoni
Tsiami, Jack Hadfield, Gerasimos Potamianos, Petros Maragos
- Abstract summary: We present an integrated robotic system capable of participating in and performing a wide range of educational and entertainment tasks.
ChildBot features multimodal perception modules and multiple robotic agents that monitor the interaction environment.
- Score: 43.08980479118157
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we present an integrated robotic system capable of
participating in and performing a wide range of educational and entertainment
tasks, in collaboration with one or more children. The system, called ChildBot,
features multimodal perception modules and multiple robotic agents that monitor
the interaction environment, and can robustly coordinate complex Child-Robot
Interaction use-cases. In order to validate the effectiveness of the system and
its integrated modules, we have conducted multiple experiments with a total of
52 children. Our results show improved perception capabilities in comparison to
the earlier works on which ChildBot is based. In addition, we have conducted a
preliminary user experience study, employing several educational/entertainment
tasks, which yields encouraging results regarding the technical validity of our
system and initial insights into the user experience with it.
Related papers
- Vocal Sandbox: Continual Learning and Adaptation for Situated Human-Robot Collaboration [64.6107798750142]
Vocal Sandbox is a framework for enabling seamless human-robot collaboration in situated environments.
We design lightweight and interpretable learning algorithms that allow users to build an understanding and co-adapt to a robot's capabilities in real-time.
We evaluate Vocal Sandbox in two settings: collaborative gift bag assembly and LEGO stop-motion animation.
arXiv Detail & Related papers (2024-11-04T20:44:40Z)
- Coherence-Driven Multimodal Safety Dialogue with Active Learning for Embodied Agents [23.960719833886984]
M-CoDAL is a multimodal-dialogue system specifically designed for embodied agents to better understand and communicate in safety-critical situations.
Our approach is evaluated using a newly created multimodal dataset comprising 1K safety violations extracted from 2K Reddit images.
Results on this dataset demonstrate that our approach improves the resolution of safety situations, user sentiment, and the safety of the conversation.
arXiv Detail & Related papers (2024-10-18T03:26:06Z)
- PIMbot: Policy and Incentive Manipulation for Multi-Robot Reinforcement Learning in Social Dilemmas [4.566617428324801]
This paper presents a novel approach, namely PIMbot, to manipulating the reward function in multi-robot collaboration.
By utilizing our proposed PIMbot mechanisms, a robot is able to manipulate the social dilemma environment effectively.
Our work provides insights into how inter-robot communication can be manipulated and has implications for various robotic applications.
arXiv Detail & Related papers (2023-07-29T09:34:45Z)
- Generalizable Human-Robot Collaborative Assembly Using Imitation Learning and Force Control [17.270360447188196]
We present a system for human-robot collaborative assembly using learning from demonstration and pose estimation.
The proposed system is demonstrated using a physical 6 DoF manipulator in a collaborative human-robot assembly scenario.
arXiv Detail & Related papers (2022-12-02T20:35:55Z)
- StoryBuddy: A Human-AI Collaborative Chatbot for Parent-Child Interactive Storytelling with Flexible Parental Involvement [61.47157418485633]
We developed StoryBuddy, an AI-enabled system for parents to create interactive storytelling experiences.
A user study validated StoryBuddy's usability and suggested design insights for future parent-AI collaboration systems.
arXiv Detail & Related papers (2022-02-13T04:53:28Z)
- Lifelong Robotic Reinforcement Learning by Retaining Experiences [61.79346922421323]
Many multi-task reinforcement learning efforts assume the robot can collect data from all tasks at all times.
In this work, we study a realistic sequential multi-task RL problem motivated by the practical constraints of physical robotic systems.
We derive an approach that effectively leverages the data and policies learned for previous tasks to cumulatively grow the robot's skill-set.
arXiv Detail & Related papers (2021-09-19T18:00:51Z)
- MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z)
- Child-Computer Interaction: Recent Works, New Dataset, and Age Detection [6.061943386819384]
ChildCI aims to generate a better understanding of the cognitive and neuromotor development of children while interacting with mobile devices.
In our framework, children interact with a tablet device, using both a pen stylus and their finger, performing different tasks that require different levels of neuromotor and cognitive skills.
ChildCIdb comprises more than 400 children from 18 months to 8 years old, therefore covering the first three development stages of Piaget's theory.
arXiv Detail & Related papers (2021-02-02T09:51:58Z)
- A robot that counts like a child: a developmental model of counting and pointing [69.26619423111092]
A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items and at the same time points to them.
arXiv Detail & Related papers (2020-08-05T21:06:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.