A robot that counts like a child: a developmental model of counting and
pointing
- URL: http://arxiv.org/abs/2008.02366v2
- Date: Mon, 2 Nov 2020 23:48:30 GMT
- Title: A robot that counts like a child: a developmental model of counting and
pointing
- Authors: Leszek Pecyna, Angelo Cangelosi, Alessandro Di Nuovo
- Abstract summary: A novel neuro-robotics model capable of counting real items is introduced.
The model allows us to investigate the interaction between embodiment and numerical cognition.
The trained model is able to count a set of items and, at the same time, point to them.
- Score: 69.26619423111092
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, a novel neuro-robotics model capable of counting real items is
introduced. The model allows us to investigate the interaction between
embodiment and numerical cognition. The model is composed of a deep neural network
capable of image processing and sequential task performance, and of a robotic
platform providing the embodiment - the iCub humanoid robot. The network is
trained using images from the robot's cameras and proprioceptive signals from
its joints. The trained model is able to count a set of items and, at the same
time, point to them. We investigate the influence of pointing on the counting
process and compare our results with those from studies with children. Several
training approaches are presented in this paper; all of them use a pre-training
routine that allows the network to gain the abilities of pointing and number
recitation (from 1 to 10) prior to counting training. The impact of the counted
set size and of the distance to the objects is investigated. The obtained results on
counting performance show similarities with those from human studies.
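The abstract describes the architecture only at a high level: a deep network that takes camera images and proprioceptive joint signals and sequentially produces number words and pointing movements. The sketch below is a minimal, hypothetical PyTorch illustration of such an image-plus-proprioception recurrent model; the class name `CountingPointingNet`, the layer sizes, and the two output heads are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CountingPointingNet(nn.Module):
    """Hypothetical sketch: a CNN encodes the camera image, proprioceptive
    joint angles are concatenated, and an LSTM emits, at every step,
    a number word (1-10 plus a "stop" class) and a pointing command."""

    def __init__(self, n_joints=7, n_number_words=11, hidden=256):
        super().__init__()
        self.vision = nn.Sequential(                 # image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, 128),
        )
        self.rnn = nn.LSTM(128 + n_joints, hidden, batch_first=True)
        self.number_head = nn.Linear(hidden, n_number_words)   # recite "one"..."ten" / stop
        self.pointing_head = nn.Linear(hidden, n_joints)        # next arm configuration

    def forward(self, images, joints):
        # images: (B, T, 3, H, W); joints: (B, T, n_joints)
        B, T = images.shape[:2]
        feats = self.vision(images.flatten(0, 1)).view(B, T, -1)
        h, _ = self.rnn(torch.cat([feats, joints], dim=-1))
        return self.number_head(h), self.pointing_head(h)
```

In such a sketch, the pointing and number-word heads would first be trained on the recitation and pointing pre-training routine mentioned in the abstract, before training on the full counting task.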
Related papers
- Real-time Addressee Estimation: Deployment of a Deep-Learning Model on
the iCub Robot [52.277579221741746]
Addressee Estimation is a skill essential for social robots to interact smoothly with humans.
Inspired by human perceptual skills, a deep-learning model for Addressee Estimation is designed, trained, and deployed on an iCub robot.
The study presents the procedure of such implementation and the performance of the model deployed in real-time human-robot interaction.
arXiv Detail & Related papers (2023-11-09T13:01:21Z) - Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
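As a rough illustration of what a "Transformer over sequences of sensorimotor tokens" can look like, the following sketch projects vision, proprioception, and action inputs into a shared token space and encodes them with masked positions hidden; the names, sizes, and masking scheme are assumptions and do not reproduce the RPT implementation.

```python
import torch
import torch.nn as nn

class SensorimotorTransformer(nn.Module):
    """Illustrative sketch (not the official RPT code): camera features,
    proprioception, and actions are projected into a shared token space and
    a Transformer encoder is pre-trained to reconstruct masked tokens."""

    def __init__(self, vis_dim=512, prop_dim=7, act_dim=7, d_model=256, n_layers=4):
        super().__init__()
        self.proj = nn.ModuleDict({
            "vision": nn.Linear(vis_dim, d_model),
            "proprio": nn.Linear(prop_dim, d_model),
            "action": nn.Linear(act_dim, d_model),
        })
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.mask_token = nn.Parameter(torch.zeros(d_model))

    def forward(self, vision, proprio, action, mask):
        # each modality: (B, T, dim); mask: (B, 3*T) bool, True = hide token
        tokens = torch.cat([self.proj["vision"](vision),
                            self.proj["proprio"](proprio),
                            self.proj["action"](action)], dim=1)
        tokens = torch.where(mask.unsqueeze(-1), self.mask_token, tokens)
        return self.encoder(tokens)  # masked positions decoded by per-modality heads
```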
arXiv Detail & Related papers (2023-06-16T17:58:10Z) - Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement
Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
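A skeleton of the do/undo practice loop described above might look as follows; the environment and policy interfaces (`env.step`, `policy.act`, `demo_classifier.prob_demo_like`, and so on) are hypothetical placeholders, not the MEDAL++ API.

```python
# Illustrative skeleton (hypothetical names; not the MEDAL++ codebase).
# A classifier trained to distinguish demonstration states from the robot's
# own states serves as the inferred reward for the forward policy.

def autonomous_practice(env, forward_policy, backward_policy, demo_classifier,
                        episodes=100, horizon=200):
    for episode in range(episodes):
        # alternate: even episodes practice the task, odd episodes undo it,
        # so the robot needs no human resets between attempts
        policy = forward_policy if episode % 2 == 0 else backward_policy
        obs = env.get_observation()
        for _ in range(horizon):
            action = policy.act(obs)
            next_obs = env.step(action)
            if policy is forward_policy:
                reward = demo_classifier.prob_demo_like(next_obs)   # inferred reward
            else:
                reward = -demo_classifier.prob_demo_like(next_obs)  # move back toward start states
            policy.store(obs, action, reward, next_obs)
            obs = next_obs
        policy.update()   # off-policy RL update on the collected experience
```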
arXiv Detail & Related papers (2023-03-02T18:51:38Z) - Factored World Models for Zero-Shot Generalization in Robotic
Manipulation [7.258229016768018]
We learn to generalize over robotic pick-and-place tasks using object-factored world models.
We use a residual stack of graph neural networks that receive action information at multiple levels in both their node and edge neural networks.
We show that an ensemble of our models can be used to plan for tasks involving up to 12 pick and place actions using search.
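One way to read "action information at multiple levels in both node and edge networks" is a message-passing layer like the hedged sketch below, in which the action vector is concatenated into both the edge and the node MLPs; the dimensions and names are assumed for illustration and are not the paper's architecture.

```python
import torch
import torch.nn as nn

class ActionConditionedGNNLayer(nn.Module):
    """Illustrative sketch: one message-passing layer over object slots
    in which the action is fed to both the edge and the node networks."""

    def __init__(self, obj_dim=64, act_dim=8, hidden=128):
        super().__init__()
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * obj_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obj_dim))
        self.node_mlp = nn.Sequential(
            nn.Linear(2 * obj_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, obj_dim))

    def forward(self, objects, action):
        # objects: (B, N, obj_dim); action: (B, act_dim)
        B, N, D = objects.shape
        a = action.unsqueeze(1)
        send = objects.unsqueeze(2).expand(B, N, N, D)   # sender features
        recv = objects.unsqueeze(1).expand(B, N, N, D)   # receiver features
        edge_in = torch.cat([send, recv, a.unsqueeze(1).expand(B, N, N, -1)], dim=-1)
        messages = self.edge_mlp(edge_in).sum(dim=2)     # aggregate incoming messages
        node_in = torch.cat([objects, messages, a.expand(B, N, -1)], dim=-1)
        return objects + self.node_mlp(node_in)          # residual update per object
```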
arXiv Detail & Related papers (2022-02-10T21:26:11Z) - Where is my hand? Deep hand segmentation for visual self-recognition in
humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask-RCNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
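Fine-tuning a torchvision Mask R-CNN for a single "hand" class typically follows the pattern below (a generic sketch, not the authors' Vizzy training code): swap the box and mask prediction heads for two-class versions and then train on hand masks.

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from torchvision.models.detection.mask_rcnn import MaskRCNNPredictor

num_classes = 2  # background + hand
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")

# replace the box classification head with a two-class predictor
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# replace the mask prediction head with a two-class predictor
in_features_mask = model.roi_heads.mask_predictor.conv5_mask.in_channels
model.roi_heads.mask_predictor = MaskRCNNPredictor(in_features_mask, 256, num_classes)
```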
arXiv Detail & Related papers (2021-02-09T10:34:32Z) - A Number Sense as an Emergent Property of the Manipulating Brain [16.186932790845937]
We study the mechanism through which humans acquire and develop the ability to manipulate numbers and quantities.
Our model acquires the ability to estimate numerosity, i.e. the number of objects in the scene.
We conclude that important aspects of a facility with numbers and quantities may be learned with supervision from a simple pre-training task.
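As a toy illustration of numerosity estimation (assumed for illustration, not the paper's model), a small CNN can be trained to map an image to a count:

```python
import torch.nn as nn

def numerosity_estimator(max_n=10):
    # minimal sketch: classify an image into a count from 0 to max_n
    return nn.Sequential(
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(32, max_n + 1),   # logits over possible counts
    )
```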
arXiv Detail & Related papers (2020-12-08T00:37:35Z) - Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.