Robot self/other distinction: active inference meets neural networks learning in a mirror
- URL: http://arxiv.org/abs/2004.05473v1
- Date: Sat, 11 Apr 2020 19:51:47 GMT
- Title: Robot self/other distinction: active inference meets neural networks learning in a mirror
- Authors: Pablo Lanillos and Jordi Pages and Gordon Cheng
- Abstract summary: We present an algorithm that enables a robot to perform non-appearance self-recognition in a mirror.
The algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning.
Experimental results on a humanoid robot show the reliability of the algorithm for different initial conditions.
- Score: 9.398766540452632
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self/other distinction and self-recognition are important skills for
interacting with the world, as they allow humans to differentiate their own
actions from those of others and to be self-aware. However, only a select group
of animals, mainly higher mammals such as humans, has passed the mirror test, a
behavioural experiment proposed to assess self-recognition abilities. In this
paper, we describe self-recognition as a process built on top of unconscious
body-perception mechanisms. We present an algorithm that enables a robot to
perform non-appearance self-recognition in a mirror and to distinguish its
simple actions from those of other entities, by answering the following
question: am I generating these sensations? The algorithm combines active
inference, a theoretical model of perception and action in the brain, with
neural network learning. The robot learns the relation between its actions and
its body configuration and the effects they produce in its visual field and
body sensors. The prediction error generated between the model predictions and
the real observations during interaction is used to infer the body
configuration through free-energy minimization and to accumulate evidence for
recognizing the robot's own body. Experimental results on a humanoid robot show
the reliability of the algorithm under different initial conditions, such as
mirror recognition from any perspective, robot-robot distinction and
human-robot differentiation.
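The inference loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the learned neural-network forward model is replaced by a hypothetical fixed linear map `W`, and the belief update, evidence score, and the `self_threshold` value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the learned forward model: it maps the believed
# body configuration mu (2 joints here) to the predicted sensory input
# (4 sensor channels). The paper learns this mapping with a neural network;
# a fixed linear map keeps the sketch short.
W = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [1.0, -1.0]])

def predict(mu):
    return W @ mu

def infer_and_recognize(observations, lr=0.1, self_threshold=-1.0):
    """Update the body belief mu by gradient descent on the squared
    prediction error (free-energy minimization under a Gaussian noise
    assumption), and accumulate average log-evidence that the sensations
    are self-generated: high residual error counts against 'self'."""
    mu = np.zeros(2)          # belief about the body configuration
    log_evidence = 0.0
    for s in observations:
        err = s - predict(mu)        # sensory prediction error
        mu = mu + lr * W.T @ err     # descend the free-energy gradient
        log_evidence += -0.5 * err @ err
    mean_evidence = log_evidence / len(observations)
    return mu, mean_evidence > self_threshold

# Sensations actually generated by a body state (plus small sensor noise)...
true_mu = np.array([0.5, -0.3])
self_stream = [predict(true_mu) + 0.01 * rng.standard_normal(4)
               for _ in range(200)]
# ...versus sensations with no relation to the robot's own dynamics.
other_stream = [rng.standard_normal(4) for _ in range(200)]

_, self_flag = infer_and_recognize(self_stream)
_, other_flag = infer_and_recognize(other_stream)
print(self_flag, other_flag)  # self stream accepted, other stream rejected
```

The key design point mirrored from the abstract is that the same prediction-error signal serves two roles: it drives the body-configuration estimate toward the observations, and its residual magnitude is the evidence for answering "am I generating these sensations?".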
Related papers
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- See, Hear, and Feel: Smart Sensory Fusion for Robotic Manipulation [49.925499720323806]
We study how visual, auditory, and tactile perception can jointly help robots to solve complex manipulation tasks.
We build a robot system that can see with a camera, hear with a contact microphone, and feel with a vision-based tactile sensor.
arXiv Detail & Related papers (2022-12-07T18:55:53Z)
- Learning body models: from humans to humanoids [2.855485723554975]
Humans and animals excel at combining information from multiple sensory modalities, controlling their complex bodies, and adapting to growth, failures, or tool use.
A key foundation is the internal representation of the body that the agent - human, animal, or robot - has developed.
The mechanisms by which body models operate in the brain are largely unknown, and even less is known about how they are constructed from experience after birth.
arXiv Detail & Related papers (2022-11-06T07:30:01Z)
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness or not during the transportation of objects.
arXiv Detail & Related papers (2022-08-03T13:26:52Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception [2.1067139116005595]
We discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics.
This paper summarizes some experiments and lessons learned from developing such a computational model on real embodied platforms.
arXiv Detail & Related papers (2021-05-10T10:59:38Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- AGENT: A Benchmark for Core Psychological Reasoning [60.35621718321559]
Intuitive psychology is the ability to reason about hidden mental variables that drive observable actions.
Despite recent interest in machine agents that reason about other agents, it is not clear if such agents learn or hold the core psychology principles that drive human reasoning.
We present a benchmark consisting of procedurally generated 3D animations, AGENT, structured around four scenarios.
arXiv Detail & Related papers (2021-02-24T14:58:23Z)
- Sensorimotor representation learning for an "active self" in robots: A model survey [10.649413494649293]
In humans, these capabilities are thought to be related to our ability to perceive our body in space.
This paper reviews the developmental processes of underlying mechanisms of these abilities.
We propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents.
arXiv Detail & Related papers (2020-11-25T16:31:01Z)
- Robot in the mirror: toward an embodied computational model of mirror self-recognition [1.9686770963118383]
The mirror self-recognition test consists of covertly placing a mark on the tested subject's face, putting the subject in front of a mirror, and observing the reactions.
In this work, we provide a mechanistic decomposition, or process model, of what components are required to pass this test.
We develop a model to enable the humanoid robot Nao to pass the test.
arXiv Detail & Related papers (2020-11-09T15:11:31Z)
- Human Perception of Intrinsically Motivated Autonomy in Human-Robot Interaction [2.485182034310304]
A challenge in using robots in human-inhabited environments is to design behavior that is engaging, yet robust to the perturbations induced by human interaction.
Our idea is to imbue the robot with intrinsic motivation (IM) so that it can handle new situations and appears as a genuine social other to humans.
This article presents a "robotologist" study design that allows comparing autonomously generated behaviors with each other.
arXiv Detail & Related papers (2020-02-14T09:49:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.