Learning body models: from humans to humanoids
- URL: http://arxiv.org/abs/2211.03049v1
- Date: Sun, 6 Nov 2022 07:30:01 GMT
- Title: Learning body models: from humans to humanoids
- Authors: Matej Hoffmann
- Abstract summary: Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or tool use.
A key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed.
The mechanisms of operation of body models in the brain are largely unknown, and even less is known about how they are constructed from experience after birth.
- Score: 2.855485723554975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Humans and animals excel in combining information from multiple sensory
modalities, controlling their complex bodies, and adapting to growth,
failures, or tool use. These capabilities are also highly desirable in robots
and are displayed by machines to some extent, yet artificial creatures still
lag behind. The key foundation is an internal representation of the body that the
agent - human, animal, or robot - has developed. The mechanisms of operation of
body models in the brain are largely unknown and even less is known about how
they are constructed from experience after birth. In collaboration with
developmental psychologists, we conducted targeted experiments to understand
how infants acquire first "sensorimotor body knowledge". These experiments
inform our work in which we construct embodied computational models on humanoid
robots that address the mechanisms behind learning, adaptation, and operation
of multimodal body representations. At the same time, we assess which of the
features of the "body in the brain" should be transferred to robots to give
rise to more adaptive and resilient, self-calibrating machines. We extend
traditional robot kinematic calibration, focusing on self-contained approaches
where no external metrology is needed: self-contact and self-observation.
A problem formulation that allows several ways of closing the kinematic
chain to be combined simultaneously is presented, along with a calibration toolbox and
experimental validation on several robot platforms. Finally, next to models of
the body itself, we study peripersonal space - the space immediately
surrounding the body. Again, embodied computational models are developed, and
we subsequently study how these biologically inspired representations can
support safe human-robot collaboration.
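
To make the calibration formulation above concrete, here is a minimal, hypothetical sketch in Python. It is not the thesis' actual toolbox (which handles full 3D kinematic chains and many more parameters); it only illustrates the core idea that self-observation and self-contact each close the kinematic chain, and that their residuals can be stacked into one nonlinear least-squares problem. The planar 2-link arm, the link-length parameters, and all numeric values are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import least_squares

def fk(lengths, q):
    """Planar 2-link forward kinematics: end-effector position (x, y)."""
    l1, l2 = lengths
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def ik(lengths, p):
    """Both elbow solutions for reaching point p (assumes p is reachable)."""
    l1, l2 = lengths
    c2 = (p @ p - l1**2 - l2**2) / (2 * l1 * l2)
    s2 = np.sqrt(1.0 - c2**2)
    sols = []
    for s in (s2, -s2):
        q2 = np.arctan2(s, c2)
        q1 = np.arctan2(p[1], p[0]) - np.arctan2(l2 * s, l1 + l2 * c2)
        sols.append(np.array([q1, q2]))
    return sols

rng = np.random.default_rng(0)
true_lengths = np.array([0.30, 0.25])          # unknown in a real calibration

# Self-observation: a camera sees the end effector in random postures.
obs_q = rng.uniform(-1.5, 1.5, size=(20, 2))
obs_xy = np.array([fk(true_lengths, q) for q in obs_q])
obs_xy += rng.normal(0.0, 1e-3, obs_xy.shape)  # measurement noise

# Self-contact: postures recorded when the fingertip touches a known point
# on the robot's own body. Here we synthesise them via IK as a stand-in for
# configurations logged when a skin/touch sensor fires.
contact_point = np.array([0.10, 0.05])
contact_q = ik(true_lengths, contact_point)

def residuals(lengths):
    """Stack both chain closures into one residual vector."""
    r_obs = np.array([fk(lengths, q) for q in obs_q]) - obs_xy
    r_touch = np.array([fk(lengths, q) for q in contact_q]) - contact_point
    return np.concatenate([r_obs.ravel(), r_touch.ravel()])

est = least_squares(residuals, x0=np.array([0.20, 0.20]))
print("estimated link lengths:", est.x)        # ~ [0.30, 0.25]
```

Because both residual types constrain the same parameters, postures where one modality is uninformative can be compensated by the other, which is the point of combining several chain closures in a single problem.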
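Similarly, a peripersonal-space representation can be caricatured as a distance- and approach-speed-dependent activation around each body part that is then used to slow the robot down near a human. The functional forms, thresholds, and names below are assumptions for illustration, not the thesis' learned visuo-tactile model.

```python
import numpy as np

def pps_activation(obj_pos, obj_vel, body_pos, reach=0.4, horizon=0.5):
    """Peripersonal-space activation in [0, 1] around one body point.

    Combines proximity (within `reach` metres) with predicted
    time-to-contact (within `horizon` seconds)."""
    rel = body_pos - obj_pos
    dist = np.linalg.norm(rel)
    # Speed at which the object closes in on the body point.
    closing = max(0.0, float(np.dot(obj_vel, rel)) / (dist + 1e-9))
    a_dist = max(0.0, 1.0 - dist / reach)  # 1 at the skin, 0 beyond reach
    ttc = dist / (closing + 1e-9)          # predicted time to contact (s)
    a_vel = max(0.0, 1.0 - ttc / horizon)  # 1 if contact is imminent
    return max(a_dist, a_vel)

def velocity_scale(activation, floor=0.1):
    """Map activation to a speed-scaling factor for the robot controller."""
    return max(floor, 1.0 - activation)

# Example: a hand 0.3 m from the elbow, approaching at 1 m/s.
a = pps_activation(obj_pos=np.array([0.3, 0.0, 0.0]),
                   obj_vel=np.array([-1.0, 0.0, 0.0]),
                   body_pos=np.array([0.0, 0.0, 0.0]))
print(f"activation={a:.2f}, speed scale={velocity_scale(a):.2f}")
```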
Related papers
- GeMuCo: Generalized Multisensory Correlational Model for Body Schema Learning [18.64205729932939]
Humans can learn the relationship between sensation and motion in their own bodies.
Current robots, in contrast, control their bodies by learning from their experiences within network structures designed by humans.
arXiv Detail & Related papers (2024-09-10T11:19:13Z)
- HumanoidBench: Simulated Humanoid Benchmark for Whole-Body Locomotion and Manipulation [50.616995671367704]
We present a high-dimensional, simulated robot learning benchmark, HumanoidBench, featuring a humanoid robot equipped with dexterous hands.
Our findings reveal that state-of-the-art reinforcement learning algorithms struggle with most tasks, whereas a hierarchical learning approach achieves superior performance when supported by robust low-level policies.
arXiv Detail & Related papers (2024-03-15T17:45:44Z)
- Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment (a generic sketch of this predict-and-update loop appears after this list).
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- HERD: Continuous Human-to-Robot Evolution for Learning from Human Demonstration [57.045140028275036]
We show that manipulation skills can be transferred from a human to a robot through the use of micro-evolutionary reinforcement learning.
We propose an algorithm for multi-dimensional evolution path searching that allows joint optimization of both the robot evolution path and the policy.
arXiv Detail & Related papers (2022-12-08T15:56:13Z)
- Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z)
- Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception [2.1067139116005595]
We discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics.
This paper summarizes some experiments and lessons learned from developing such a computational model on real embodied platforms.
arXiv Detail & Related papers (2021-05-10T10:59:38Z)
- Cognitive architecture aided by working-memory for self-supervised multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z)
- Sensorimotor representation learning for an "active self" in robots: A model survey [10.649413494649293]
In humans, these capabilities are thought to be related to our ability to perceive our body in space.
This paper reviews the developmental processes and underlying mechanisms of these abilities.
We propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents.
arXiv Detail & Related papers (2020-11-25T16:31:01Z)
- Body models in humans, animals, and robots: mechanisms and plasticity [2.855485723554975]
Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or using tools.
A key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed.
In robotics, a model of the robot is an indispensable component that enables control of the machine.
arXiv Detail & Related papers (2020-10-19T09:07:11Z)
- Robot self/other distinction: active inference meets neural networks learning in a mirror [9.398766540452632]
We present an algorithm that enables a robot to perform non-appearance self-recognition in a mirror.
The algorithm combines active inference, a theoretical model of perception and action in the brain, with neural network learning.
Experimental results on a humanoid robot show the reliability of the algorithm for different initial conditions.
arXiv Detail & Related papers (2020-04-11T19:51:47Z)
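
As referenced in the World Models and Predictive Coding entry above, the following is a generic, single-layer predictive-coding sketch in the spirit of Rao-and-Ballard-style linear models, not the specific architectures that paper surveys: a latent state is settled by descending the sensory prediction-error gradient, while the generative weights adapt more slowly on the same error signal. All sizes and learning rates are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_latent = 8, 3
W = rng.normal(0.0, 0.1, size=(n_obs, n_latent))  # generative map: x_hat = W @ z

def perceive(x, W, steps=50, lr_z=0.1, lr_w=0.01):
    """Settle a latent state and adapt weights by minimising prediction error."""
    z = np.zeros(n_latent)
    for _ in range(steps):
        x_hat = W @ z                     # top-down prediction of the input
        err = x - x_hat                   # bottom-up prediction error
        z = z + lr_z * (W.T @ err)        # fast inference: update the state
        W = W + lr_w * np.outer(err, z)   # slow learning: update the model
    return z, W

x = rng.normal(size=n_obs)                # one sensory sample
z, W = perceive(x, W)
print("remaining prediction error:", np.linalg.norm(x - W @ z))
```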
This list is automatically generated from the titles and abstracts of the papers on this site.