Body models in humans, animals, and robots: mechanisms and plasticity
- URL: http://arxiv.org/abs/2010.09325v2
- Date: Fri, 24 Sep 2021 15:33:23 GMT
- Title: Body models in humans, animals, and robots: mechanisms and plasticity
- Authors: Matej Hoffmann
- Abstract summary: Humans and animals excel at combining information from multiple sensory modalities, controlling their complex bodies, and adapting to growth, failures, or tool use.
The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed.
In robotics, a model of the robot is an indispensable component that enables control of the machine.
- Score: 2.855485723554975
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans and animals excel at combining information from multiple sensory
modalities, controlling their complex bodies, and adapting to growth, failures,
or tool use. These capabilities are also highly desirable in robots. They are
displayed by machines to some extent - yet, as is so often the case, the
artificial creatures are lagging behind. The key foundation is an internal
representation of the body that the agent - human, animal, or robot - has
developed. In the biological realm, evidence has been accumulated by diverse
disciplines giving rise to the concepts of body image, body schema, and others.
In robotics, a model of the robot is an indispensable component that enables
control of the machine. In this article I compare the character of body
representations in biology with their robotic counterparts and relate that to
the differences in performance that we observe. I put forth a number of axes
regarding the nature of such body models: fixed vs. plastic, amodal vs. modal,
explicit vs. implicit, serial vs. parallel, modular vs. holistic, and
centralized vs. distributed. An interesting trend emerges: on many of the axes,
there is a sequence from robot body models, over body image, body schema, to
the body representation in lower animals like the octopus. In some sense,
robots have a lot in common with Ian Waterman - "the man who lost his body" -
in that they rely on an explicit, veridical body model (body image taken to the
extreme) and lack any implicit, multimodal representation (like the body
schema) of their bodies. I will then detail how robots can inform the
biological sciences dealing with body representations and finally, I will study
which of the features of the "body in the brain" should be transferred to
robots, giving rise to more adaptive and resilient, self-calibrating machines.
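To make the "fixed vs. plastic" axis concrete, here is a minimal sketch of a plastic, self-calibrating body model: a 2-link planar arm whose believed link lengths are re-estimated online from observed fingertip positions. The arm, all numbers, and the gradient-descent calibrator are illustrative assumptions of this sketch, not material from the paper.

```python
# Minimal sketch (not from the paper): the "fixed vs. plastic" axis on a
# 2-link planar arm. A classical robot model keeps link lengths fixed; a
# plastic model re-estimates them from sensed fingertip positions, i.e. it
# self-calibrates. All numbers are made up.
import numpy as np

def forward_kinematics(q, lengths):
    """Fingertip position of a 2-link planar arm with joint angles q."""
    l1, l2 = lengths
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def calibrate(lengths, samples, lr=0.05, epochs=200):
    """Plastic body model: gradient-descent update of the believed link
    lengths from (joint angles, observed fingertip) pairs."""
    lengths = np.array(lengths, dtype=float)
    for _ in range(epochs):
        for q, observed in samples:
            error = forward_kinematics(q, lengths) - observed
            # Jacobian of the fingertip position w.r.t. the link lengths.
            J = np.array([[np.cos(q[0]), np.cos(q[0] + q[1])],
                          [np.sin(q[0]), np.sin(q[0] + q[1])]])
            lengths -= lr * J.T @ error
    return lengths

# The robot "grew": the true lengths differ from the fixed model's belief.
true_lengths, believed = [1.1, 0.7], [1.0, 0.8]
rng = np.random.default_rng(0)
samples = [(q, forward_kinematics(q, true_lengths))
           for q in rng.uniform(-np.pi, np.pi, size=(50, 2))]
print(calibrate(believed, samples))  # converges toward [1.1, 0.7]
```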
Related papers
- GeMuCo: Generalized Multisensory Correlational Model for Body Schema Learning [18.64205729932939]
Humans can learn the relationship between sensation and motion in their own bodies.
Current robots, by contrast, typically control their bodies through network structures designed by humans rather than learned from their own experience; GeMuCo learns this structure from experience (a toy illustration of the underlying idea follows this entry).
arXiv Detail & Related papers (2024-09-10T11:19:13Z) - HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to developing a generative motion model conditioned on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z) - Self Model for Embodied Intelligence: Modeling Full-Body Human Musculoskeletal System and Locomotion Control with Hierarchical Low-Dimensional Representation [22.925312305575183]
We build a musculoskeletal model (MS-Human-700) with 90 body segments, 206 joints, and 700 muscle-tendon units.
We develop a new algorithm that uses a low-dimensional representation and hierarchical deep reinforcement learning to achieve state-of-the-art full-body control (a sketch of the low-dimensional control idea follows this entry).
arXiv Detail & Related papers (2023-12-09T05:42:32Z) - Giving Robots a Hand: Learning Generalizable Manipulation with
- Giving Robots a Hand: Learning Generalizable Manipulation with Eye-in-Hand Human Video Demonstrations [66.47064743686953]
Eye-in-hand cameras have shown promise in enabling greater sample efficiency and generalization in vision-based robotic manipulation.
Videos of humans performing tasks, on the other hand, are much cheaper to collect since they eliminate the need for expertise in robotic teleoperation.
In this work, we augment narrow robotic imitation datasets with broad unlabeled human video demonstrations to greatly enhance the generalization of eye-in-hand visuomotor policies (a data-mixing sketch follows this entry).
arXiv Detail & Related papers (2023-07-12T07:04:53Z) - Learning body models: from humans to humanoids [2.855485723554975]
- Learning body models: from humans to humanoids [2.855485723554975]
Humans and animals excel at combining information from multiple sensory modalities, controlling their complex bodies, and adapting to growth, failures, or tool use.
The key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed.
The mechanisms of operation of body models in the brain are largely unknown, and even less is known about how they are constructed from experience after birth.
arXiv Detail & Related papers (2022-11-06T07:30:01Z) - A Capability and Skill Model for Heterogeneous Autonomous Robots [69.50862982117127]
Capability modeling is considered a promising approach to semantically model the functions provided by different machines.
This contribution investigates how to apply and extend capability models from manufacturing to the field of autonomous robots (a sketch of such a model follows this entry).
arXiv Detail & Related papers (2022-09-22T10:13:55Z) - Robots with Different Embodiments Can Express and Influence Carefulness
- Robots with Different Embodiments Can Express and Influence Carefulness in Object Manipulation [104.5440430194206]
This work investigates the perception of object manipulations performed with a communicative intent by two robots.
We designed the robots' movements to communicate carefulness, or its absence, during object transport.
arXiv Detail & Related papers (2022-08-03T13:26:52Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from those of the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - Synthesis and Execution of Communicative Robotic Movements with
- Synthesis and Execution of Communicative Robotic Movements with Generative Adversarial Networks [59.098560311521034]
We focus on how to transfer to two different robotic platforms the same kinematic modulation that humans adopt when manipulating delicate objects.
We choose to modulate the velocity profile adopted by the robots' end-effector, inspired by what humans do when transporting objects with different characteristics.
We exploit a novel Generative Adversarial Network architecture, trained on examples of human kinematics, to generalize over them and generate new, meaningful velocity profiles (a rough sketch of such a setup follows this entry).
arXiv Detail & Related papers (2022-03-29T15:03:05Z) - Neuroscience-inspired perception-action in robotics: applying active
- Neuroscience-inspired perception-action in robotics: applying active inference for state estimation, control and self-perception [2.1067139116005595]
We discuss how neuroscience findings open up opportunities to improve current estimation and control algorithms in robotics.
This paper summarizes experiments and lessons learned from developing such a computational model on real embodied platforms (a toy version of the core prediction-error scheme follows this entry).
arXiv Detail & Related papers (2021-05-10T10:59:38Z) - Sensorimotor representation learning for an "active self" in robots: A
- Sensorimotor representation learning for an "active self" in robots: A model survey [10.649413494649293]
In humans, these capabilities are thought to be related to our ability to perceive our body in space.
This paper reviews the developmental processes and underlying mechanisms of these abilities.
We propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents.
arXiv Detail & Related papers (2020-11-25T16:31:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.