On the Origins of Self-Modeling
- URL: http://arxiv.org/abs/2209.02010v1
- Date: Mon, 5 Sep 2022 15:27:04 GMT
- Title: On the Origins of Self-Modeling
- Authors: Robert Kwiatkowski, Yuhang Hu, Boyuan Chen, Hod Lipson
- Abstract summary: Self-Modeling is the process by which an agent, such as an animal or machine, learns to create a predictive model of its own dynamics.
Here, we quantify the benefits of such self-modeling against the complexity of the robot.
- Score: 27.888203008100113
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-Modeling is the process by which an agent, such as an animal or machine,
learns to create a predictive model of its own dynamics. Once captured, this
self-model can then allow the agent to plan and evaluate various potential
behaviors internally using the self-model, rather than using costly physical
experimentation. Here, we quantify the benefits of such self-modeling against
the complexity of the robot. We find a R2 =0.90 correlation between the number
of degrees of freedom a robot has, and the added value of self-modeling as
compared to a direct learning baseline. This result may help motivate self
modeling in increasingly complex robotic systems, as well as shed light on the
origins of self-modeling, and ultimately self-awareness, in animals and humans.
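The idea described in the abstract can be sketched in miniature: learn a predictive model of the robot's own dynamics from a small batch of real interactions, then use the model to evaluate candidate actions internally. This is a hedged toy example, assuming a 1-DOF point robot with unknown linear dynamics x' = a*x + b*u; names such as `fit_self_model` are illustrative and not from the paper.

```python
# Minimal self-modeling sketch: fit x' = a*x + b*u from logged transitions,
# then plan with the learned model instead of more physical trials.
import random

def true_step(x, u, a=0.9, b=0.5):
    """Ground-truth dynamics, unknown to the robot a priori."""
    return a * x + b * u

def fit_self_model(transitions, lr=0.05, epochs=2000):
    """Fit x' ~ a*x + b*u to (x, u, x') data by stochastic gradient descent."""
    a_hat, b_hat = 0.0, 0.0
    for _ in range(epochs):
        for x, u, x_next in transitions:
            err = (a_hat * x + b_hat * u) - x_next
            a_hat -= lr * err * x
            b_hat -= lr * err * u
    return a_hat, b_hat

# Collect a handful of real interactions (the costly part).
random.seed(0)
data = []
x = 0.0
for _ in range(50):
    u = random.uniform(-1, 1)
    x_next = true_step(x, u)
    data.append((x, u, x_next))
    x = x_next

a_hat, b_hat = fit_self_model(data)

# Plan internally: pick the action whose *predicted* next state is closest
# to a target, with no further physical experimentation.
target = 1.0
best_u = min((u / 10 for u in range(-10, 11)),
             key=lambda u: abs((a_hat * x + b_hat * u) - target))
```

The paper's point about degrees of freedom is that as the state and action spaces grow, this internal evaluation loop amortizes ever more physical trial-and-error.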
Related papers
- Unexpected Benefits of Self-Modeling in Neural Systems [0.7179624965454197]
We show that when artificial networks learn to predict their internal states as an auxiliary task, they change in a fundamental way.
To better perform the self-model task, the network learns to make itself simpler, more regularized, more parameter-efficient.
This self-regularization may help explain some of the benefits of self-models reported in recent machine learning literature.
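The auxiliary self-prediction objective described above can be written as a task loss plus a term penalizing the gap between the network's hidden state and its own prediction of that hidden state. This is a hedged sketch using a toy linear "network"; the weighting `lam` and all names are illustrative, not taken from the paper.

```python
# Toy self-modeling auxiliary objective: the model predicts its own
# hidden activations, and that prediction error is added to the task loss.

def forward(x, w_hidden, w_out, w_self):
    """One forward pass: hidden state, task output, and the network's
    own prediction of that hidden state from the raw input."""
    hidden = [sum(wi * xi for wi, xi in zip(row, x)) for row in w_hidden]
    output = sum(wo * h for wo, h in zip(w_out, hidden))
    self_pred = [sum(wi * xi for wi, xi in zip(row, x)) for row in w_self]
    return hidden, output, self_pred

def total_loss(output, target, hidden, self_pred, lam=0.1):
    """Task loss plus the self-model auxiliary term."""
    task = (output - target) ** 2
    aux = sum((p - h) ** 2 for p, h in zip(self_pred, hidden))
    return task + lam * aux

x = [1.0, -2.0]
w_hidden = [[0.5, 0.0], [0.0, 0.5]]   # two hidden units
w_out = [1.0, 1.0]
w_self = [[0.4, 0.0], [0.0, 0.4]]     # imperfect self-model head
h, y, p = forward(x, w_hidden, w_out, w_self)
loss = total_loss(y, target=0.0, hidden=h, self_pred=p)
```

Minimizing the auxiliary term pressures the hidden representation to be predictable, which is one plausible mechanism for the self-regularization effect the paper reports.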
arXiv Detail & Related papers (2024-07-14T13:16:23Z)
- AutoRT: Embodied Foundation Models for Large Scale Orchestration of Robotic Agents [109.3804962220498]
AutoRT is a system to scale up the deployment of operational robots in completely unseen scenarios with minimal human supervision.
We demonstrate AutoRT proposing instructions to over 20 robots across multiple buildings and collecting 77k real robot episodes via both teleoperation and autonomous robot policies.
We experimentally show that such "in-the-wild" data collected by AutoRT is significantly more diverse, and that AutoRT's use of LLMs allows for instruction-following data-collection robots that can align to human preferences.
arXiv Detail & Related papers (2024-01-23T18:45:54Z)
- Robot at the Mirror: Learning to Imitate via Associating Self-supervised Models [0.0]
We introduce an approach to building a custom model from ready-made self-supervised models by associating them, instead of training and fine-tuning.
We demonstrate it with an example of a humanoid robot looking at the mirror and learning to detect the 3D pose of its own body from the image it perceives.
arXiv Detail & Related papers (2023-11-22T08:30:20Z)
- High-Degrees-of-Freedom Dynamic Neural Fields for Robot Self-Modeling and Motion Planning [6.229216953398305]
A robot self-model is a representation of the robot's physical morphology that can be used for motion planning tasks.
We propose a new encoder-based neural density field architecture for dynamic object-centric scenes conditioned on high numbers of degrees of freedom.
In a 7-DOF robot test setup, the learned self-model achieves a Chamfer-L2 distance of 2% of the robot's workspace dimension.
arXiv Detail & Related papers (2023-10-05T16:01:29Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- Learning body models: from humans to humanoids [2.855485723554975]
Humans and animals excel in combining information from multiple sensory modalities, controlling their complex bodies, adapting to growth, failures, or using tools.
A key foundation is an internal representation of the body that the agent - human, animal, or robot - has developed.
The mechanisms by which body models operate in the brain are largely unknown, and even less is known about how they are constructed from experience after birth.
arXiv Detail & Related papers (2022-11-06T07:30:01Z)
- A Capability and Skill Model for Heterogeneous Autonomous Robots [69.50862982117127]
Capability modeling is considered a promising approach to semantically modeling the functions provided by different machines.
This contribution investigates how to apply and extend capability models from manufacturing to the field of autonomous robots.
arXiv Detail & Related papers (2022-09-22T10:13:55Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamics model and/or simulator and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
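The residual-learning idea in this entry can be sketched very simply: fit only the gap between the simulator's prediction and the real system's observed behavior, then add that learned correction back onto the simulator. This is a hedged 1-D toy, assuming a residual that is linear in the state; the paper's learning-based Unscented Kalman Filter is not reproduced here.

```python
# Real-to-sim residual learning sketch: fit r(x) = c * x so that
# sim_step(x, u) + r(x) matches the observed real dynamics.

def sim_step(x, u):
    """Nominal simulator model."""
    return 0.8 * x + 0.5 * u

def real_step(x, u):
    """Real system = simulator plus an unmodeled residual (here 0.1 * x)."""
    return sim_step(x, u) + 0.1 * x

# Sparse real-world data: (state, action, observed next state).
data = [(x / 10, 0.2, real_step(x / 10, 0.2)) for x in range(1, 11)]

# Fit the residual coefficient by closed-form least squares.
num = sum((x_next - sim_step(x, u)) * x for x, u, x_next in data)
den = sum(x * x for x, _, _ in data)
c = num / den

def corrected_step(x, u):
    """Simulator augmented with the learned residual closes the reality gap."""
    return sim_step(x, u) + c * x
```

Because only the residual is learned, very sparse real data suffices, which is the entry's central claim.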
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Full-Body Visual Self-Modeling of Robot Morphologies [29.76701883250049]
Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions.
Recent progress in fully data-driven self-modeling has enabled machines to learn their own forward kinematics directly from task-agnostic interaction data.
Here, we propose that instead of directly modeling forward-kinematics, a more useful form of self-modeling is one that could answer space occupancy queries.
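A space-occupancy query, as proposed in this entry, asks: given a joint configuration and a point in space, does the robot's body occupy that point? The paper learns this query with a neural field; the hedged toy below answers it analytically for a single rigid link of length 1 rotating about the origin, purely to illustrate the interface.

```python
# Toy occupancy-query self-model for a one-link planar arm.
import math

def occupied(point, joint_angle, link_len=1.0, radius=0.05):
    """Return True if the link, at this joint angle, occupies the 2-D point
    (i.e. the point lies within `radius` of the link's center line)."""
    px, py = point
    # Project the point onto the link's axis and clamp to the segment.
    ax, ay = math.cos(joint_angle), math.sin(joint_angle)
    t = max(0.0, min(link_len, px * ax + py * ay))
    cx, cy = t * ax, t * ay
    return math.hypot(px - cx, py - cy) <= radius
```

An occupancy interface like this supports collision checking and motion planning directly, which is why the entry argues it is more useful than a bare forward-kinematics model.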
arXiv Detail & Related papers (2021-11-11T18:58:07Z)
- Learning Predictive Models From Observation and Interaction [137.77887825854768]
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
arXiv Detail & Related papers (2019-12-30T01:10:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.