Teaching Robots to Build Simulations of Themselves
- URL: http://arxiv.org/abs/2311.12151v1
- Date: Mon, 20 Nov 2023 20:03:34 GMT
- Title: Teaching Robots to Build Simulations of Themselves
- Authors: Yuhang Hu, Jiong Lin, Hod Lipson
- Abstract summary: We introduce a self-supervised learning framework that enables robots to model and predict their morphology, kinematics, and motor control using only brief raw video data.
By observing their own movements, robots learn the ability to simulate themselves and predict their spatial motion for various tasks.
- Score: 7.886658271375681
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation enables robots to plan and estimate the outcomes of prospective
actions without the need to physically execute them. We introduce a
self-supervised learning framework that enables robots to model and predict their
morphology, kinematics and motor control using only brief raw video data,
eliminating the need for extensive real-world data collection and kinematic
priors. By observing their own movements, akin to humans watching their
reflection in a mirror, robots learn the ability to simulate themselves and
predict their spatial motion for various tasks. Our results demonstrate that
this self-learned simulation not only enables accurate motion planning but also
allows the robot to detect abnormalities and recover from damage.
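The mechanism the abstract describes lends itself to a compact illustration. Below is a minimal sketch, not the authors' implementation: a model is trained on (frame, motor command, next frame) triples harvested from a short self-observation video, and every name, dimension, and architecture choice is an illustrative assumption.

```python
# Hedged sketch: learn a visual self-model purely from a robot's own video.
# Linear layers stand in for the conv encoder/decoder a real system would use.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, frame_dim=512, cmd_dim=6):
        super().__init__()
        self.encoder = nn.Linear(frame_dim, 128)    # stand-in visual encoder
        self.dynamics = nn.Sequential(
            nn.Linear(128 + cmd_dim, 256), nn.ReLU(), nn.Linear(256, 128))
        self.decoder = nn.Linear(128, frame_dim)    # stand-in visual decoder

    def forward(self, frame, cmd):
        z = self.encoder(frame)                     # current visual state
        z_next = self.dynamics(torch.cat([z, cmd], dim=-1))
        return self.decoder(z_next)                 # predicted next observation

model = SelfModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
frames = torch.randn(256, 512)       # placeholder flattened video frames
cmds = torch.randn(256, 6)           # motor commands issued between frames
next_frames = torch.randn(256, 512)  # the frames that actually followed

for _ in range(100):                 # self-supervised: the video is the label
    loss = nn.functional.mse_loss(model(frames, cmds), next_frames)
    opt.zero_grad(); loss.backward(); opt.step()
```

Rolled out over candidate command sequences, such a model supports the motion planning claimed above; a sustained rise in its prediction error is one way the damage-detection claim could be realized.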
Related papers
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556] (2024-05-12T15:38:17Z)
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
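A minimal sketch of the stated objective, assuming a CLIP-style joint embedding space; the simulator, renderer, and both encoders below are placeholder stubs, not DiffGen's actual components.

```python
# Hedged sketch: optimize a robot action so the rendered simulation result
# matches a language instruction in a shared embedding space. Gradients flow
# through the (stubbed) differentiable simulate-and-render pipeline.
import torch
import torch.nn as nn

text_encoder = nn.Linear(300, 64)     # stand-in for a vision-language model
image_encoder = nn.Linear(1024, 64)   # (both frozen in spirit; only the
sim_and_render = nn.Linear(7, 1024)   #  action below is being optimized)

action = torch.zeros(7, requires_grad=True)  # the action to optimize
instruction = torch.randn(300)               # embedded language instruction
opt = torch.optim.Adam([action], lr=1e-2)

for _ in range(200):
    obs = sim_and_render(action)             # differentiable sim + rendering
    e_img = nn.functional.normalize(image_encoder(obs), dim=-1)
    e_txt = nn.functional.normalize(text_encoder(instruction), dim=-1)
    loss = 1 - (e_img * e_txt).sum()         # cosine distance between embeddings
    opt.zero_grad(); loss.backward(); opt.step()
```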
- Naturalistic Robot Arm Trajectory Generation via Representation Learning [4.7682079066346565] (2023-09-14T09:26:03Z)
The integration of manipulator robots into household environments suggests a need for more predictable, human-like robot motion.
One method of generating naturalistic motion trajectories is via imitation of human demonstrators.
This paper explores a self-supervised imitation learning method using an autoregressive neural network for an assistive drinking task.
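A minimal sketch of the autoregressive idea, with an MLP standing in for the paper's network and every dimension assumed: predict the demonstrator's next waypoint from a short history, then feed predictions back in at generation time.

```python
# Hedged sketch: next-waypoint prediction trained on human demonstrations.
import torch
import torch.nn as nn

DOF, H = 7, 5                                   # joints, history length (assumed)
model = nn.Sequential(nn.Linear(DOF * H, 128), nn.ReLU(), nn.Linear(128, DOF))

demo = torch.randn(100, DOF)                    # one recorded human trajectory
windows = torch.stack([demo[i:i + H].reshape(-1) for i in range(len(demo) - H)])
targets = demo[H:]                              # self-supervised label: next pose
loss = nn.functional.mse_loss(model(windows), targets)
loss.backward()                                 # one training step, for brevity

# Generation: feed the model's own outputs back in autoregressively.
traj = demo[:H].clone()
for _ in range(20):
    nxt = model(traj[-H:].reshape(1, -1)).detach()
    traj = torch.cat([traj, nxt], dim=0)
```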
- Robot Learning with Sensorimotor Pre-training [98.7755895548928] (2023-06-16T17:58:10Z)
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
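A minimal sketch of the pre-training recipe as summarized above, with masking ratio, dimensions, and tokenization all assumed: interleave sensor and action tokens, hide a subset, and train a Transformer to reconstruct them.

```python
# Hedged sketch: masked prediction over a sequence of sensorimotor tokens.
import torch
import torch.nn as nn

d = 64
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=d, nhead=4, batch_first=True),
    num_layers=2)
head = nn.Linear(d, d)
mask_token = nn.Parameter(torch.zeros(d))    # learned placeholder embedding

tokens = torch.randn(8, 32, d)     # (batch, interleaved sensor/action tokens, dim)
mask = torch.rand(8, 32) < 0.25    # hide 25% of tokens (ratio assumed)
inp = tokens.clone()
inp[mask] = mask_token             # replace hidden tokens with the mask token
pred = head(encoder(inp))
loss = nn.functional.mse_loss(pred[mask], tokens[mask])
```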
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884] (2023-03-02T18:51:38Z)
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
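A minimal sketch of the do/undo loop; `env`, both policies, and `reward_model` are assumed gym-style interfaces, not MEDAL++'s actual classes.

```python
# Hedged sketch: autonomous practice by alternating a forward policy that does
# the task with a backward policy that undoes it, so no human resets are needed.
def autonomous_practice(env, forward_policy, backward_policy, reward_model,
                        episodes=100, horizon=200):
    obs = env.reset()                        # one human-provided reset up front
    for _ in range(episodes):
        for policy in (forward_policy, backward_policy):
            for _ in range(horizon):
                action = policy.act(obs)
                obs, _, done, _ = env.step(action)
                reward = reward_model(obs)   # inferred from demonstrations
                policy.update(obs, action, reward)
                if done:
                    break
```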
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608] (2022-09-07T15:15:12Z)
We learn the residual errors between a dynamics and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
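A minimal sketch of the residual idea, with an ordinary least-squares fit standing in for the paper's learning-based unscented Kalman filter and synthetic data in place of real logs.

```python
# Hedged sketch: learn the gap between simulator predictions and real outcomes,
# then correct the simulator with it.
import numpy as np

X = np.random.randn(500, 10)            # [state, action] features (synthetic)
sim_next = np.random.randn(500, 6)      # simulator's predicted next states
real_next = sim_next + 0.1 * np.random.randn(500, 6)  # real robot (placeholder)

residual = real_next - sim_next
W, *_ = np.linalg.lstsq(X, residual, rcond=None)  # least-squares residual model

def corrected_prediction(x, sim_pred):
    return sim_pred + x @ W             # simulator output + learned residual
```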
- Full-Body Visual Self-Modeling of Robot Morphologies [29.76701883250049] (2021-11-11T18:58:07Z)
Internal computational models of physical bodies are fundamental to the ability of robots and animals alike to plan and control their actions.
Recent progress in fully data-driven self-modeling has enabled machines to learn their own forward kinematics directly from task-agnostic interaction data.
Here, we propose that instead of directly modeling forward kinematics, a more useful form of self-modeling is one that can answer space occupancy queries.
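A minimal sketch of an occupancy-query self-model, with an assumed MLP architecture: given a joint configuration and a 3D point, return the probability that the robot's body occupies that point.

```python
# Hedged sketch: a network that answers "is this point inside my body at this
# joint configuration?" -- trainable from task-agnostic interaction data.
import torch
import torch.nn as nn

class OccupancyModel(nn.Module):
    def __init__(self, dof=7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dof + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1))

    def forward(self, joints, point):
        # joints: (batch, dof), point: (batch, 3) -> occupancy probability
        return torch.sigmoid(self.net(torch.cat([joints, point], dim=-1)))

model = OccupancyModel()
q = torch.randn(1, 7)                    # a joint configuration
p = torch.tensor([[0.3, 0.0, 0.5]])      # a query point in the workspace
print(model(q, p))                       # probability the robot occupies p
```

Such queries support collision checking at arbitrary points without reconstructing an explicit mesh.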
- Robot Learning from Randomized Simulations: A Review [59.992761565399185] (2021-11-01T13:55:41Z)
Deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
State-of-the-art approaches learn in simulation where data generation is fast as well as inexpensive.
We focus on 'domain randomization', a method for learning from randomized simulations.
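A minimal sketch of the technique, with parameter names, ranges, and the simulator interface all assumed: resample the simulator's physics every episode so the policy sees a distribution of dynamics rather than one (inevitably wrong) instance.

```python
# Hedged sketch: domain randomization over a simulator's physical parameters.
import random

def randomize(sim):
    sim.friction = random.uniform(0.5, 1.5)       # ranges are illustrative
    sim.mass_scale = random.uniform(0.8, 1.2)
    sim.motor_delay = random.uniform(0.0, 0.02)   # seconds

def train(sim, policy, episodes=1000):
    for _ in range(episodes):
        randomize(sim)                 # fresh dynamics each episode
        rollout = sim.run_episode(policy)
        policy.update(rollout)
```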
- Learning Bipedal Robot Locomotion from Human Movement [0.791553652441325] (2021-05-26T00:49:37Z)
We present a reinforcement learning-based method for teaching a real-world bipedal robot to perform movements directly from motion capture data.
Our method seamlessly transitions from training in a simulation environment to executing on a physical robot.
We demonstrate our method on an internally developed humanoid robot with movements ranging from a dynamic walk cycle to complex balancing and waving.
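One plausible ingredient, shown as a hedged sketch rather than the paper's actual reward: a DeepMimic-style tracking term that pays the policy for matching the motion-capture reference pose at each timestep.

```python
# Hedged sketch: reward for tracking a motion-capture reference pose.
import numpy as np

def tracking_reward(robot_pose, reference_pose, scale=2.0):
    # robot_pose / reference_pose: joint angles at the current timestep
    err = np.sum((np.asarray(robot_pose) - np.asarray(reference_pose)) ** 2)
    return np.exp(-scale * err)        # 1.0 for a perfect match, -> 0 otherwise
```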
- URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents [18.869243389210492] (2020-12-08T14:23:24Z)
URoboSim is a robot simulator that allows robots to perform tasks as mental simulations before executing them in reality.
We show the capabilities of URoboSim in the form of mental simulations, data generation for machine learning, and use as a belief state for a real robot.
- Learning Predictive Models From Observation and Interaction [137.77887825854768] (2019-12-30T01:10:41Z)
Learning predictive models from interaction with the world allows an agent, such as a robot, to learn about how the world works.
However, learning a model that captures the dynamics of complex skills represents a major challenge.
We propose a method to augment the training set with observational data of other agents, such as humans.
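A minimal sketch of how observational data can be folded in, with every module assumed: when an action is logged (robot data) the dynamics model consumes it directly; when it is not (e.g., human video), an inference network supplies a latent action instead, and both paths share one prediction loss.

```python
# Hedged sketch: one dynamics model trained from both action-labeled robot data
# and action-free observational data via an inferred latent action.
import torch
import torch.nn as nn

STATE, ACT = 16, 4
dynamics = nn.Linear(STATE + ACT, STATE)
infer_action = nn.Linear(2 * STATE, ACT)   # guesses the action from (s, s')

def prediction_loss(s, s_next, action=None):
    if action is None:                     # observational data: no action logged
        action = infer_action(torch.cat([s, s_next], dim=-1))
    pred = dynamics(torch.cat([s, action], dim=-1))
    return nn.functional.mse_loss(pred, s_next)
```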
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.