URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents
- URL: http://arxiv.org/abs/2012.04442v1
- Date: Tue, 8 Dec 2020 14:23:24 GMT
- Title: URoboSim -- An Episodic Simulation Framework for Prospective Reasoning in Robotic Agents
- Authors: Michael Neumann, Sebastian Koralewski and Michael Beetz
- Abstract summary: URoboSim is a robot simulator that allows robots to perform tasks as mental simulations before performing them in reality.
We show the capabilities of URoboSim in the form of mental simulations, the generation of data for machine learning, and its use as a belief state for a real robot.
- Score: 18.869243389210492
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anticipating what might happen as a result of an action is an essential ability humans rely on to perform tasks effectively. Robots' capabilities in this regard, by contrast, are still quite limited. While machine learning is used to improve prospection, it remains limited in novel situations. One way to improve the prospection abilities of robots is to simulate imagined motions and the physical results of these actions. We therefore present URoboSim, a robot simulator that allows robots to perform tasks as mental simulations before performing them in reality. We show the capabilities of URoboSim in the form of mental simulations, the generation of data for machine learning, and its use as a belief state for a real robot.
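
To make the idea concrete, here is a minimal, engine-agnostic sketch of prospection by mental simulation: candidate plans are rolled forward in a simulator, and the robot executes the plan whose simulated episode scores best. URoboSim itself builds on a game engine; the `Episode` record, the scoring weights, and the stub simulator below are illustrative assumptions, not its API.

```python
# A minimal sketch of "mental simulation" for prospective reasoning.
# All names and the scoring function are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable, List, Sequence

@dataclass
class Episode:
    succeeded: bool
    duration: float   # simulated seconds
    collisions: int

def mentally_simulate(plan: Sequence[str], world: dict) -> Episode:
    # Toy stand-in: a real simulator would step physics from `world`
    # and log a full episodic memory of the imagined execution.
    return Episode(succeeded=len(plan) <= 4, duration=2.0 * len(plan), collisions=0)

def prospect(plans: List[Sequence[str]], world: dict,
             simulate: Callable[[Sequence[str], dict], Episode]) -> Sequence[str]:
    """Mentally simulate every candidate plan; return the most promising one."""
    def score(ep: Episode) -> float:
        return (1.0 if ep.succeeded else 0.0) - 0.01 * ep.duration - 0.1 * ep.collisions
    return max(plans, key=lambda p: score(simulate(p, world)))

# The chosen plan would then be executed on the real robot.
best = prospect([("open", "fetch", "place"), ("fetch", "place")], {}, mentally_simulate)
```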
Related papers
- Physical Simulation for Multi-agent Multi-machine Tending [11.017120167486448]
Reinforcement learning (RL) offers a promising solution, as robots can learn through interaction with the environment.
We leveraged a simple robotic system to work with RL using "real" data, without having to deploy large, expensive robots in a manufacturing setting.
arXiv Detail & Related papers (2024-10-11T17:57:44Z)
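
A toy sketch of the RL setting in the machine-tending entry above: a single agent learns which of several machines to service. The MDP, rewards, and tabular Q-learning here are illustrative assumptions, not the paper's setup.

```python
# Tabular Q-learning on a toy machine-tending MDP (illustrative only).
import random

N_MACHINES, EPISODES, HORIZON = 3, 500, 20
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1
Q = {}  # state (tuple of machine statuses) -> list of per-action values

def q(state):
    return Q.setdefault(state, [0.0] * N_MACHINES)

for _ in range(EPISODES):
    state = tuple(random.choice((0, 1)) for _ in range(N_MACHINES))  # 1 = idle
    for _ in range(HORIZON):
        a = (random.randrange(N_MACHINES) if random.random() < EPS
             else max(range(N_MACHINES), key=lambda i: q(state)[i]))
        reward = 1.0 if state[a] == 1 else -0.1  # tending an idle machine pays off
        nxt = list(state)
        nxt[a] = 0                               # the tended machine resumes working
        for i in range(N_MACHINES):              # other machines randomly fall idle
            if i != a and random.random() < 0.3:
                nxt[i] = 1
        nxt = tuple(nxt)
        q(state)[a] += ALPHA * (reward + GAMMA * max(q(nxt)) - q(state)[a])
        state = nxt
```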
- DrEureka: Language Model Guided Sim-To-Real Transfer [64.14314476811806]
Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale.
In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design.
Our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball.
arXiv Detail & Related papers (2024-06-04T04:53:05Z)
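
The DrEureka entry above suggests a simple pattern worth sketching: ask a language model to propose domain-randomization ranges, then evaluate candidates in simulation. `query_llm`, the parameter names, and the scoring stub are hypothetical stand-ins, not the paper's prompts or API.

```python
# Sketch of LLM-guided sim-to-real design (all names are assumptions).
import json

PROMPT = """You are tuning a quadruped simulator for sim-to-real transfer.
Propose JSON randomization ranges for: friction, added_mass_kg, motor_strength.
Example: {"friction": [0.5, 1.25], "added_mass_kg": [0.0, 1.0], "motor_strength": [0.9, 1.1]}"""

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; swap in your provider's client here."""
    return '{"friction": [0.4, 1.2], "added_mass_kg": [0.0, 2.0], "motor_strength": [0.85, 1.15]}'

def evaluate_in_sim(cfg: dict) -> float:
    # Stand-in: a real pipeline would train and score a policy under `cfg`.
    return -abs(cfg["friction"][1] - cfg["friction"][0])

def propose_and_rank(n_candidates: int = 4) -> dict:
    candidates = [json.loads(query_llm(PROMPT)) for _ in range(n_candidates)]
    return max(candidates, key=evaluate_in_sim)  # keep the best-performing config

best_cfg = propose_and_rank()
```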
- DiffGen: Robot Demonstration Generation via Differentiable Physics Simulation, Differentiable Rendering, and Vision-Language Model [72.66465487508556]
DiffGen is a novel framework that integrates differentiable physics simulation, differentiable rendering, and a vision-language model.
It can generate realistic robot demonstrations by minimizing the distance between the embedding of the language instruction and the embedding of the simulated observation.
Experiments demonstrate that with DiffGen, we could efficiently and effectively generate robot data with minimal human effort or training time.
arXiv Detail & Related papers (2024-05-12T15:38:17Z)
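
The DiffGen objective above can be sketched directly: optimize an action sequence by gradient descent so that the embedding of the simulated observation approaches the embedding of the instruction. The tiny differentiable "simulator", "renderer", and encoders below are toy stand-ins, not the paper's components.

```python
# Gradient-based demonstration generation through differentiable sim + rendering.
import torch

torch.manual_seed(0)
sim = torch.nn.Linear(8, 16)      # differentiable physics stand-in: actions -> state
render = torch.nn.Linear(16, 32)  # differentiable rendering stand-in: state -> image features
img_enc = torch.nn.Linear(32, 64) # vision tower of a VLM (stand-in)
txt_emb = torch.randn(64)         # frozen embedding of the language instruction (stand-in)

actions = torch.zeros(8, requires_grad=True)
opt = torch.optim.Adam([actions], lr=1e-2)

for step in range(200):
    obs_emb = img_enc(render(sim(actions)))
    loss = 1 - torch.nn.functional.cosine_similarity(obs_emb, txt_emb, dim=0)
    opt.zero_grad()
    loss.backward()  # gradients flow through rendering and physics to the actions
    opt.step()
```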
- Innate Motivation for Robot Swarms by Minimizing Surprise: From Simple Simulations to Real-World Experiments [6.21540494241516]
Large-scale mobile multi-robot systems can offer greater robustness and scalability than monolithic robots.
Developing controllers for multi-robot systems is challenging because the multitude of interactions is hard to anticipate and difficult to model.
Innate motivation tries to avoid the specific formulation of rewards and works instead with different drivers, such as curiosity.
A unique advantage of the swarm robot case is that swarm members populate the robot's environment and can trigger more active behaviors in a self-referential loop.
arXiv Detail & Related papers (2024-05-04T06:25:58Z)
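
The surprise-minimization drive in the swarm entry above reduces to a simple signal: intrinsic reward is the (negative) error of the robot's own sensor predictions. The 1-D world and linear predictor below are toy assumptions; the paper evolves neural controllers for swarm members.

```python
# Minimal "minimize surprise" loop: reward = how predictable the world is.
import numpy as np

rng = np.random.default_rng(0)
w = np.zeros(2)        # linear self-model: [sensor, action] -> next sensor
sensor, lr = 0.5, 0.05

for t in range(1000):
    action = rng.uniform(-1, 1)
    x = np.array([sensor, action])
    prediction = w @ x
    next_sensor = 0.9 * sensor + 0.1 * action + rng.normal(0, 0.01)
    surprise = (prediction - next_sensor) ** 2
    reward = -surprise                          # innate drive: be unsurprised
    w += lr * (next_sensor - prediction) * x    # improve the self-model online
    sensor = next_sensor
    # In the full method, `reward` would drive optimization of the controller
    # that chooses `action`; here the action stays random for brevity.
```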
- Teaching Robots to Build Simulations of Themselves [7.886658271375681]
We introduce a self-supervised learning framework that enables robots to model and predict their morphology, kinematics and motor control using only brief raw video data.
By observing their own movements, robots learn to simulate themselves and predict their spatial motion for various tasks.
arXiv Detail & Related papers (2023-11-20T20:03:34Z)
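
The self-modeling idea above can be sketched as fitting a network that maps joint commands to the keypoint positions the robot observes of itself. The 2-link arm "video" below is synthetic and the architecture is an assumption; the paper learns from raw camera footage.

```python
# Learning a visual self-model: joint angles -> observed end-effector position.
import torch

def observe(q):  # stand-in for keypoints extracted from video frames of a 2-link arm
    x = torch.cos(q[:, :1]) + torch.cos(q[:, :1] + q[:, 1:])
    y = torch.sin(q[:, :1]) + torch.sin(q[:, :1] + q[:, 1:])
    return torch.cat([x, y], dim=1)

self_model = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))
opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)

for step in range(2000):
    q = torch.rand(64, 2) * 3.14   # joint angles the robot moved through
    loss = torch.nn.functional.mse_loss(self_model(q), observe(q))
    opt.zero_grad(); loss.backward(); opt.step()

# The learned self-model now answers "where would my hand be?" without moving:
# self_model(torch.tensor([[0.3, 1.1]]))
```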
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
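
A sketch of masked sensorimotor pre-training in the spirit of the RPT entry above: mask random tokens in a sequence of sensorimotor embeddings and train a Transformer to reconstruct them. The dimensions, masking ratio, and random "tokens" are assumptions.

```python
# Masked prediction over sensorimotor token sequences (illustrative sizes).
import torch

D, SEQ, BATCH = 64, 32, 16
encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=D, nhead=4, batch_first=True),
    num_layers=2)
head = torch.nn.Linear(D, D)
mask_token = torch.nn.Parameter(torch.zeros(D))
opt = torch.optim.Adam([*encoder.parameters(), *head.parameters(), mask_token], lr=1e-4)

for step in range(100):
    tokens = torch.randn(BATCH, SEQ, D)   # stand-in for camera/proprioception/action tokens
    mask = torch.rand(BATCH, SEQ) < 0.5   # mask a large fraction of tokens
    inp = torch.where(mask.unsqueeze(-1), mask_token.expand_as(tokens), tokens)
    recon = head(encoder(inp))
    loss = ((recon - tokens)[mask] ** 2).mean()  # reconstruct only masked positions
    opt.zero_grad(); loss.backward(); opt.step()
```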
- Knowledge-Driven Robot Program Synthesis from Human VR Demonstrations [16.321053835017942]
We present a system for automatically generating executable robot control programs from human task demonstrations in virtual reality (VR).
We leverage common-sense knowledge and game engine-based physics to semantically interpret human VR demonstrations.
We demonstrate our approach in the context of force-sensitive fetch-and-place for a robotic shopping assistant.
arXiv Detail & Related papers (2023-06-05T09:37:53Z)
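
The program-synthesis entry above suggests a simple pipeline worth sketching: interpret a logged VR event stream against common-sense rules and compile it into plan steps. The event names and plan format are illustrative assumptions, not the paper's representation.

```python
# Compiling a VR demonstration's event log into robot plan steps (illustrative).
vr_events = [
    {"type": "grasp",   "object": "milk_box", "hand": "right"},
    {"type": "move_to", "object": "milk_box", "target": "shelf_2"},
    {"type": "release", "object": "milk_box", "hand": "right"},
]

def synthesize_program(events):
    plan = []
    for ev in events:
        if ev["type"] == "grasp":
            plan.append(("reach", ev["object"]))  # common-sense rule: grasping implies reaching
            plan.append(("grasp", ev["object"], ev["hand"]))
        elif ev["type"] == "move_to":
            plan.append(("transport", ev["object"], ev["target"]))
        elif ev["type"] == "release":
            plan.append(("place", ev["object"]))
    return plan

print(synthesize_program(vr_events))
```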
- Learning Human-to-Robot Handovers from Point Clouds [63.18127198174958]
We propose the first framework to learn control policies for vision-based human-to-robot handovers.
We show significant performance gains over baselines on a simulation benchmark, sim-to-sim transfer and sim-to-real transfer.
arXiv Detail & Related papers (2023-03-30T17:58:36Z)
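
A minimal sketch of a vision-based handover policy as in the entry above: a PointNet-style encoder (shared per-point MLP plus max-pooling) embeds the observed point cloud, and a policy head outputs an end-effector motion. The architecture sizes are assumptions.

```python
# Point-cloud-conditioned policy: per-point MLP + order-invariant pooling.
import torch

point_mlp = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.ReLU(), torch.nn.Linear(64, 128))
policy_head = torch.nn.Sequential(
    torch.nn.Linear(128, 64), torch.nn.ReLU(), torch.nn.Linear(64, 7))  # 6-DoF delta + gripper

def act(points: torch.Tensor) -> torch.Tensor:
    """points: (N, 3) cloud of the human hand and the handed-over object."""
    feats = point_mlp(points)               # (N, 128) per-point features
    global_feat = feats.max(dim=0).values   # max-pooling: invariant to point order
    return policy_head(global_feat)         # end-effector motion command

action = act(torch.randn(1024, 3))
```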
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
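
The do/undo idea behind MEDAL++ above can be sketched in a few lines: a forward policy attempts the task and a backward policy resets toward initial states, so practice continues without human resets. The toy environment and hand-coded policies are purely illustrative; MEDAL++ learns both policies with RL and infers the reward from demonstrations.

```python
# Alternating "do" and "undo" policies for reset-free autonomous practice.
class ToyEnv:
    """1-D toy task: the goal state is +5, the initial state is 0."""
    def __init__(self): self.x = 0
    def observe(self): return self.x
    def step(self, a): self.x += a; return self.x

forward_policy  = lambda x: +1 if x < 5 else 0  # "do": drive toward the goal
backward_policy = lambda x: -1 if x > 0 else 0  # "undo": drive back to the start

env = ToyEnv()
for episode in range(10):  # alternate do / undo, with no human resets in between
    policy = forward_policy if episode % 2 == 0 else backward_policy
    for _ in range(20):
        env.step(policy(env.observe()))
```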
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition, and demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.