myGym: Modular Toolkit for Visuomotor Robotic Tasks
- URL: http://arxiv.org/abs/2012.11643v1
- Date: Mon, 21 Dec 2020 19:15:05 GMT
- Title: myGym: Modular Toolkit for Visuomotor Robotic Tasks
- Authors: Michal Vavrecka, Nikita Sokovnin, Megi Mejdrechova, Gabriela Sejnova,
Marek Otahal
- Abstract summary: myGym is a novel virtual robotic toolkit developed for reinforcement learning (RL), intrinsic motivation and imitation learning tasks trained in a 3D simulator.
The modular structure of the simulator enables users to train and validate their algorithms on a large number of scenarios with various robots, environments and tasks.
The toolkit provides pretrained visual modules for visuomotor tasks allowing rapid prototyping, and, moreover, users can customize the visual submodules and retrain with their own set of objects.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel virtual robotic toolkit myGym, developed for
reinforcement learning (RL), intrinsic motivation and imitation learning tasks
trained in a 3D simulator. The trained tasks can then be easily transferred to
real-world robotic scenarios. The modular structure of the simulator enables
users to train and validate their algorithms on a large number of scenarios
with various robots, environments and tasks. Compared to existing toolkits
(e.g. OpenAI Gym, Roboschool) which are suitable for classical RL, myGym is
also prepared for visuomotor (combining vision & movement) unsupervised tasks
that require intrinsic motivation, i.e. the robots are able to generate their
own goals. There are also collaborative scenarios intended for human-robot
interaction. The toolkit provides pretrained visual modules for visuomotor
tasks allowing rapid prototyping, and, moreover, users can customize the visual
submodules and retrain with their own set of objects. In practice, the user
selects the desired environment, robot, objects, task and type of reward as
simulation parameters, and the training, visualization and testing themselves
are handled automatically. The user can thus fully focus on development of the
neural network architecture while controlling the behaviour of the environment
using predefined parameters.
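The workflow described above (pick the robot, task, objects and reward as parameters, then let the toolkit handle training and evaluation) maps naturally onto a Gym-style API. The snippet below is only a minimal sketch of that idea: the environment id "Gym-v0" and the keyword arguments are illustrative assumptions rather than myGym's documented configuration keys, and any Gym-compatible RL library (Stable-Baselines3 here) could drive the training loop; consult the myGym repository for the actual entry points.

```python
# Minimal sketch of the parametric workflow described in the abstract.
# ASSUMPTIONS: the environment id "Gym-v0" and the keyword arguments below
# are illustrative placeholders, not myGym's documented API.
import gym                          # classic Gym interface (4-tuple step API)
from stable_baselines3 import PPO   # any Gym-compatible RL library works here

# The scene is chosen purely through parameters ...
env = gym.make(
    "Gym-v0",                       # hypothetical myGym environment id
    robot="kuka",                   # which robot arm to load
    task_type="reach",              # e.g. reach / push / pick-and-place
    reward_type="sparse",           # sparse vs. shaped reward
    used_objects=["cube", "ball"],  # objects spawned in the scene
)

# ... while training itself stays a standard RL loop, so the user can
# focus on the policy / network architecture.
# (Use a CNN-based policy instead if the observations are camera images.)
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)

# Quick evaluation roll-out with the trained policy.
obs = env.reset()
for _ in range(200):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
```

Even in this sketch the design intent is visible: the scene is fully determined by a handful of declared parameters, so switching to a different robot, task or reward only changes the arguments, not the training code.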
Related papers
- LLARVA: Vision-Action Instruction Tuning Enhances Robot Learning [50.99807031490589]
We introduce LLARVA, a model trained with a novel instruction tuning method to unify a range of robotic learning tasks, scenarios, and environments.
We generate 8.5M image-visual trace pairs from the Open X-Embodiment dataset in order to pre-train our model.
Experiments demonstrate that LLARVA performs well compared to several contemporary baselines.
arXiv Detail & Related papers (2024-06-17T17:55:29Z)
- RoboScript: Code Generation for Free-Form Manipulation Tasks across Real and Simulation [77.41969287400977]
This paper presents RobotScript, a platform for a deployable robot manipulation pipeline powered by code generation.
We also present a benchmark for code generation for robot manipulation tasks specified in free-form natural language.
We demonstrate the adaptability of our code generation framework across multiple robot embodiments, including the Franka and UR5 robot arms.
arXiv Detail & Related papers (2024-02-22T15:12:00Z)
- RoboGen: Towards Unleashing Infinite Data for Automated Robot Learning via Generative Simulation [68.70755196744533]
RoboGen is a generative robotic agent that automatically learns diverse robotic skills at scale via generative simulation.
Our work attempts to extract the extensive and versatile knowledge embedded in large-scale models and transfer it to the field of robotics.
arXiv Detail & Related papers (2023-11-02T17:59:21Z)
- Gen2Sim: Scaling up Robot Learning in Simulation with Generative Models [17.757495961816783]
Gen2Sim is a method for scaling up robot skill learning in simulation by automating generation of 3D assets, task descriptions, task decompositions and reward functions.
Our work contributes hundreds of simulated assets, tasks and demonstrations, taking a step towards fully autonomous robotic manipulation skill acquisition in simulation.
arXiv Detail & Related papers (2023-10-27T17:55:32Z)
- Robot Learning with Sensorimotor Pre-training [98.7755895548928]
We present a self-supervised sensorimotor pre-training approach for robotics.
Our model, called RPT, is a Transformer that operates on sequences of sensorimotor tokens.
We find that sensorimotor pre-training consistently outperforms training from scratch, has favorable scaling properties, and enables transfer across different tasks, environments, and robots.
arXiv Detail & Related papers (2023-06-16T17:58:10Z)
- Self-Improving Robots: End-to-End Autonomous Visuomotor Reinforcement Learning [54.636562516974884]
In imitation and reinforcement learning, the cost of human supervision limits the amount of data that robots can be trained on.
In this work, we propose MEDAL++, a novel design for self-improving robotic systems.
The robot autonomously practices the task by learning to both do and undo the task, simultaneously inferring the reward function from the demonstrations.
arXiv Detail & Related papers (2023-03-02T18:51:38Z)
- PACT: Perception-Action Causal Transformer for Autoregressive Robotics Pre-Training [25.50131893785007]
This work introduces a paradigm for pre-training a general purpose representation that can serve as a starting point for multiple tasks on a given robot.
We present the Perception-Action Causal Transformer (PACT), a generative transformer-based architecture that aims to build representations directly from robot data in a self-supervised fashion.
We show that finetuning small task-specific networks on top of the larger pretrained model results in significantly better performance compared to training a single model from scratch for all tasks simultaneously.
arXiv Detail & Related papers (2022-09-22T16:20:17Z)
- Learning and Executing Re-usable Behaviour Trees from Natural Language Instruction [1.4824891788575418]
Behaviour trees can be used in conjunction with natural language instruction to provide a robust and modular control architecture.
We show how behaviour trees generated using our approach can be generalised to novel scenarios.
We validate this work against an existing corpus of natural language instructions.
arXiv Detail & Related papers (2021-06-03T07:47:06Z)
- TANGO: Commonsense Generalization in Predicting Tool Interactions for Mobile Manipulators [15.61285199988595]
We introduce TANGO, a novel neural model for predicting task-specific tool interactions.
TANGO encodes the world state, comprising objects and the symbolic relationships between them, using a graph neural network.
We show that by augmenting the representation of the environment with pre-trained embeddings derived from a knowledge-base, the model can generalize effectively to novel environments.
arXiv Detail & Related papers (2021-05-05T18:11:57Z)
- SAPIEN: A SimulAted Part-based Interactive ENvironment [77.4739790629284]
SAPIEN is a realistic and physics-rich simulated environment that hosts a large-scale set of articulated objects.
We evaluate state-of-the-art vision algorithms for part detection and motion attribute recognition as well as demonstrate robotic interaction tasks.
arXiv Detail & Related papers (2020-03-19T00:11:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.