Reactive Long Horizon Task Execution via Visual Skill and Precondition Models
- URL: http://arxiv.org/abs/2011.08694v2
- Date: Wed, 14 Jul 2021 14:07:25 GMT
- Title: Reactive Long Horizon Task Execution via Visual Skill and Precondition Models
- Authors: Shohin Mukherjee, Chris Paxton, Arsalan Mousavian, Adam Fishman, Maxim
Likhachev, Dieter Fox
- Abstract summary: We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation and from 10% to 80% in the real world compared with naive baselines.
- Score: 59.76233967614774
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Zero-shot execution of unseen robotic tasks is important for allowing robots to perform a wide variety of tasks in human environments, but collecting the amount of data necessary to train end-to-end policies in the real world is often infeasible. We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner. We learn a library of parameterized skills, along with a set of predicate-based preconditions and termination conditions, entirely in simulation. We explore a block-stacking task because it has a clear structure in which multiple skills must be chained together, but our methods are applicable to a wide range of other problems and domains and can transfer from simulation to the real world with no fine-tuning. The system is able to recognize failures and accomplish long-horizon tasks from perceptual input, which is critical for real-world execution. We evaluate our proposed approach both in simulation and in the real world, showing an increase in success rate from 91.6% to 98% in simulation and from 10% to 80% in the real world compared with naive baselines. For experiment videos including both real-world and simulation runs, see:
https://www.youtube.com/playlist?list=PL-oD0xHUngeLfQmpngYkGFZarstfPOXqX
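The abstract describes a library of parameterized skills, each paired with learned precondition and termination predicates, chained by a simple reactive planner. The following is a minimal, hypothetical sketch of that structure; all names and interfaces are illustrative assumptions, not the authors' actual code, and the learned visual models are stood in for by plain boolean predicates:

```python
# Hypothetical sketch: a skill library with precondition/termination predicates
# driven by a greedy reactive executor. Illustrative only; in the paper these
# predicates are learned visual models, not hand-written checks.
from dataclasses import dataclass
from typing import Callable, List

State = dict  # stand-in for perceptual state (e.g., predicted predicates)

@dataclass
class Skill:
    name: str
    precondition: Callable[[State], bool]  # may the skill start in this state?
    termination: Callable[[State], bool]   # did the skill succeed?
    execute: Callable[[State], State]      # low-level controller (simulated here)

def plan_and_execute(skills: List[Skill], state: State,
                     goal: Callable[[State], bool], max_steps: int = 10) -> bool:
    """Repeatedly pick an applicable skill until the goal predicate holds.
    Re-checking preconditions and terminations each step makes execution
    reactive to failures: a failed skill simply becomes eligible again."""
    for _ in range(max_steps):
        if goal(state):
            return True
        applicable = [s for s in skills if s.precondition(state)]
        if not applicable:
            return False  # no skill can make progress from this state
        state = applicable[0].execute(state)
    return goal(state)

# Toy block-stacking chain: pick must precede place.
pick = Skill("pick", lambda s: not s["holding"], lambda s: s["holding"],
             lambda s: {**s, "holding": True})
place = Skill("place", lambda s: s["holding"], lambda s: s["stacked"],
              lambda s: {**s, "holding": False, "stacked": True})
result = plan_and_execute([pick, place], {"holding": False, "stacked": False},
                          lambda s: s["stacked"])
print(result)  # True
```

The key design point mirrored here is that skill ordering is never hard-coded: the chain pick-then-place emerges from the precondition predicates alone, which is what lets the same executor recover when a skill fails mid-task.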
Related papers
- Dynamics as Prompts: In-Context Learning for Sim-to-Real System Identifications [23.94013806312391]
We propose a novel approach that dynamically adjusts simulation environment parameters online using in-context learning.
We validate our approach across two tasks: object scooping and table air hockey.
Our approach delivers efficient and smooth system identification, advancing the deployment of robots in dynamic real-world scenarios.
arXiv Detail & Related papers (2024-10-27T07:13:38Z)
- Grounded Curriculum Learning [37.95557495560936]
Existing curriculum learning techniques automatically vary the simulation task distribution without considering its relevance to the real world.
We propose Grounded Curriculum Learning (GCL), which aligns the simulated task distribution in the curriculum with the real world.
We validate GCL using the BARN dataset on complex navigation tasks, achieving a 6.8% and 6.5% higher success rate compared to a state-of-the-art CL method and a curriculum designed by human experts.
arXiv Detail & Related papers (2024-09-29T22:54:08Z)
- DrEureka: Language Model Guided Sim-To-Real Transfer [64.14314476811806]
Transferring policies learned in simulation to the real world is a promising strategy for acquiring robot skills at scale.
In this paper, we investigate using Large Language Models (LLMs) to automate and accelerate sim-to-real design.
Our approach is capable of solving novel robot tasks, such as quadruped balancing and walking atop a yoga ball.
arXiv Detail & Related papers (2024-06-04T04:53:05Z)
- TRANSIC: Sim-to-Real Policy Transfer by Learning from Online Correction [25.36756787147331]
Learning in simulation and transferring the learned policy to the real world has the potential to enable generalist robots.
We propose a data-driven approach to enable successful sim-to-real transfer based on a human-in-the-loop framework.
We show that our approach can achieve successful sim-to-real transfer in complex and contact-rich manipulation tasks such as furniture assembly.
arXiv Detail & Related papers (2024-05-16T17:59:07Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Practical Imitation Learning in the Real World via Task Consistency Loss [18.827979446629296]
This paper introduces a self-supervised loss that encourages sim and real alignment both at the feature and action-prediction levels.
We achieve 80% success across ten seen and unseen scenes using only 16.2 hours of teleoperated demonstrations in sim and real.
arXiv Detail & Related papers (2022-02-03T21:43:06Z)
- TrafficSim: Learning to Simulate Realistic Multi-Agent Behaviors [74.67698916175614]
We propose TrafficSim, a multi-agent behavior model for realistic traffic simulation.
In particular, we leverage an implicit latent variable model to parameterize a joint actor policy.
We show TrafficSim generates significantly more realistic and diverse traffic scenarios as compared to a diverse set of baselines.
arXiv Detail & Related papers (2021-01-17T00:29:30Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often fail when deployed directly in the real world, a challenge known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.