Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across the
Workspace
- URL: http://arxiv.org/abs/2105.11283v1
- Date: Mon, 24 May 2021 14:12:38 GMT
- Title: Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across the
Workspace
- Authors: Eugene Valassakis, Norman Di Palo and Edward Johns
- Abstract summary: We study the problem of zero-shot sim-to-real when the task requires both highly precise control, with sub-millimetre error tolerance, and full workspace generalisation.
Our framework involves a coarse-to-fine controller, where trajectories initially begin with classical motion planning based on pose estimation, and transition to an end-to-end controller which maps images to actions and is trained in simulation with domain randomisation.
In this way, we achieve precise control whilst also generalising the controller across the workspace and keeping the generality and robustness of vision-based, end-to-end control.
- Score: 7.906608953906891
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When training control policies for robot manipulation via deep learning,
sim-to-real transfer can help satisfy the large data requirements. In this
paper, we study the problem of zero-shot sim-to-real when the task requires
both highly precise control, with sub-millimetre error tolerance, and full
workspace generalisation. Our framework involves a coarse-to-fine controller,
where trajectories initially begin with classical motion planning based on pose
estimation, and transition to an end-to-end controller which maps images to
actions and is trained in simulation with domain randomisation. In this way, we
achieve precise control whilst also generalising the controller across the
workspace and keeping the generality and robustness of vision-based, end-to-end
control. Real-world experiments on a range of different tasks show that, by
exploiting the best of both worlds, our framework significantly outperforms
purely motion planning methods, and purely learning-based methods. Furthermore,
we answer a range of questions on best practices for precise sim-to-real
transfer, such as how different image sensor modalities and image feature
representations perform.
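As a concrete illustration of the framework described above, the following Python sketch separates the two phases: an open-loop coarse phase driven by a single pose estimate and a placeholder straight-line planner, followed by a closed-loop fine phase driven by a small image-to-velocity network. This is a minimal sketch under assumed interfaces; the names (FinePolicy, plan_to_position, the robot, camera and pose-estimator objects) and the hand-over distance are illustrative placeholders, not the authors' implementation.

# Minimal sketch of a coarse-to-fine controller (assumed interfaces, orientation handling omitted).
import numpy as np
import torch
import torch.nn as nn

SWITCH_RADIUS = 0.05  # metres; assumed distance at which control is handed to the learned policy


class FinePolicy(nn.Module):
    """End-to-end policy mapping a wrist-camera image to an end-effector velocity.
    Assumed to have been trained entirely in simulation with domain randomisation."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 3),  # 3-DoF translational velocity, for simplicity
        )

    def forward(self, image):
        return self.net(image)


def plan_to_position(start, goal, n_steps=20):
    """Placeholder 'classical planner': straight-line waypoints between two positions.
    A real system would run a motion planner with collision checking."""
    return [start + (goal - start) * t for t in np.linspace(0.0, 1.0, n_steps)]


def coarse_to_fine_episode(robot, camera, pose_estimator, fine_policy, approach_offset):
    # Coarse phase: one pose estimate, then open-loop classical motion planning.
    object_position = pose_estimator.estimate(camera.get_image())
    approach = object_position + approach_offset
    for waypoint in plan_to_position(robot.get_ee_position(), approach):
        robot.move_to(waypoint)
        if np.linalg.norm(robot.get_ee_position() - approach) < SWITCH_RADIUS:
            break  # close enough: hand control over to the learned policy

    # Fine phase: closed-loop, vision-based end-to-end control for the last few centimetres.
    while not robot.task_done():
        image = torch.as_tensor(camera.get_image()).permute(2, 0, 1)[None].float() / 255.0
        velocity = fine_policy(image)[0].detach().numpy()
        robot.apply_ee_velocity(velocity)

The point of such a split is that the learned controller only ever has to act in a small neighbourhood of the object, while the classical planner provides coverage of the whole workspace.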
Related papers
- Contrastive Learning for Enhancing Robust Scene Transfer in Vision-based Agile Flight [21.728935597793473]
This work proposes an adaptive multi-pair contrastive learning strategy for visual representation learning that enables zero-shot scene transfer and real-world deployment.
We demonstrate the performance of our approach on the task of agile, vision-based quadrotor flight.
arXiv Detail & Related papers (2023-09-18T15:25:59Z)
- Robust Visual Sim-to-Real Transfer for Robotic Manipulation [79.66851068682779]
Learning visuomotor policies in simulation is much safer and cheaper than in the real world.
However, due to discrepancies between the simulated and real data, simulator-trained policies often fail when transferred to real robots.
One common approach to bridging the visual sim-to-real domain gap is domain randomization (DR).
arXiv Detail & Related papers (2023-07-28T05:47:24Z)
- Task2Sim: Towards Effective Pre-training and Transfer from Synthetic Data [74.66568380558172]
We study how well models pre-trained on synthetic data generated by graphics simulators transfer to downstream tasks.
We introduce Task2Sim, a unified model mapping downstream task representations to optimal simulation parameters.
It learns this mapping by training to find the best simulation parameters on a set of "seen" tasks.
Once trained, it can then predict the best simulation parameters for novel "unseen" tasks in one shot.
arXiv Detail & Related papers (2021-11-30T19:25:27Z)
- Optical Tactile Sim-to-Real Policy Transfer via Real-to-Sim Tactile Image Translation [21.82940445333913]
We present a suite of simulated environments tailored towards tactile robotics and reinforcement learning.
A data-driven approach enables translation of the current state of a real tactile sensor to corresponding simulated depth images.
The simulation-trained policy is implemented within a real-time control loop on a physical robot to demonstrate zero-shot sim-to-real policy transfer.
arXiv Detail & Related papers (2021-06-16T13:58:35Z)
- Sim-to-real reinforcement learning applied to end-to-end vehicle control [0.0]
We study end-to-end reinforcement learning on vehicle control problems, such as lane following and collision avoidance.
Our controller policy is able to steer a small-scale robot along the right-hand lane of a real two-lane road, even though its training was carried out entirely in simulation.
arXiv Detail & Related papers (2020-12-14T12:30:47Z)
- Reactive Long Horizon Task Execution via Visual Skill and Precondition Models [59.76233967614774]
We describe an approach for sim-to-real training that can accomplish unseen robotic tasks using models learned in simulation to ground components of a simple task planner.
We show an increase in success rate from 91.6% to 98% in simulation, and from 10% to 80% in the real world, compared with naive baselines.
arXiv Detail & Related papers (2020-11-17T15:24:01Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) provides powerful tools for solving complex robotic tasks.
However, policies trained with RL in simulation typically do not work directly in the real world, a difficulty known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
- Goal-Conditioned End-to-End Visuomotor Control for Versatile Skill Primitives [89.34229413345541]
We propose a conditioning scheme which avoids pitfalls by learning the controller and its conditioning in an end-to-end manner.
Our model predicts complex action sequences based directly on a dynamic image representation of the robot motion.
We report significant improvements in task success over representative MPC and IL baselines.
arXiv Detail & Related papers (2020-03-19T15:04:37Z)
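Both the abstract above and the Robust Visual Sim-to-Real Transfer entry mention domain randomisation (DR) as a standard way to bridge the visual sim-to-real gap. The sketch below shows one common form of visual DR applied while generating training images in simulation; the simulator handle (sim), its methods, the rng (a numpy Generator) and the randomisation ranges are assumptions for illustration, not taken from any of the papers listed.

# Minimal sketch of visual domain randomisation (assumed simulator API and ranges).
import numpy as np


def randomise_scene(sim, rng):
    """Randomise visual properties of the simulated scene before rendering a training image."""
    sim.set_light(
        position=rng.uniform([-1.0, -1.0, 1.0], [1.0, 1.0, 2.0]),  # light placed somewhere above the workspace
        intensity=rng.uniform(0.5, 1.5),
    )
    sim.set_object_colour(rng.uniform(0.0, 1.0, size=3))        # random RGB colour for the target object
    sim.set_table_texture(rng.integers(0, sim.num_textures))    # random texture index for the background
    sim.perturb_camera(
        position_noise=rng.normal(0.0, 0.01, size=3),           # metres
        rotation_noise=rng.normal(0.0, 0.02, size=3),           # radians
    )


def domain_randomised_batch(sim, rng, batch_size=64):
    """Render a batch of randomised observations paired with the simulator's expert action labels."""
    images, actions = [], []
    for _ in range(batch_size):
        randomise_scene(sim, rng)
        images.append(sim.render())          # H x W x 3 image
        actions.append(sim.expert_action())  # ground-truth action available in simulation
    return np.stack(images), np.stack(actions)


# Example usage (hypothetical simulator object):
# rng = np.random.default_rng(0)
# images, actions = domain_randomised_batch(sim, rng)

Randomising lighting, colours, textures and camera pose at every rendered frame encourages the policy to rely on cues that survive the transfer to real images.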
This list is automatically generated from the titles and abstracts of the papers on this site.