Optical Tactile Sim-to-Real Policy Transfer via Real-to-Sim Tactile Image Translation
- URL: http://arxiv.org/abs/2106.08796v1
- Date: Wed, 16 Jun 2021 13:58:35 GMT
- Title: Optical Tactile Sim-to-Real Policy Transfer via Real-to-Sim Tactile Image Translation
- Authors: Alex Church, John Lloyd, Raia Hadsell and Nathan F. Lepora
- Abstract summary: We present a suite of simulated environments tailored towards tactile robotics and reinforcement learning.
A data-driven approach enables translation of the current state of a real tactile sensor to corresponding simulated depth images.
The learned policy is implemented within a real-time control loop on a physical robot to demonstrate zero-shot sim-to-real policy transfer.
- Score: 21.82940445333913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulation has recently become key for deep reinforcement learning to safely
and efficiently acquire general and complex control policies from visual and
proprioceptive inputs. Tactile information is not usually considered despite
its direct relation to environment interaction. In this work, we present a
suite of simulated environments tailored towards tactile robotics and
reinforcement learning. A simple and fast method of simulating optical tactile
sensors is provided, where high-resolution contact geometry is represented as
depth images. Proximal Policy Optimisation (PPO) is used to learn successful
policies across all considered tasks. A data-driven approach enables
translation of the current state of a real tactile sensor to corresponding
simulated depth images. This policy is implemented within a real-time control
loop on a physical robot to demonstrate zero-shot sim-to-real policy transfer
on several physically-interactive tasks requiring a sense of touch.
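The real-to-sim pipeline described in the abstract can be sketched as a simple control tick: a real tactile image is translated into the simulated depth-image domain, and the simulation-trained policy acts on the translated image. The functions below are illustrative stand-ins (the paper's actual translation network and PPO policy are learned models), not the authors' implementation.

```python
import numpy as np

# Hypothetical stand-ins for the paper's components: a real-to-sim
# generator mapping a real tactile image into the simulated depth-image
# domain, and a policy trained purely in simulation (e.g. with PPO).
def real_to_sim(tactile_img: np.ndarray) -> np.ndarray:
    # Placeholder for the learned image-translation network: here we
    # just normalise intensities to [0, 1] as a stand-in "depth image".
    lo, hi = tactile_img.min(), tactile_img.max()
    return (tactile_img - lo) / (hi - lo + 1e-8)

def policy(depth_img: np.ndarray) -> np.ndarray:
    # Placeholder policy: an action proportional to mean contact depth.
    return np.array([depth_img.mean()])

def control_step(tactile_img: np.ndarray) -> np.ndarray:
    """One tick of the real-time loop: translate, then act."""
    depth = real_to_sim(tactile_img)
    return policy(depth)

rng = np.random.default_rng(0)
action = control_step(rng.integers(0, 256, (64, 64)).astype(float))
```

The key design point is that the policy only ever sees simulated-style depth images, both during training and at deployment, so no domain randomisation of the real sensor is needed at policy-training time.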
Related papers
- Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins [17.412763585521688]
We present the Visuo-Skin (ViSk) framework, a simple approach that uses a transformer-based policy and treats skin sensor data as additional tokens alongside visual information.
ViSk significantly outperforms both vision-only and optical tactile sensing based policies.
Further analysis reveals that combining tactile and visual modalities enhances policy performance and spatial generalization, achieving an average improvement of 27.5% across tasks.
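The token-level fusion described for ViSk can be illustrated with a minimal sketch: skin-sensor readings are linearly embedded and appended as extra tokens to the visual token sequence before the transformer policy. All names, dimensions, and the random projections below are our own illustrative assumptions, not ViSk's architecture.

```python
import numpy as np

# Hedged sketch: embed skin readings and visual patches into a shared
# model dimension, then concatenate along the token axis.
def make_tokens(visual_patches, skin_readings, d_model=32, seed=0):
    rng = np.random.default_rng(seed)
    w_vis = rng.normal(size=(visual_patches.shape[1], d_model))
    w_skin = rng.normal(size=(skin_readings.shape[1], d_model))
    vis_tokens = visual_patches @ w_vis    # (n_patches, d_model)
    skin_tokens = skin_readings @ w_skin   # (n_sensors, d_model)
    # Skin data rides alongside vision as additional tokens.
    return np.concatenate([vis_tokens, skin_tokens], axis=0)

tokens = make_tokens(np.ones((16, 48)), np.ones((4, 12)))
```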
arXiv Detail & Related papers (2024-10-22T17:59:49Z)
- Evaluating Real-World Robot Manipulation Policies in Simulation [91.55267186958892]
Control and visual disparities between real and simulated environments are key challenges for reliable simulated evaluation.
We propose approaches for mitigating these gaps without needing to craft full-fidelity digital twins of real-world environments.
We create SIMPLER, a collection of simulated environments for manipulation policy evaluation on common real robot setups.
arXiv Detail & Related papers (2024-05-09T17:30:16Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibility of sim-to-real transfer for dexterous manipulation across diverse hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high-fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
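The elastic-particle idea behind EIP can be caricatured in a few lines: the sensor surface is a set of particles, and an elastic (spring-like) restoring force regulates their deformation after contact. The update rule below is our own minimal simplification, not EIP's actual particle dynamics.

```python
import numpy as np

# Toy sketch (our own simplification of the EIP idea): displaced
# particles are iteratively pulled back toward their rest positions by
# an explicit spring-like update.
def relax(positions, rest, k=0.5, steps=50):
    """Relax deformed particles toward their rest configuration."""
    p = positions.copy()
    for _ in range(steps):
        p += k * (rest - p)  # elastic restoring step
    return p

rest = np.zeros((10, 3))            # flat sensor surface at rest
pressed = rest.copy()
pressed[:, 2] = -1.0                # an indenter pushes every particle down
relaxed = relax(pressed, rest)
```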
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Coarse-to-Fine for Sim-to-Real: Sub-Millimetre Precision Across the Workspace [7.906608953906891]
We study the problem of zero-shot sim-to-real when the task requires both highly precise control, with sub-millimetre error tolerance, and full workspace generalisation.
Our framework involves a coarse-to-fine controller, where trajectories initially begin with classical motion planning based on pose estimation, and transition to an end-to-end controller which maps images to actions and is trained in simulation with domain randomisation.
In this way, we achieve precise control while also generalising across the workspace and retaining the robustness of vision-based, end-to-end control.
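The hand-over logic of such a coarse-to-fine controller can be sketched as a distance-gated switch: far from the target, a classical pose-based step is taken; within a threshold, control passes to a learned end-to-end policy. Both steps below are illustrative stand-ins, not the paper's controllers.

```python
import numpy as np

def coarse_step(pose, target):
    # Stand-in for classical motion planning from a pose estimate.
    return 0.5 * (target - pose)

def fine_step(pose, target):
    # Stand-in for the learned image-to-action end-to-end controller.
    return 0.9 * (target - pose)

def controller(pose, target, switch_dist=0.05):
    """Switch from coarse to fine control near the target."""
    dist = np.linalg.norm(target - pose)
    return fine_step(pose, target) if dist < switch_dist else coarse_step(pose, target)

pose, target = np.array([0.0, 0.0]), np.array([0.3, 0.4])
for _ in range(30):
    pose = pose + controller(pose, target)
```

The design choice is that the fine controller only ever operates in a small neighbourhood of the target, which is exactly the regime domain-randomised simulation training can cover densely.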
arXiv Detail & Related papers (2021-05-24T14:12:38Z)
- Sim-to-real for high-resolution optical tactile sensing: From images to 3D contact force distributions [5.939410304994348]
This article proposes a strategy to generate tactile images in simulation for a vision-based tactile sensor based on an internal camera.
The deformation of the material is simulated in a finite element environment under a diverse set of contact conditions, and spherical particles are projected to a simulated image.
Features extracted from the images are mapped to the 3D contact force distribution, with the ground truth also obtained via finite-element simulations.
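The mapping step can be illustrated with a toy regression: synthetic image features are fit to a 3D force distribution whose ground truth would, in the paper, come from finite-element simulation. Plain linear least squares is our simplification of the paper's learned mapping; all data below is synthetic.

```python
import numpy as np

# Hedged sketch: regress per-frame image features onto 3D contact
# forces, standing in for the paper's learned feature-to-force mapping.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 8))   # synthetic per-frame image features
true_map = rng.normal(size=(8, 3))     # unknown feature-to-force relation
forces = features @ true_map           # FEM-style "ground truth" forces

w, *_ = np.linalg.lstsq(features, forces, rcond=None)
pred = features @ w
```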
arXiv Detail & Related papers (2020-12-21T12:43:33Z)
- COCOI: Contact-aware Online Context Inference for Generalizable Non-planar Pushing [87.7257446869134]
General contact-rich manipulation problems are long-standing challenges in robotics.
Deep reinforcement learning has shown great potential in solving robot manipulation tasks.
We propose COCOI, a deep RL method that encodes a context embedding of dynamics properties online.
arXiv Detail & Related papers (2020-11-23T08:20:21Z)
- Point Cloud Based Reinforcement Learning for Sim-to-Real and Partial Observability in Visual Navigation [62.22058066456076]
Reinforcement Learning (RL) is a powerful tool for solving complex robotic tasks.
However, policies trained in simulation often do not work directly in the real world, a difficulty known as the sim-to-real transfer problem.
We propose a method that learns on an observation space constructed by point clouds and environment randomization.
arXiv Detail & Related papers (2020-07-27T17:46:59Z)
- Learning the sense of touch in simulation: a sim-to-real strategy for vision-based tactile sensing [1.9981375888949469]
This paper focuses on a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface.
A strategy is proposed to train a tailored deep neural network entirely from the simulation data.
The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data.
arXiv Detail & Related papers (2020-03-05T14:17:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.