TACTO: A Fast, Flexible and Open-source Simulator for High-Resolution
Vision-based Tactile Sensors
- URL: http://arxiv.org/abs/2012.08456v1
- Date: Tue, 15 Dec 2020 17:54:07 GMT
- Title: TACTO: A Fast, Flexible and Open-source Simulator for High-Resolution
Vision-based Tactile Sensors
- Authors: Shaoxiong Wang, Mike Lambeta, Po-Wei Chou, Roberto Calandra
- Abstract summary: TACTO is a fast, flexible and open-source simulator for vision-based tactile sensors.
It can render realistic high-resolution touch readings at hundreds of frames per second.
We demonstrate TACTO on a perceptual task, by learning to predict grasp stability using touch from 1 million grasps.
- Score: 8.497185333795477
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Simulators play an important role in prototyping, debugging and
benchmarking new advances in robotics and learning for control. Although many
physics engines exist, some aspects of the real world are harder than others to
simulate. One aspect that has so far eluded accurate simulation is touch
sensing. To address this gap, we present TACTO -- a fast, flexible and
open-source simulator for vision-based tactile sensors. The simulator renders
realistic high-resolution touch readings at hundreds of frames per second, and
can be easily configured to simulate different vision-based tactile sensors,
including GelSight, DIGIT and OmniTact. In this paper, we detail the principles
that drove the implementation of TACTO and how they are reflected in its
architecture. We demonstrate TACTO on a perceptual task, learning to predict
grasp stability from the touch readings of 1 million grasps, and on a marble
manipulation control task. We believe that TACTO is a step towards the
widespread adoption of touch sensing in robotic applications and towards
enabling machine learning practitioners interested in multi-modal learning and
control to incorporate touch into their work.
TACTO is open-source at https://github.com/facebookresearch/tacto.
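To illustrate how such a simulator is typically driven, the sketch below renders simulated DIGIT readings inside a PyBullet scene. It is a minimal sketch in the spirit of the example scripts shipped with the TACTO repository; the URDF paths, constructor arguments, and the pybulletX helper classes are assumptions drawn from those examples and may differ across versions.

    # Minimal sketch: rendering simulated DIGIT touch readings with TACTO + PyBullet.
    # Assumes `pip install tacto pybullet pybulletX`; paths and argument names follow
    # the repository's example scripts and are assumptions, not a fixed API.
    import pybullet as p
    import pybulletX as px
    import tacto

    # Create a simulated DIGIT-style sensor; width/height set the tactile image size.
    digits = tacto.Sensor(width=120, height=160, visualize_gui=False)

    px.init()  # start the PyBullet physics server

    # Load the sensor body and attach TACTO's virtual camera to its base link.
    digit_body = px.Body(urdf_path="meshes/digit.urdf", use_fixed_base=True)
    digits.add_camera(digit_body.id, [-1])

    # Register an object whose contacts should show up in the tactile image.
    obj = px.Body(urdf_path="objects/sphere_small.urdf",
                  base_position=[-0.015, 0.0, 0.04])
    digits.add_body(obj)

    # Step physics and render color/depth touch readings (one image per camera).
    for _ in range(240):
        p.stepSimulation()
        color, depth = digits.render()

In the grasp-stability experiment mentioned in the abstract, tactile images rendered this way would then be fed to a learned classifier; that downstream model is outside the scope of this sketch.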
Related papers
- Digitizing Touch with an Artificial Multimodal Fingertip [51.7029315337739]
Humans and robots both benefit from using touch to perceive and interact with the surrounding environment.
Here, we describe several conceptual and technological innovations to improve the digitization of touch.
These advances are embodied in an artificial finger-shaped sensor with advanced sensing capabilities.
arXiv Detail & Related papers (2024-11-04T18:38:50Z)
- Learning In-Hand Translation Using Tactile Skin With Shear and Normal Force Sensing [43.269672740168396]
We introduce a sensor model for tactile skin that enables zero-shot sim-to-real transfer of ternary shear and binary normal forces.
We conduct extensive real-world experiments to assess how tactile sensing facilitates policy adaptation to various unseen object properties.
arXiv Detail & Related papers (2024-07-10T17:52:30Z)
- VR-LENS: Super Learning-based Cybersickness Detection and Explainable AI-Guided Deployment in Virtual Reality [1.9642496463491053]
This work presents an explainable artificial intelligence (XAI)-based framework VR-LENS for developing cybersickness detection ML models.
We first develop a novel super learning-based ensemble ML model for cybersickness detection.
Our proposed method identified eye tracking, player position, and galvanic skin/heart rate response as the most dominant features for the integrated sensor, gameplay, and bio-physiological datasets.
arXiv Detail & Related papers (2023-02-03T20:15:51Z)
- Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z)
- VISTA 2.0: An Open, Data-driven Simulator for Multimodal Sensing and Policy Learning for Autonomous Vehicles [131.2240621036954]
We present VISTA, an open source, data-driven simulator that integrates multiple types of sensors for autonomous vehicles.
Using high fidelity, real-world datasets, VISTA represents and simulates RGB cameras, 3D LiDAR, and event-based cameras.
We demonstrate the ability to train and test perception-to-control policies across each of the sensor types and showcase the power of this approach via deployment on a full scale autonomous vehicle.
arXiv Detail & Related papers (2021-11-23T18:58:10Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- A Differentiable Recipe for Learning Visual Non-Prehensile Planar Manipulation [63.1610540170754]
We focus on the problem of visual non-prehensile planar manipulation.
We propose a novel architecture that combines video decoding neural models with priors from contact mechanics.
We find that our modular and fully differentiable architecture performs better than learning-only methods on unseen objects and motions.
arXiv Detail & Related papers (2021-11-09T18:39:45Z)
- Learning to Fly -- a Gym Environment with PyBullet Physics for Reinforcement Learning of Multi-agent Quadcopter Control [0.0]
We propose an open-source environment for multiple quadcopters based on the Bullet physics engine.
Its multi-agent and vision-based reinforcement learning interfaces, together with its support for realistic collisions and aerodynamic effects, make it, to the best of our knowledge, a first of its kind.
arXiv Detail & Related papers (2021-03-03T02:47:59Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields, or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.