Sim-to-real for high-resolution optical tactile sensing: From images to 3D contact force distributions
- URL: http://arxiv.org/abs/2012.11295v2
- Date: Thu, 31 Dec 2020 11:45:11 GMT
- Title: Sim-to-real for high-resolution optical tactile sensing: From images to 3D contact force distributions
- Authors: Carmelo Sferrazza and Raffaello D'Andrea
- Abstract summary: This article proposes a strategy to generate tactile images in simulation for a vision-based tactile sensor based on an internal camera.
The deformation of the material is simulated in a finite-element environment under a diverse set of contact conditions, and spherical particles are projected onto a simulated image.
Features extracted from the images are mapped to the 3D contact force distribution, with the ground truth also obtained via finite-element simulations.
- Score: 5.939410304994348
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The images captured by vision-based tactile sensors carry information about
high-resolution tactile fields, such as the distribution of the contact forces
applied to their soft sensing surface. However, extracting the information
encoded in the images is challenging and often addressed with learning-based
approaches, which generally require a large amount of training data. This
article proposes a strategy to generate tactile images in simulation for a
vision-based tactile sensor based on an internal camera that tracks the motion
of spherical particles within a soft material. The deformation of the material
is simulated in a finite-element environment under a diverse set of contact
conditions, and the spherical particles are projected onto a simulated image.
Features extracted from the images are mapped to the 3D contact force
distribution, with the ground truth also obtained via finite-element
simulations, using an artificial neural network that is therefore entirely
trained on synthetic data, avoiding the need for real-world data collection. The
resulting model exhibits high accuracy when evaluated on real-world tactile
images, is transferable across multiple tactile sensors without further
training, and is suitable for efficient real-time inference.
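As a rough, hypothetical illustration of the pipeline described above, the sketch below projects simulated sub-surface particles onto a synthetic image with a pinhole camera model and bins their apparent displacements into a coarse feature grid of the kind a force-distribution regressor could consume. The function names, camera intrinsics, and grid size are invented for illustration, and a rigid shift stands in for the finite-element displacement field; this is not the authors' implementation.

```python
import numpy as np

def project_particles(points_3d, fx=200.0, fy=200.0, cx=120.0, cy=120.0):
    """Pinhole projection of (N, 3) particle positions; camera looks along +z."""
    x, y, z = points_3d[:, 0], points_3d[:, 1], points_3d[:, 2]
    u = fx * x / z + cx
    v = fy * y / z + cy
    return np.stack([u, v], axis=1)

def displacement_features(rest_uv, deformed_uv, img_size=240, grid=12):
    """Average 2D particle displacement in each cell of a grid x grid map."""
    disp = deformed_uv - rest_uv
    feats = np.zeros((grid, grid, 2))
    counts = np.zeros((grid, grid, 1))
    cell = img_size / grid
    cells = np.clip((rest_uv // cell).astype(int), 0, grid - 1)
    for (i, j), d in zip(cells, disp):
        feats[j, i] += d
        counts[j, i] += 1
    return feats / np.maximum(counts, 1)

# Toy usage: particles at rest vs. after a (fake) indentation standing in for
# the finite-element displacement field.
rng = np.random.default_rng(0)
rest = rng.uniform([-0.3, -0.3, 1.0], [0.3, 0.3, 1.5], size=(500, 3))
deformed = rest + np.array([0.0, 0.0, -0.02])   # placeholder for FEM output
features = displacement_features(project_particles(rest),
                                 project_particles(deformed))
print(features.shape)  # (12, 12, 2) -> input for a force-distribution regressor
```

In the article's setting, features of this kind would be computed for many simulated contact conditions and paired with the finite-element ground-truth force distributions, so the network can be trained on synthetic data only.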
Related papers
- Augmented Reality based Simulated Data (ARSim) with multi-view consistency for AV perception networks [47.07188762367792]
We present ARSim, a framework designed to enhance real multi-view image data with 3D synthetic objects of interest.
We construct a simplified virtual scene using real data and strategically place 3D synthetic assets within it.
The resulting augmented multi-view consistent dataset is used to train a multi-camera perception network for autonomous vehicles.
arXiv Detail & Related papers (2024-03-22T17:49:11Z)
- Sim2Real Bilevel Adaptation for Object Surface Classification using Vision-Based Tactile Sensors [14.835051543002164]
We train a Diffusion Model to bridge the Sim2Real gap in the field of vision-based tactile sensors for classifying object surfaces.
We employ a simulator to generate images by uniformly sampling the surface of objects from the YCB Model Set.
These simulated images are then translated into the real domain using the Diffusion Model and automatically labeled to train a classifier.
arXiv Detail & Related papers (2023-11-02T16:37:27Z)
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints [26.118805500471066]
Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts.
This paper focuses on learning to synthesize the mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors.
A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning.
arXiv Detail & Related papers (2022-03-29T00:24:10Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of the particles during contact (a toy spring-grid version of this idea is sketched after this list).
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Optical Tactile Sim-to-Real Policy Transfer via Real-to-Sim Tactile Image Translation [21.82940445333913]
We present a suite of simulated environments tailored towards tactile robotics and reinforcement learning.
A data-driven approach enables translation of the current state of a real tactile sensor to corresponding simulated depth images.
A policy trained in these simulated environments is implemented within a real-time control loop on a physical robot to demonstrate zero-shot sim-to-real policy transfer.
arXiv Detail & Related papers (2021-06-16T13:58:35Z)
- Learning optical flow from still images [53.295332513139925]
We introduce a framework to generate accurate ground-truth optical flow annotations quickly and in large amounts from any readily available single real picture.
We virtually move the camera in the reconstructed environment with known motion vectors and rotation angles (a toy version of this known-motion reprojection step is sketched after this list).
When trained with our data, state-of-the-art optical flow networks achieve superior generalization to unseen real data.
arXiv Detail & Related papers (2021-04-08T17:59:58Z)
- gradSim: Differentiable simulation for system identification and visuomotor control [66.37288629125996]
We present gradSim, a framework that overcomes the dependence on 3D supervision by leveraging differentiable multiphysics simulation and differentiable rendering.
Our unified computation graph enables learning in challenging visuomotor control tasks, without relying on state-based (3D) supervision.
arXiv Detail & Related papers (2021-04-06T16:32:01Z)
- Intrinsic Autoencoders for Joint Neural Rendering and Intrinsic Image Decomposition [67.9464567157846]
We propose an autoencoder for joint generation of realistic images from synthetic 3D models while simultaneously decomposing real images into their intrinsic shape and appearance properties.
Our experiments confirm that a joint treatment of rendering and decomposition is indeed beneficial and that our approach outperforms state-of-the-art image-to-image translation baselines both qualitatively and quantitatively.
arXiv Detail & Related papers (2020-06-29T12:53:58Z)
- Learning the sense of touch in simulation: a sim-to-real strategy for vision-based tactile sensing [1.9981375888949469]
This paper focuses on a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface.
A strategy is proposed to train a tailored deep neural network entirely from the simulation data.
The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data.
arXiv Detail & Related papers (2020-03-05T14:17:45Z)
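For the "Elastic Tactile Simulation Towards Tactile-Visual Perception" (EIP) entry above, here is a toy spring-grid sketch of the elastic-coupling idea: a grid of particles is joined by neighbour spring forces and pressed by a rigid indenter, so the coupling regulates how the deformation spreads. This is not EIP's actual formulation; the grid size, stiffness constants, and indenter profile are invented for illustration.

```python
import numpy as np

def simulate_indentation(n=20, k_spring=5.0, k_contact=50.0, steps=200, dt=0.01):
    """Relax a grid of spring-coupled particles under a rigid indenter."""
    xs, ys = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    z = np.zeros((n, n))        # vertical displacement of each surface particle
    vel = np.zeros((n, n))
    # Fixed indenter profile pressing the centre of the surface down by ~0.1.
    indent = -0.1 * np.exp(-((xs - 0.5) ** 2 + (ys - 0.5) ** 2) / 0.01)
    for _ in range(steps):
        # Elastic coupling: each particle is pulled toward the mean height of
        # its four neighbours (a discrete Laplacian acting as a spring force).
        lap = (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
               np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z)
        force = k_spring * lap
        # Unilateral contact: push down only particles sitting above the
        # indenter surface (proportional to their penetration depth).
        force += k_contact * np.minimum(indent - z, 0.0)
        vel = 0.9 * (vel + dt * force)      # damped explicit integration
        z += dt * vel
    return z

z = simulate_indentation()
print(round(float(z.min()), 3))  # deepest displacement, close to the 0.1 depth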
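For the "Learning optical flow from still images" entry, a minimal sketch of the known-motion reprojection step: given a depth map and a chosen camera motion, each pixel is back-projected, rigidly transformed, and reprojected, and the resulting pixel displacement is the ground-truth flow. The intrinsics, constant depth, and motion below are made-up values, and the paper itself reconstructs the scene from the single input image rather than assuming a given depth map.

```python
import numpy as np

def flow_from_depth(depth, K, R, t):
    """Ground-truth flow (H, W, 2) for a rigid transform (R, t) between frames."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)   # (H, W, 3)
    # Back-project pixels to 3D points in the first camera frame.
    pts = depth[..., None] * (pix @ np.linalg.inv(K).T)
    # Transform the points into the second (virtually moved) camera frame.
    pts2 = pts @ R.T + t
    # Reproject and read the flow off the pixel displacement.
    proj = pts2 @ K.T
    uv2 = proj[..., :2] / proj[..., 2:3]
    return uv2 - pix[..., :2]

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
depth = np.full((480, 640), 2.0)          # flat scene 2 m from the camera
R = np.eye(3)                             # no rotation, pure translation
t = np.array([0.05, 0.0, 0.0])            # 5 cm sideways shift between frames
flow = flow_from_depth(depth, K, R, t)
print(flow.mean(axis=(0, 1)))  # ~[12.5, 0] pixels: f * tx / depth = 500*0.05/2.0
```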