Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints
- URL: http://arxiv.org/abs/2203.15155v1
- Date: Tue, 29 Mar 2022 00:24:10 GMT
- Title: Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints
- Authors: Xinghao Zhu, Siddarth Jain, Masayoshi Tomizuka, and Jeroen van Baar
- Abstract summary: Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts.
This paper focuses on learning to synthesize the mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors.
A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning.
- Score: 26.118805500471066
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based tactile sensors typically utilize a deformable elastomer and a
camera mounted above to provide high-resolution image observations of contacts.
Obtaining accurate volumetric meshes for the deformed elastomer can provide
direct contact information and benefit robotic grasping and manipulation. This
paper focuses on learning to synthesize the volumetric mesh of the elastomer
based on the image imprints acquired from vision-based tactile sensors.
Synthetic image-mesh pairs and real-world images are gathered from 3D finite
element methods (FEM) and physical sensors, respectively. A graph neural
network (GNN) is introduced to learn the image-to-mesh mappings with supervised
learning. A self-supervised adaptation method and image augmentation techniques
are proposed to transfer networks from simulation to reality, from primitive
contacts to unseen contacts, and from one sensor to another. Using these
learned and adapted networks, our proposed method can accurately reconstruct
the deformation of the real-world tactile sensor elastomer in various domains,
as indicated by the quantitative and qualitative results.
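The abstract above describes the pipeline at a high level: FEM-generated image-mesh pairs supervise a GNN that maps tactile images to volumetric mesh deformations. As a purely illustrative sketch (not the authors' implementation; the layer sizes, the adjacency-based message passing, and the L2 loss are all assumptions), one way such an image-to-mesh network could be wired up is:

```python
# Hypothetical sketch: tactile image -> per-vertex mesh displacement via a
# simple GNN. Not the authors' implementation; architecture is illustrative.
import torch
import torch.nn as nn

class ImageToMeshGNN(nn.Module):
    def __init__(self, n_vertices, feat_dim=64, rounds=3):
        super().__init__()
        # CNN encoder turns the tactile imprint into one global feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.vertex_embed = nn.Embedding(n_vertices, feat_dim)
        self.msg = nn.Linear(2 * feat_dim, feat_dim)   # message from neighbors
        self.out = nn.Linear(feat_dim, 3)              # xyz displacement
        self.rounds = rounds

    def forward(self, image, adj):
        # adj: (n_vertices, n_vertices) 0/1 adjacency of the rest-state mesh.
        g = self.encoder(image)                                     # (B, F)
        h = self.vertex_embed.weight.unsqueeze(0) + g.unsqueeze(1)  # (B, V, F)
        deg = adj.sum(1, keepdim=True).clamp(min=1)
        for _ in range(self.rounds):
            neigh = (adj @ h) / deg                     # mean neighbor feature
            h = h + torch.relu(self.msg(torch.cat([h, neigh], dim=-1)))
        return self.out(h)                              # (B, V, 3) displacements

# Supervised training on FEM-generated image/mesh pairs would minimize, e.g.,
# loss = ((model(img, adj) - gt_disp) ** 2).mean()
```

A supervised loop over the FEM pairs would then minimize the per-vertex displacement error, with the abstract's self-supervised adaptation and image augmentation stages layered on top for the sim-to-real and cross-sensor transfers.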
Related papers
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
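The entry above mentions measuring the overlap between synthetic and real eye images with dimensionality reduction. A toy version of that measurement, assuming PCA and a centroid-distance score (both illustrative choices, not necessarily the paper's):

```python
# Hypothetical sketch of measuring sim/real overlap after dimensionality
# reduction; PCA and the centroid metric are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA

def embedding_overlap(synthetic, real, n_components=2):
    """Project flattened images from both domains into a shared PCA space and
    report the distance between domain centroids (smaller = more overlap)."""
    pca = PCA(n_components=n_components).fit(np.vstack([synthetic, real]))
    zs, zr = pca.transform(synthetic), pca.transform(real)
    return np.linalg.norm(zs.mean(axis=0) - zr.mean(axis=0))

# synthetic, real: (N, H*W) arrays of flattened grayscale eye images
```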
- CNN-based Methods for Object Recognition with High-Resolution Tactile Sensors [0.0]
A high-resolution tactile sensor has been attached to a robotic end-effector to identify contacted objects.
Two CNN-based approaches have been employed to classify pressure images.
arXiv Detail & Related papers (2023-05-21T09:54:12Z)
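For the CNN-based tactile recognition entry above, a minimal pressure-image classifier might look as follows; the channel counts and the 10-class output are assumptions made for illustration:

```python
# Minimal sketch of a CNN classifier over single-channel tactile pressure
# images, in the spirit of the entry above; sizes are assumptions.
import torch.nn as nn

pressure_classifier = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.LazyLinear(10),  # logits for 10 hypothetical object classes
)
# Usage: logits = pressure_classifier(torch.randn(8, 1, 64, 64))
```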
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
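The EIP entry above models the sensor as elastically coordinated particles. A toy spring-damper relaxation in that spirit (the actual EIP formulation differs; this is only a schematic model):

```python
# Illustrative particle relaxation: particles are pushed by contact forces and
# pulled back elastically toward their rest positions. Toy model, not EIP.
import numpy as np

def relax(rest, contact_force, stiffness=5.0, damping=0.8, dt=0.01, steps=200):
    pos, vel = rest.copy(), np.zeros_like(rest)
    for _ in range(steps):
        elastic = -stiffness * (pos - rest)   # pull back toward the rest shape
        vel = damping * (vel + dt * (elastic + contact_force))
        pos += dt * vel
    return pos                                 # deformed particle positions

# rest: (N, 3) particle rest positions; contact_force: (N, 3) per-particle load
```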
- Intriguing Properties of Vision Transformers [114.28522466830374]
Vision transformers (ViT) have demonstrated impressive performance across various machine vision problems.
We systematically study these properties via an extensive set of experiments and comparisons with a high-performing convolutional neural network (CNN).
We show that the effective features of ViTs are due to flexible and dynamic receptive fields made possible by the self-attention mechanism.
arXiv Detail & Related papers (2021-05-21T17:59:18Z)
- Sim-to-real for high-resolution optical tactile sensing: From images to 3D contact force distributions [5.939410304994348]
This article proposes a strategy to generate tactile images in simulation for a vision-based tactile sensor based on an internal camera.
The deformation of the material is simulated in a finite element environment under a diverse set of contact conditions, and spherical particles are projected to a simulated image.
Features extracted from the images are mapped to the 3D contact force distribution, with the ground truth also obtained via finite-element simulations.
arXiv Detail & Related papers (2020-12-21T12:43:33Z)
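The sim-to-real entry above projects simulated spherical particles into a synthetic tactile image. A simplified orthographic splatting of particle positions, with the scale and image size as assumed parameters:

```python
# Hypothetical sketch of projecting tracked spherical particles to a simulated
# tactile image; the orthographic camera and nearest-pixel splatting are
# simplifying assumptions, not the paper's rendering model.
import numpy as np

def render_particles(particles, h=64, w=64, scale=100.0):
    """particles: (N, 3) positions in meters; returns an (h, w) intensity
    image where brightness encodes particle depth (z)."""
    img = np.zeros((h, w))
    u = np.clip((particles[:, 0] * scale + w / 2).astype(int), 0, w - 1)
    v = np.clip((particles[:, 1] * scale + h / 2).astype(int), 0, h - 1)
    img[v, u] = particles[:, 2]   # nearest-pixel splat, depth as intensity
    return img
```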
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos); a minimal distillation sketch follows this entry.
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
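The SAKDN entry above distills knowledge from wearable-sensor teachers into an RGB-video student. A generic soft-label distillation loss of the kind such frameworks build on (the actual SAKDN objectives are richer than this):

```python
# Standard soft-label knowledge distillation (Hinton-style), shown here as a
# generic building block; not the specific SAKDN loss.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Match softened teacher/student class distributions plus the hard labels.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```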
- 3D Shape Reconstruction from Vision and Touch [62.59044232597045]
In 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored.
We introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects.
arXiv Detail & Related papers (2020-07-07T20:20:33Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields, or provide only low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)
- Learning the sense of touch in simulation: a sim-to-real strategy for vision-based tactile sensing [1.9981375888949469]
This paper focuses on a vision-based tactile sensor, which aims to reconstruct the distribution of the three-dimensional contact forces applied on its soft surface.
A strategy is proposed to train a tailored deep neural network entirely from the simulation data.
The resulting learning architecture is directly transferable across multiple tactile sensors without further training and yields accurate predictions on real data.
arXiv Detail & Related papers (2020-03-05T14:17:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.