DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction
- URL: http://arxiv.org/abs/2201.01367v1
- Date: Tue, 4 Jan 2022 22:26:14 GMT
- Title: DenseTact: Optical Tactile Sensor for Dense Shape Reconstruction
- Authors: Won Kyung Do and Monroe Kennedy III
- Abstract summary: Vision-based tactile sensors have been widely used because rich tactile feedback has been shown to correlate with increased performance in manipulation tasks.
Existing tactile sensor solutions with high resolution have limitations that include low accuracy, expensive components, or lack of scalability.
This paper proposes an inexpensive, scalable, and compact tactile sensor with high-resolution surface deformation modeling for 3D reconstruction of the sensor surface.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Increasing the performance of tactile sensing in robots enables
versatile in-hand manipulation. Vision-based tactile sensors have been widely
used because rich tactile feedback has been shown to correlate with increased
performance in manipulation tasks. Existing tactile sensor solutions with high
resolution have limitations that include low accuracy, expensive components, or
lack of scalability. This paper proposes an inexpensive, scalable, and compact
tactile sensor with high-resolution surface deformation modeling for 3D
reconstruction of the sensor surface. Using images from a fisheye camera, the
sensor is shown to estimate surface deformation in real time (1.8 ms) with deep
convolutional neural networks. In both its design and sensing abilities, this
sensor represents a significant step toward better in-hand object localization,
classification, and surface estimation, all enabled by high-resolution shape
reconstruction.
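As a rough, hypothetical sketch of the kind of pipeline the abstract describes (not the authors' published architecture), an encoder-decoder CNN can map a single fisheye tactile image to a per-pixel deformation map; all layer sizes below are illustrative assumptions.

```python
# Hypothetical sketch of a dense-deformation network in the spirit of the
# abstract: a CNN maps one fisheye tactile image to a per-pixel estimate of
# the gel surface deformation. Layer sizes are illustrative, not DenseTact's.
import torch
import torch.nn as nn

class DeformationNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: downsample the raw fisheye image while widening channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        # Decoder: upsample back to input resolution; the single output
        # channel holds the estimated deformation per pixel.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, image):  # image: (B, 3, H, W), H and W divisible by 8
        return self.decoder(self.encoder(image))  # (B, 1, H, W) deformation

net = DeformationNet().eval()
with torch.no_grad():
    depth = net(torch.rand(1, 3, 128, 128))  # one fake fisheye frame
print(depth.shape)  # torch.Size([1, 1, 128, 128])
```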
Related papers
- TouchSDF: A DeepSDF Approach for 3D Shape Reconstruction using Vision-Based Tactile Sensing [29.691786688595762]
Humans rely on their visual and tactile senses to develop a comprehensive 3D understanding of their physical environment.
We propose TouchSDF, a Deep Learning approach for tactile 3D shape reconstruction.
Our technique consists of two components: (1) a Convolutional Neural Network that maps tactile images into local meshes representing the surface at the touch location, and (2) an implicit neural function that predicts a signed distance function to extract the desired 3D shape.
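For context, a DeepSDF-style implicit function is an MLP that maps a latent shape code and a 3D query point to a signed distance; the sketch below is a minimal, hypothetical version of that second component, not TouchSDF's trained model.

```python
# Minimal DeepSDF-style implicit function: an MLP maps a latent shape code z
# and a 3D query point x to a signed distance. Sizes and the surface
# extraction step are illustrative assumptions only.
import torch
import torch.nn as nn

LATENT_DIM = 256

sdf_net = nn.Sequential(
    nn.Linear(LATENT_DIM + 3, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 1),  # signed distance: negative inside, positive outside
)

z = torch.randn(LATENT_DIM)            # latent code for one shape
points = torch.rand(1024, 3) * 2 - 1   # query points in [-1, 1]^3
inputs = torch.cat([z.expand(1024, -1), points], dim=1)
distances = sdf_net(inputs)            # (1024, 1) signed distances

# The surface is the zero level set {x : SDF(z, x) = 0}; in practice a mesh
# is extracted with marching cubes over a dense grid of query points.
near_surface = points[distances.squeeze(1).abs() < 0.05]
```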
arXiv Detail & Related papers (2023-11-21T13:43:06Z)
- On the Importance of Accurate Geometry Data for Dense 3D Vision Tasks [61.74608497496841]
Training on inaccurate or corrupt data induces model bias and hampers generalisation capabilities.
This paper investigates the effect of sensor errors for the dense 3D vision tasks of depth estimation and reconstruction.
arXiv Detail & Related papers (2023-03-26T22:32:44Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- AGO-Net: Association-Guided 3D Point Cloud Object Detection Network [86.10213302724085]
We propose a novel 3D detection framework that associates intact features for objects via domain adaptation.
We achieve new state-of-the-art performance on the KITTI 3D detection benchmark in both accuracy and speed.
arXiv Detail & Related papers (2022-08-24T16:54:38Z)
- Learning to Synthesize Volumetric Meshes from Vision-based Tactile Imprints [26.118805500471066]
Vision-based tactile sensors typically utilize a deformable elastomer and a camera mounted above to provide high-resolution image observations of contacts.
This paper focuses on learning to synthesize the mesh of the elastomer based on the image imprints acquired from vision-based tactile sensors.
A graph neural network (GNN) is introduced to learn the image-to-mesh mappings with supervised learning.
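As a hedged illustration of the supervised image-to-mesh idea (the layer below is a generic hypothetical one, not the paper's architecture), one message-passing step updates each mesh vertex from its own features and the mean of its neighbors':

```python
# Hypothetical single message-passing layer over a mesh graph: each vertex
# aggregates the mean of its neighbors' features, then shared linear maps
# update it. This is a generic GNN layer, not the paper's network.
import torch
import torch.nn as nn

class MeshGNNLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.self_map = nn.Linear(dim, dim)
        self.neigh_map = nn.Linear(dim, dim)

    def forward(self, x, edges):
        # x: (V, dim) vertex features; edges: (E, 2) directed vertex pairs.
        src, dst = edges[:, 0], edges[:, 1]
        agg = torch.zeros_like(x).index_add_(0, dst, x[src])  # sum neighbors
        deg = torch.zeros(x.size(0), 1).index_add_(
            0, dst, torch.ones(edges.size(0), 1))             # in-degree
        agg = agg / deg.clamp(min=1)                          # mean neighbors
        return torch.relu(self.self_map(x) + self.neigh_map(agg))

# Toy triangle mesh: 3 vertices, bidirectional edges.
edges = torch.tensor([[0, 1], [1, 0], [1, 2], [2, 1], [0, 2], [2, 0]])
layer = MeshGNNLayer(dim=16)
out = layer(torch.randn(3, 16), edges)  # (3, 16) updated vertex features
```

With supervision, such a layer would be stacked and trained to regress ground-truth vertex positions of the deformed elastomer mesh.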
arXiv Detail & Related papers (2022-03-29T00:24:10Z)
- A soft thumb-sized vision-based sensor with accurate all-round force perception [19.905154050561013]
Vision-based haptic sensors have emerged as a promising approach to robotic touch due to affordable high-resolution cameras and successful computer-vision techniques.
We present a robust, soft, low-cost, vision-based, thumb-sized 3D haptic sensor named Insight.
arXiv Detail & Related papers (2021-11-10T20:46:23Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
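A toy version of the particle idea (a generic spring-mass relaxation, not the paper's EIP formulation) treats the sensor pad as a grid of particles whose springs regulate deformation after a contact displaces one of them:

```python
# Toy spring-mass step illustrating the particle idea: the sensor pad is a
# grid of particles, and springs between grid neighbors regulate deformation.
# Stiffness, step size, and grid size are arbitrary illustrative choices.
import numpy as np

N, K, REST = 20, 0.5, 1.0          # grid size, spring stiffness, rest length
pos = np.stack(np.meshgrid(np.arange(N), np.arange(N), indexing="ij"), -1)
pos = pos.astype(float)            # (N, N, 2) particle positions in the plane
pos[N // 2, N // 2] += [0.0, 3.0]  # "contact": push one particle sideways

for _ in range(100):               # relax toward elastic equilibrium
    force = np.zeros_like(pos)
    for axis in (0, 1):            # springs to grid neighbors along each axis
        d = np.diff(pos, axis=axis)                    # neighbor offsets
        length = np.linalg.norm(d, axis=-1, keepdims=True)
        f = K * (length - REST) * d / np.maximum(length, 1e-9)
        pad = [(0, 0)] * 3
        pad[axis] = (0, 1)
        force += np.pad(f, pad)                        # pull toward next node
        pad[axis] = (1, 0)
        force -= np.pad(f, pad)                        # reaction on next node
    pos += 0.1 * force             # damped explicit update

print(pos[N // 2, N // 2])         # displaced particle settles back
```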
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Active 3D Shape Reconstruction from Vision and Touch [66.08432412497443]
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch.
In 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings.
We introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile priors to guide the shape exploration; and 3) a set of data-driven solutions with either tactile or visuotactile priors.
arXiv Detail & Related papers (2021-07-20T15:56:52Z)
- GelSight Wedge: Measuring High-Resolution 3D Contact Geometry with a Compact Robot Finger [8.047951969722794]
The GelSight Wedge sensor is optimized to have a compact shape for robot fingers while achieving high-resolution 3D reconstruction.
We show the effectiveness and potential of the reconstructed 3D geometry for pose tracking in the 3D space.
arXiv Detail & Related papers (2021-06-16T15:15:29Z)
- Monocular Depth Estimation for Soft Visuotactile Sensors [24.319343057803973]
We investigate the application of state-of-the-art monocular depth estimation to infer dense internal (tactile) depth maps directly from a single small internal IR imaging sensor.
We show that deep networks typically used for long-range depth estimation (1-100m) can be effectively trained for precise predictions at a much shorter range (1-100mm) inside a mostly textureless deformable fluid-filled sensor.
We propose a simple supervised learning process to train an object-agnostic network, requiring fewer than 10 random contact poses collected in under 10 seconds for a small set of diverse objects.
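In spirit, this is ordinary supervised depth regression with millimetre-scale targets; the sketch below is a hypothetical stand-in loop, with `model` and the data pairs as placeholders rather than the paper's network or dataset.

```python
# Hypothetical supervised loop for short-range (1-100 mm) tactile depth
# estimation: a depth network is trained on IR images paired with ground-truth
# internal depth maps. `model` and `pairs` are illustrative placeholders.
import torch
import torch.nn.functional as F

model = torch.nn.Conv2d(1, 1, 3, padding=1)  # stand-in for a depth network
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-in dataset: (IR image, depth map in millimetres) pairs; per the
# abstract, fewer than 10 contact poses suffice for training.
pairs = [(torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64) * 100.0)
         for _ in range(8)]

for epoch in range(5):
    for ir_image, depth_mm in pairs:
        pred_mm = model(ir_image)            # dense depth prediction (mm)
        loss = F.l1_loss(pred_mm, depth_mm)  # per-pixel regression loss
        optim.zero_grad()
        loss.backward()
        optim.step()
```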
arXiv Detail & Related papers (2021-01-05T17:51:11Z)
- OmniTact: A Multi-Directional High Resolution Touch Sensor [109.28703530853542]
Existing tactile sensors are either flat, have small sensitive fields or only provide low-resolution signals.
We introduce OmniTact, a multi-directional high-resolution tactile sensor.
We evaluate the capabilities of OmniTact on a challenging robotic control task.
arXiv Detail & Related papers (2020-03-16T01:31:29Z)