Semi-Supervised Disentanglement of Tactile Contact Geometry from Sliding-Induced Shear
- URL: http://arxiv.org/abs/2208.12500v1
- Date: Fri, 26 Aug 2022 08:30:19 GMT
- Title: Semi-Supervised Disentanglement of Tactile Contact Geometry from Sliding-Induced Shear
- Authors: Anupam K. Gupta, Alex Church, Nathan F. Lepora
- Abstract summary: The sense of touch is fundamental to human dexterity.
When mimicked in robotic touch, particularly by use of soft optical tactile sensors, it suffers from distortion due to motion-dependent shear.
In this work, we pursue a semi-supervised approach to remove shear while preserving contact-only information.
- Score: 12.004939546183355
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The sense of touch is fundamental to human dexterity. When mimicked in
robotic touch, particularly by use of soft optical tactile sensors, it suffers
from distortion due to motion-dependent shear. This complicates tactile tasks
like shape reconstruction and exploration that require information about
contact geometry. In this work, we pursue a semi-supervised approach to remove
shear while preserving contact-only information. We validate our approach by
showing a match between the model-generated unsheared images and their
counterparts obtained by vertically tapping onto the object. The model-generated
unsheared images give a faithful reconstruction of the contact geometry otherwise
masked by shear, along with robust estimation of object pose, which is then used
for sliding exploration and full reconstruction of several planar shapes. We show
that our semi-supervised approach achieves performance comparable to its fully
supervised counterpart across all validation tasks with an order of magnitude
less supervision. The semi-supervised method is thus more efficient in both
computation and labeled samples. We expect it will have broad applicability to a
wide range of complex tactile exploration and manipulation tasks performed via a
shear-sensitive sense of touch.
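As a rough illustration of the kind of pipeline the abstract describes, the sketch below trains a small encoder-decoder to map sheared tactile images to tap-like unsheared images, combining a supervised loss on a few labelled (sheared, tap) pairs with a consistency term on a much larger unlabelled set. This is an assumption about the setup, not the paper's actual architecture or losses; `UnshearNet`, the loss weights, and the image sizes are hypothetical.

```python
# Hypothetical sketch of a semi-supervised image-to-image "unshearing" model.
# Architecture and losses are illustrative assumptions, not the paper's method.
import torch
import torch.nn as nn

class UnshearNet(nn.Module):
    """Small conv encoder-decoder: sheared tactile image -> unsheared estimate."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def semi_supervised_step(model, opt, sheared_lab, tap_lab, sheared_unlab, w_unsup=0.1):
    """One training step: supervised loss on the few labelled (sheared, tap) pairs
    plus a consistency term on unlabelled sheared images."""
    opt.zero_grad()
    sup = nn.functional.mse_loss(model(sheared_lab), tap_lab)
    # Unsupervised term (assumption): the unsheared estimate should be stable
    # under small perturbations of the sheared input.
    noisy = sheared_unlab + 0.01 * torch.randn_like(sheared_unlab)
    unsup = nn.functional.mse_loss(model(sheared_unlab), model(noisy))
    loss = sup + w_unsup * unsup
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    model = UnshearNet()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    sheared_lab = torch.rand(8, 1, 64, 64)     # small labelled batch
    tap_lab = torch.rand(8, 1, 64, 64)         # matching tap (unsheared) images
    sheared_unlab = torch.rand(64, 1, 64, 64)  # much larger unlabelled batch
    print(semi_supervised_step(model, opt, sheared_lab, tap_lab, sheared_unlab))
```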
Related papers
- GEARS: Local Geometry-aware Hand-object Interaction Synthesis [38.75942505771009]
We introduce a novel joint-centered sensor designed to reason about local object geometry near potential interaction regions.
As an important step towards mitigating the learning complexity, we transform the points from the global frame to a template hand frame and use a shared module to process the sensor features of each individual joint.
This is followed by a perceptual-temporal transformer network aimed at capturing correlation among the joints in different dimensions.
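A minimal sketch of the joint-local frame transform described above, under assumed shapes for the joints and object points; `to_joint_frames`, the shared MLP, and the max-pooling choice are hypothetical, not the paper's exact design.

```python
# Hypothetical sketch: express object points near the hand in each joint's local
# frame so one shared network can process every joint. Shapes are assumptions.
import torch
import torch.nn as nn

def to_joint_frames(points, joint_pos, joint_rot):
    """points: (N, 3) global object points; joint_pos: (J, 3) joint positions;
    joint_rot: (J, 3, 3) joint orientations. Returns (J, N, 3)."""
    rel = points.unsqueeze(0) - joint_pos.unsqueeze(1)      # (J, N, 3)
    # p_local = R^T (p - t): rotate world offsets into each joint frame
    return torch.einsum('jab,jna->jnb', joint_rot, rel)

shared_joint_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 64))

if __name__ == "__main__":
    pts = torch.rand(256, 3)
    jp = torch.rand(16, 3)
    jr = torch.eye(3).repeat(16, 1, 1)
    local = to_joint_frames(pts, jp, jr)                 # (16, 256, 3)
    feats = shared_joint_mlp(local).max(dim=1).values    # (16, 64) per-joint feature
    print(feats.shape)
```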
arXiv Detail & Related papers (2024-04-02T09:18:52Z)
- Snap-it, Tap-it, Splat-it: Tactile-Informed 3D Gaussian Splatting for Reconstructing Challenging Surfaces [34.831730064258494]
We propose Tactile-Informed 3DGS, a novel approach that incorporates touch data (local depth maps) with multi-view vision data to achieve surface reconstruction and novel view synthesis.
By creating a framework that decreases the transmittance at touch locations, we achieve a refined surface reconstruction, ensuring a uniformly smooth depth map.
We conduct evaluation on objects with glossy and reflective surfaces and demonstrate the effectiveness of our approach.
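A rough sketch of the transmittance idea as summarised above: Gaussians near tactile depth-map points get their opacity raised (transmittance lowered). The update rule, radius, and the name `boost_opacity_near_touch` are assumptions, not the paper's implementation.

```python
# Hypothetical sketch: push up the opacity of Gaussians whose centres lie close
# to touch-measured surface points. Radius and target opacity are assumptions.
import torch

def boost_opacity_near_touch(gauss_xyz, gauss_opacity, touch_pts, radius=0.01, target=0.99):
    """gauss_xyz: (G, 3) Gaussian centres; gauss_opacity: (G,) in [0, 1];
    touch_pts: (T, 3) surface points from local tactile depth maps."""
    d = torch.cdist(gauss_xyz, touch_pts)       # (G, T) pairwise distances
    near = d.min(dim=1).values < radius         # Gaussians close to any touch point
    new_opacity = gauss_opacity.clone()
    new_opacity[near] = torch.maximum(new_opacity[near],
                                      torch.full_like(new_opacity[near], target))
    return new_opacity

if __name__ == "__main__":
    xyz = torch.rand(1000, 3)
    alpha = torch.rand(1000)
    touch = torch.rand(50, 3)
    print(boost_opacity_near_touch(xyz, alpha, touch).shape)
```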
arXiv Detail & Related papers (2024-03-29T16:30:17Z)
- Learning Explicit Contact for Implicit Reconstruction of Hand-held Objects from Monocular Images [59.49985837246644]
We show how to model contacts in an explicit way to benefit the implicit reconstruction of hand-held objects.
In the first part, we propose a new subtask of directly estimating 3D hand-object contacts from a single image.
In the second part, we introduce a novel method to diffuse estimated contact states from the hand mesh surface to nearby 3D space.
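One plausible reading of "diffusing" contact states into nearby 3D space is a distance-weighted interpolation from hand-mesh vertices to query points; the kernel, bandwidth, and function name below are illustrative assumptions rather than the paper's method.

```python
# Hypothetical sketch: a query point inherits a contact value from hand-mesh
# vertices, weighted by an exponential falloff in distance.
import torch

def diffuse_contact(query_pts, hand_verts, vert_contact, bandwidth=0.01):
    """query_pts: (Q, 3) points in 3D space; hand_verts: (V, 3) hand mesh vertices;
    vert_contact: (V,) contact probability per vertex. Returns (Q,) contact field."""
    d = torch.cdist(query_pts, hand_verts)       # (Q, V)
    w = torch.softmax(-d / bandwidth, dim=1)     # nearer vertices dominate
    return w @ vert_contact

if __name__ == "__main__":
    q = torch.rand(2048, 3)
    v = torch.rand(778, 3)   # 778 vertices, the size of a MANO hand mesh
    c = torch.rand(778)
    print(diffuse_contact(q, v, c).shape)        # torch.Size([2048])
```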
arXiv Detail & Related papers (2023-05-31T17:59:26Z)
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are being widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Planning Visual-Tactile Precision Grasps via Complementary Use of Vision and Touch [9.31776719215139]
We propose an approach to grasp planning that explicitly reasons about where the fingertips should contact the estimated object surface.
Key to our method's success is the use of visual surface estimation for initial planning to encode the contact constraint.
We show that our method successfully synthesises and executes precision grasps for previously unseen objects using surface estimates from a single camera view.
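A toy sketch of encoding the contact constraint against a visually estimated surface: candidate fingertip placements are accepted only if they lie within a tolerance of the estimated surface points. The point-cloud surface proxy, tolerance, and names are assumptions, not the paper's planner.

```python
# Hypothetical sketch: check that every candidate fingertip position lies on
# (within a tolerance of) the visually estimated object surface.
import torch

def contact_constraint_ok(fingertips, surface_pts, tol=0.005):
    """fingertips: (F, 3) candidate fingertip positions;
    surface_pts: (S, 3) points sampled from the visually estimated surface."""
    dist_to_surface = torch.cdist(fingertips, surface_pts).min(dim=1).values  # (F,)
    return bool((dist_to_surface < tol).all())

if __name__ == "__main__":
    tips = torch.rand(2, 3)
    surf = torch.rand(5000, 3)
    print(contact_constraint_ok(tips, surf))
```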
arXiv Detail & Related papers (2022-12-16T17:32:56Z)
- Learning to Detect Slip with Barometric Tactile Sensors and a Temporal Convolutional Neural Network [7.346580429118843]
We present a learning-based method to detect slip using barometric tactile sensors.
We train a temporal convolutional neural network to detect slip, achieving high detection accuracies.
We argue that barometric tactile sensing technology, combined with data-driven learning, is suitable for many manipulation tasks such as slip compensation.
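A minimal sketch of a temporal convolutional slip detector over windows of barometric taxel readings; the channel counts, dilation pattern, window length, and the `SlipTCN` name are assumptions rather than the paper's architecture.

```python
# Hypothetical sketch: 1-D dilated convolutions over a window of barometric
# taxel readings, ending in a binary slip / no-slip prediction.
import torch
import torch.nn as nn

class SlipTCN(nn.Module):
    def __init__(self, n_taxels=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_taxels, 32, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
            nn.Conv1d(32, 32, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):                     # x: (batch, n_taxels, time)
        h = self.net(x).mean(dim=2)           # pool over time
        return torch.sigmoid(self.head(h))    # probability of slip

if __name__ == "__main__":
    model = SlipTCN()
    window = torch.rand(4, 24, 100)   # 4 windows, 24 taxels, 100 timesteps
    print(model(window).shape)        # torch.Size([4, 1])
```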
arXiv Detail & Related papers (2022-02-19T08:21:56Z)
- Contact-Aware Retargeting of Skinned Motion [49.71236739408685]
This paper introduces a motion estimation method that preserves self-contacts and prevents interpenetration.
The method identifies self-contacts and ground contacts in the input motion, and optimizes the motion to apply it to the output skeleton.
In experiments, our results quantitatively outperform previous methods and we conduct a user study where our retargeted motions are rated as higher-quality than those produced by recent works.
arXiv Detail & Related papers (2021-09-15T17:05:02Z)
- Tactile Image-to-Image Disentanglement of Contact Geometry from Motion-Induced Shear [30.404840177562754]
Robotic touch, particularly when using soft optical tactile sensors, suffers from distortion caused by motion-dependent shear.
We propose a supervised convolutional deep neural network model that learns to disentangle, in the latent space, the components of sensor deformations caused by contact geometry from those due to sliding-induced shear.
arXiv Detail & Related papers (2021-09-08T13:03:08Z)
- Elastic Tactile Simulation Towards Tactile-Visual Perception [58.44106915440858]
We propose Elastic Interaction of Particles (EIP) for tactile simulation.
EIP models the tactile sensor as a group of coordinated particles, and the elastic property is applied to regulate the deformation of particles during contact.
We further propose a tactile-visual perception network that enables information fusion between tactile data and visual images.
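A toy sketch in the spirit of the particle-based elastic model described above: membrane particles joined by springs, advanced with one explicit integration step plus a trivial push-out contact against a flat object. The stiffness, time step, and names are illustrative assumptions, not EIP's actual formulation.

```python
# Hypothetical sketch: one explicit integration step of a mass-spring membrane
# with a simple flat-object contact. All constants are illustrative assumptions.
import torch

def elastic_step(pos, vel, rest_pos, edges, k=50.0, dt=1e-3, contact_z=0.0):
    """pos, vel, rest_pos: (P, 3); edges: (E, 2) particle index pairs."""
    i, j = edges[:, 0], edges[:, 1]
    d = pos[i] - pos[j]
    rest = (rest_pos[i] - rest_pos[j]).norm(dim=1, keepdim=True)
    length = d.norm(dim=1, keepdim=True).clamp(min=1e-8)
    f = -k * (length - rest) * d / length          # Hooke spring force on particle i
    force = torch.zeros_like(pos)
    force.index_add_(0, i, f)
    force.index_add_(0, j, -f)
    vel = vel + dt * force
    pos = pos + dt * vel
    # Simple contact with a flat object at z = contact_z: clamp penetrating particles
    pos[:, 2] = pos[:, 2].clamp(min=contact_z)
    return pos, vel

if __name__ == "__main__":
    P = 100
    rest = torch.rand(P, 3)
    pos, vel = rest.clone(), torch.zeros(P, 3)
    edges = torch.stack([torch.arange(P - 1), torch.arange(1, P)], dim=1)
    pos, vel = elastic_step(pos, vel, rest, edges)
    print(pos.shape, vel.shape)
```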
arXiv Detail & Related papers (2021-08-11T03:49:59Z)
- Active 3D Shape Reconstruction from Vision and Touch [66.08432412497443]
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch.
In 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings.
We introduce a system composed of: 1) a haptic simulator leveraging high spatial resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile priors to guide the shape exploration; and 3) a set of data-driven solutions with either tactile or visuotactile priors.
arXiv Detail & Related papers (2021-07-20T15:56:52Z)
- 3D Shape Reconstruction from Vision and Touch [62.59044232597045]
In 3D shape reconstruction, the complementary fusion of visual and haptic modalities remains largely unexplored.
We introduce a dataset of simulated touch and vision signals from the interaction between a robotic hand and a large array of 3D objects.
arXiv Detail & Related papers (2020-07-07T20:20:33Z)