Tactile-ViewGCN: Learning Shape Descriptor from Tactile Data using Graph
Convolutional Network
- URL: http://arxiv.org/abs/2203.06183v1
- Date: Sat, 12 Mar 2022 05:58:21 GMT
- Title: Tactile-ViewGCN: Learning Shape Descriptor from Tactile Data using Graph
Convolutional Network
- Authors: Sachidanand V S and Mansi Sharma
- Abstract summary: It focuses on improving previous works on object classification using tactile data.
We propose a novel method, dubbed Tactile-ViewGCN, that hierarchically aggregates tactile features.
Our model outperforms previous methods on the STAG dataset with an accuracy of 81.82%.
- Score: 0.4189643331553922
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For humans, the sense of touch has always been essential for precisely
and efficiently manipulating objects of all shapes in any environment, yet until
recently relatively little work had been done to fully understand haptic feedback.
This work proposes a novel method for obtaining a better shape descriptor than
existing methods for classifying an object from multiple tactile readings collected
with a tactile glove. It focuses on improving previous work on object classification
using tactile data. The major challenge in classifying an object from multiple
tactile readings is finding a good way to aggregate the features extracted from the
individual tactile images. We propose a novel method, dubbed Tactile-ViewGCN, that
hierarchically aggregates tactile features while modeling the relations among them
using a Graph Convolutional Network. Our model outperforms previous methods on the
STAG dataset, achieving an accuracy of 81.82%.
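The aggregation idea in the abstract can be pictured in a few lines of PyTorch. The snippet below is only an illustrative sketch of GCN-based aggregation of per-view tactile features into a single shape descriptor, not the authors' released implementation; the feature dimension, the k-nearest-neighbour graph built from feature similarity, the single graph-convolution layer, the max-pooling step, and the 26-class output are all assumptions made for the example.

# Minimal sketch of GCN-based aggregation of per-view tactile features.
# Illustration of the idea described in the abstract, not the authors' code;
# layer sizes, the k-NN graph construction, the pooling scheme, and the
# number of classes (26, assumed for the STAG object set) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewGCNAggregator(nn.Module):
    def __init__(self, feat_dim=512, num_classes=26, k=4):
        super().__init__()
        self.k = k                                         # neighbours per view (k + 1 <= num views)
        self.gcn_weight = nn.Linear(feat_dim, feat_dim)    # single graph-convolution layer
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, view_feats):
        # view_feats: (B, V, D) features from V tactile images, e.g. from a CNN backbone
        B, V, D = view_feats.shape
        # Build a k-NN graph from pairwise feature distances (assumed design choice).
        dist = torch.cdist(view_feats, view_feats)              # (B, V, V)
        knn = dist.topk(self.k + 1, largest=False).indices      # nearest views, incl. self
        adj = torch.zeros(B, V, V, device=view_feats.device)
        adj.scatter_(2, knn, 1.0)
        adj = adj / adj.sum(dim=2, keepdim=True)                # row-normalised adjacency
        # Graph convolution: propagate neighbouring view features, then transform.
        h = F.relu(self.gcn_weight(adj @ view_feats))           # (B, V, D)
        # Hierarchical aggregation collapsed to a single max-pool here for brevity.
        shape_descriptor = h.max(dim=1).values                  # (B, D)
        return self.classifier(shape_descriptor)

# Usage: 8 tactile views per object, 512-d features each (placeholder values).
model = ViewGCNAggregator()
logits = model(torch.randn(2, 8, 512))
print(logits.shape)  # torch.Size([2, 26])

A fully hierarchical variant would stack several such graph-convolution and pooling stages, coarsening the view graph at each level before the final classifier.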
Related papers
- TextToucher: Fine-Grained Text-to-Touch Generation [20.49021594738016]
We analyze the characteristics of tactile images in detail at two granularities: object-level (tactile texture, tactile shape) and sensor-level (gel status).
We propose a fine-grained Text-to-Touch generation method (TextToucher) to generate high-quality tactile samples.
arXiv Detail & Related papers (2024-09-09T08:26:47Z) - PseudoTouch: Efficiently Imaging the Surface Feel of Objects for Robotic Manipulation [8.997347199266592]
Our goal is to equip robots with a similar capability, which we term PseudoTouch.
We frame this problem as the task of learning a low-dimensional visual-tactile embedding.
Using ReSkin, we collect and train PseudoTouch on a dataset comprising aligned tactile and visual data pairs.
We demonstrate the efficacy of PseudoTouch through its application to two downstream tasks: object recognition and grasp stability prediction.
arXiv Detail & Related papers (2024-03-22T10:51:31Z) - Visual-Tactile Sensing for In-Hand Object Reconstruction [38.42487660352112]
We propose a visual-tactile in-hand object reconstruction framework, VTacO, and extend it to VTacOH for hand-object reconstruction.
A simulation environment, VT-Sim, supports generating hand-object interaction for both rigid and deformable objects.
arXiv Detail & Related papers (2023-03-25T15:16:31Z) - Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are now widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z) - VisTaNet: Attention Guided Deep Fusion for Surface Roughness
Classification [0.0]
This paper presents a visual dataset that augments an existing tactile dataset.
We propose a novel deep fusion architecture that fuses visual and tactile data using four types of fusion strategies.
Our model shows a significant improvement in surface roughness classification accuracy (97.22%) over using tactile data alone.
arXiv Detail & Related papers (2022-09-18T09:37:06Z) - S$^2$Contact: Graph-based Network for 3D Hand-Object Contact Estimation
with Semi-Supervised Learning [70.72037296392642]
We propose a novel semi-supervised framework that allows us to learn contact from monocular images.
Specifically, we leverage visual and geometric consistency constraints in large-scale datasets for generating pseudo-labels.
We show the benefit of using a contact map that governs hand-object interactions to produce more accurate reconstructions.
arXiv Detail & Related papers (2022-08-01T14:05:23Z) - Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diverse set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward modeling the dynamics of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z) - Active 3D Shape Reconstruction from Vision and Touch [66.08432412497443]
Humans build 3D understandings of the world through active object exploration, using jointly their senses of vision and touch.
In 3D shape reconstruction, most recent progress has relied on static datasets of limited sensory data such as RGB images, depth maps or haptic readings.
We introduce a system composed of: 1) a haptic simulator leveraging high-spatial-resolution vision-based tactile sensors for active touching of 3D objects; 2) a mesh-based 3D shape reconstruction model that relies on tactile or visuotactile priors to guide the shape exploration; and 3) a set of data-driven solutions with either tactile or visuotactile priors.
arXiv Detail & Related papers (2021-07-20T15:56:52Z) - Generative Partial Visual-Tactile Fused Object Clustering [81.17645983141773]
We propose a Generative Partial Visual-Tactile Fused (i.e., GPVTF) framework for object clustering.
A conditional cross-modal clustering generative adversarial network is then developed to synthesize one modality conditioning on the other modality.
To this end, two pseudo-label-based KL-divergence losses are employed to update the corresponding modality-specific encoders.
arXiv Detail & Related papers (2020-12-28T02:37:03Z) - TactileSGNet: A Spiking Graph Neural Network for Event-based Tactile
Object Recognition [17.37142241982902]
New advances in flexible, event-driven, electronic skins may soon endow robots with touch perception capabilities similar to humans.
These unique features may render current deep learning approaches such as convolutional feature extractors unsuitable for tactile learning.
We propose a novel spiking graph neural network for event-based tactile object recognition.
arXiv Detail & Related papers (2020-08-01T03:35:15Z) - Segment as Points for Efficient Online Multi-Object Tracking and
Segmentation [66.03023110058464]
We propose a highly effective method for learning segment-based instance embeddings by converting the compact image representation into an unordered 2D point cloud representation.
Our method generates a new tracking-by-points paradigm where discriminative instance embeddings are learned from randomly selected points rather than images.
The resulting online MOTS framework, named PointTrack, surpasses all the state-of-the-art methods by large margins.
arXiv Detail & Related papers (2020-07-03T08:29:35Z)