Object recognition for robotics from tactile time series data utilising different neural network architectures
- URL: http://arxiv.org/abs/2109.04573v1
- Date: Thu, 9 Sep 2021 22:05:45 GMT
- Title: Object recognition for robotics from tactile time series data utilising different neural network architectures
- Authors: Wolfgang Bottcher, Pedro Machado, Nikesh Lama, T.M. McGinnity
- Abstract summary: This paper investigates the use of Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) neural network architectures for object classification on tactile data.
We compare these methods using data from two different fingertip sensors (namely the BioTac SP and WTS-FT) in the same physical setup.
The results show that the proposed method improves the maximum accuracy from 82.4% (BioTac SP fingertips) and 90.7% (WTS-FT fingertips) with complete time-series data to about 94% for both sensor types.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Robots need to exploit high-quality information on grasped objects to
interact with the physical environment. Haptic data can therefore be used to
supplement the visual modality. This paper investigates the use of
Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM) neural
network architectures for object classification on spatio-temporal tactile
grasping data. Furthermore, we compare these methods using data from two
different fingertip sensors (namely the BioTac SP and WTS-FT) in the same
physical setup, allowing for a realistic comparison across methods and sensors
for the same tactile object classification dataset. Additionally, we propose a
way to create more training examples from the recorded data. The results show
that the proposed method improves the maximum accuracy from 82.4% (BioTac SP
fingertips) and 90.7% (WTS-FT fingertips) with complete time-series data to
about 94% for both sensor types.
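As a rough illustration of the pipeline the abstract describes, the sketch below pairs an LSTM classifier over multi-channel tactile time series with a sliding-window split of each recording, one plausible way to create additional training examples. The layer sizes, window length, channel count and class count are illustrative assumptions, not the paper's actual configuration.

```python
# Hypothetical sketch (not the paper's exact model): an LSTM classifier over
# multi-channel tactile time series, plus sliding-window sub-sequencing as one
# possible way to create more training examples from each grasp recording.
import torch
import torch.nn as nn

class TactileLSTMClassifier(nn.Module):
    def __init__(self, n_channels: int, n_classes: int, hidden: int = 128):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, channels)
        _, (h_n, _) = self.lstm(x)             # h_n: (layers, batch, hidden)
        return self.head(h_n[-1])              # logits from the last hidden state

def sliding_windows(seq: torch.Tensor, win: int, stride: int):
    """Cut one recording (time, channels) into overlapping sub-sequences."""
    return [seq[s:s + win] for s in range(0, seq.shape[0] - win + 1, stride)]

# Synthetic data standing in for one tactile grasp recording.
recording = torch.randn(400, 24)               # 400 timesteps, 24 channels (assumed)
windows = torch.stack(sliding_windows(recording, win=100, stride=50))
model = TactileLSTMClassifier(n_channels=24, n_classes=10)
logits = model(windows)                        # one prediction per sub-sequence
print(logits.shape)                            # torch.Size([7, 10])
```

A CNN variant would replace the recurrent layers with 1-D convolutions over the time axis; the windowing step is independent of the classifier choice.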
Related papers
- Neuromorphic Wireless Split Computing with Multi-Level Spikes [69.73249913506042]
In neuromorphic computing, spiking neural networks (SNNs) perform inference tasks, offering significant efficiency gains for workloads involving sequential data.
Recent advances in hardware and software have demonstrated that embedding a few bits of payload in each spike exchanged between the spiking neurons can further enhance inference accuracy.
This paper investigates a wireless neuromorphic split computing architecture employing multi-level SNNs.
arXiv Detail & Related papers (2024-11-07T14:08:35Z)
- Bayesian and Neural Inference on LSTM-based Object Recognition from Tactile and Kinesthetic Information [0.0]
Haptic perception encompasses the sensing modalities encountered in the sense of touch (e.g., tactile and kinesthetic sensations).
This letter focuses on multimodal object recognition and proposes analytical and data-driven methodologies to fuse tactile- and kinesthetic-based classification results.
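As a rough illustration of what fusing per-modality classification results can look like, the sketch below applies a simple weighted product rule to the posteriors of a tactile and a kinesthetic classifier. The letter's actual analytical and data-driven formulations may differ; all numbers and the weighting are assumptions.

```python
# Illustrative only: a common analytical late-fusion rule (weighted product of
# per-modality posteriors). The cited letter's actual method may differ.
import numpy as np

def fuse_posteriors(p_tactile: np.ndarray, p_kinesthetic: np.ndarray,
                    w: float = 0.5) -> np.ndarray:
    """Combine two per-class posterior vectors via a weighted product rule."""
    log_p = w * np.log(p_tactile + 1e-12) + (1 - w) * np.log(p_kinesthetic + 1e-12)
    p = np.exp(log_p - log_p.max())             # shift for numerical stability
    return p / p.sum()                          # renormalise to a distribution

p_t = np.array([0.6, 0.3, 0.1])                 # tactile classifier output (made up)
p_k = np.array([0.2, 0.7, 0.1])                 # kinesthetic classifier output (made up)
print(fuse_posteriors(p_t, p_k).round(3))       # fused per-class probabilities
```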
arXiv Detail & Related papers (2023-06-10T12:29:23Z)
- Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots [0.0]
This paper introduces an HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors.
The recognized gestures are used to teleoperate a robot in a collaborative process that consists of preparing a breakfast meal.
arXiv Detail & Related papers (2023-04-13T18:49:08Z)
- Collaborative Learning with a Drone Orchestrator [79.75113006257872]
A swarm of intelligent wireless devices trains a shared neural network model with the help of a drone.
The proposed framework achieves a significant speedup in training, yielding average savings of 24% and 87% in drone hovering time.
arXiv Detail & Related papers (2023-03-03T23:46:25Z)
- BeCAPTCHA-Type: Biometric Keystroke Data Generation for Improved Bot Detection [63.447493500066045]
This work proposes a data-driven learning model for the synthesis of keystroke biometric data.
The proposed method is compared with two statistical approaches based on Universal and User-dependent models.
Our experimental framework considers a dataset with 136 million keystroke events from 168 thousand subjects.
arXiv Detail & Related papers (2022-07-27T09:26:15Z)
- A Novel Approach For Analysis of Distributed Acoustic Sensing System Based on Deep Transfer Learning [0.0]
Convolutional neural networks are highly capable tools for extracting spatial information.
Long short-term memory (LSTM) is an effective instrument for processing sequential data.
The VGG-16 architecture in our framework obtains 100% classification accuracy in 50 training runs.
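A minimal sketch of the deep-transfer-learning pattern mentioned above, assuming a pretrained VGG-16 backbone whose convolutional features are frozen while a new classification head is trained; the input layout (2-D representations of DAS signals) and class count are placeholders, not the cited framework's settings.

```python
# Sketch of transfer learning with VGG-16: reuse pretrained convolutional
# features, train only a replaced classification head. Sizes are assumptions.
import torch
import torch.nn as nn
from torchvision import models

n_classes = 3                                   # assumed number of event classes
vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
for p in vgg.features.parameters():             # freeze the convolutional backbone
    p.requires_grad = False
vgg.classifier[6] = nn.Linear(4096, n_classes)  # new task-specific output layer

x = torch.randn(4, 3, 224, 224)                 # batch of 2-D maps of DAS signals (assumed)
print(vgg(x).shape)                             # torch.Size([4, 3])
```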
arXiv Detail & Related papers (2022-06-24T19:56:01Z)
- Human-in-the-Loop Disinformation Detection: Stance, Sentiment, or Something Else? [93.91375268580806]
Both politics and pandemics have recently provided ample motivation for the development of machine learning-enabled disinformation (a.k.a. fake news) detection algorithms.
Existing literature has focused primarily on the fully-automated case, but the resulting techniques cannot reliably detect disinformation on the varied topics, sources, and time scales required for military applications.
By leveraging an already-available analyst as a human-in-the-loop, canonical machine learning techniques of sentiment analysis, aspect-based sentiment analysis, and stance detection become plausible methods to use for a partially-automated disinformation detection system.
arXiv Detail & Related papers (2021-11-09T13:30:34Z)
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
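As a hedged illustration of the teacher-student idea, the sketch below shows the generic knowledge-distillation loss (softened teacher targets plus hard-label cross-entropy). SAKDN's semantics-aware losses and its fusion of multiple wearable-sensor teachers are more elaborate than this.

```python
# Generic knowledge-distillation loss (Hinton-style), shown for illustration;
# not SAKDN's actual objective. Temperature and weighting are assumptions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    """Blend soft teacher targets with the ordinary supervised loss."""
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: a video "student" mimicking a wearable-sensor "teacher".
student_logits = torch.randn(8, 5, requires_grad=True)
teacher_logits = torch.randn(8, 5)
labels = torch.randint(0, 5, (8,))
loss = distillation_loss(student_logits, teacher_logits, labels)
loss.backward()
print(float(loss))
```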
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- TactileSGNet: A Spiking Graph Neural Network for Event-based Tactile Object Recognition [17.37142241982902]
New advances in flexible, event-driven, electronic skins may soon endow robots with touch perception capabilities similar to humans.
These unique features may render current deep learning approaches such as convolutional feature extractors unsuitable for tactile learning.
We propose a novel spiking graph neural network for event-based tactile object recognition.
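A small sketch of the general idea, assuming a hypothetical taxel layout: build a graph over taxel positions and apply one normalised graph convolution followed by a hard spiking threshold. This is not the proposed TactileSGNet architecture, only an illustration of graph construction over tactile inputs.

```python
# Illustration only: k-NN graph over assumed taxel coordinates, one normalised
# graph convolution, then a hard threshold producing binary "spikes".
import numpy as np

def knn_adjacency(coords: np.ndarray, k: int = 3) -> np.ndarray:
    """Symmetric k-nearest-neighbour adjacency with self-loops."""
    d = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    A = np.zeros_like(d)
    for i in range(len(coords)):
        for j in np.argsort(d[i])[1:k + 1]:     # skip index 0 (the node itself)
            A[i, j] = A[j, i] = 1.0
    return A + np.eye(len(coords))

def graph_conv_spike(x, A, W, threshold=1.0):
    """One symmetrically normalised graph convolution plus a spiking threshold."""
    deg = A.sum(axis=1)
    A_hat = A / np.sqrt(np.outer(deg, deg))
    h = A_hat @ x @ W
    return (h >= threshold).astype(float)       # binary spike outputs

taxels = np.random.rand(19, 2)                  # 19 taxel positions (made-up layout)
events = np.random.rand(19, 1)                  # one frame of event counts per taxel
W = np.random.randn(1, 8) * 0.5
spikes = graph_conv_spike(events, knn_adjacency(taxels), W)
print(spikes.shape)                             # (19, 8)
```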
arXiv Detail & Related papers (2020-08-01T03:35:15Z)
- ST-MNIST -- The Spiking Tactile MNIST Neuromorphic Dataset [13.270250399169104]
We debut a novel neuromorphic Spiking Tactile MNIST dataset, which comprises handwritten digits obtained by human participants writing on a tactile neuromorphic sensor array.
We also describe an initial effort to evaluate our ST-MNIST dataset using existing artificial spiking and neural network models.
arXiv Detail & Related papers (2020-05-08T23:44:14Z)