Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots
- URL: http://arxiv.org/abs/2304.06777v1
- Date: Thu, 13 Apr 2023 18:49:08 GMT
- Title: Online Recognition of Incomplete Gesture Data to Interface Collaborative Robots
- Authors: M. A. Simão, O. Gibaru, P. Neto
- Abstract summary: This paper introduces an HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors.
The recognized gestures are used to teleoperate a robot in a collaborative process that consists of preparing a breakfast meal.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online recognition of gestures is critical for intuitive human-robot interaction (HRI) and for pushing collaborative robotics further into the market, making robots accessible to more people. The problem is that accurate gesture recognition is difficult to achieve in real, unstructured environments, where multisensory data are often distorted and incomplete. This paper introduces an HRI framework to classify large vocabularies of interwoven static gestures (SGs) and dynamic gestures (DGs) captured with wearable sensors. DG features are obtained by applying dimensionality reduction to raw sensor data (resampling with cubic interpolation followed by principal component analysis). Experimental tests were conducted on the UC2017 hand gesture dataset with samples from eight subjects. The classification models reach an accuracy of 95.6% on a library of 24 SGs with a random forest and 99.3% on 10 DGs with artificial neural networks. These results match or exceed those of other commonly used classifiers. Long short-term memory (LSTM) deep networks achieved similar performance in online frame-by-frame classification using raw incomplete data, beating static models with specially crafted features on accuracy but losing on training and inference time. The recognized gestures are used to teleoperate a robot in a collaborative process that consists of preparing a breakfast meal.
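
A minimal sketch of the DG feature pipeline described in the abstract: variable-length gesture recordings are resampled to a fixed length with cubic interpolation, flattened, reduced with PCA, and fed to a classifier. The channel count, resample length, PCA dimensionality, and classifier sizes are illustrative assumptions, not values from the paper, and the toy data merely stands in for UC2017.

```python
import numpy as np
from scipy.interpolate import interp1d
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

N_CHANNELS = 20      # assumed number of wearable-sensor channels
RESAMPLE_LEN = 50    # assumed fixed length after resampling
N_COMPONENTS = 30    # assumed PCA dimensionality

def resample_gesture(sample: np.ndarray, length: int = RESAMPLE_LEN) -> np.ndarray:
    """Resample a (T, channels) gesture to (length, channels) via cubic interpolation."""
    t_old = np.linspace(0.0, 1.0, num=sample.shape[0])
    t_new = np.linspace(0.0, 1.0, num=length)
    f = interp1d(t_old, sample, axis=0, kind="cubic")
    return f(t_new)

def featurize(samples: list[np.ndarray]) -> np.ndarray:
    """Resample each gesture and flatten it into a fixed-size feature vector."""
    return np.stack([resample_gesture(s).ravel() for s in samples])

# Toy data standing in for the UC2017 dynamic gestures (10 classes).
rng = np.random.default_rng(0)
train = [rng.normal(size=(rng.integers(30, 120), N_CHANNELS)) for _ in range(200)]
labels = rng.integers(0, 10, size=200)

# PCA for dimensionality reduction, then an ANN classifier as in the abstract.
model = make_pipeline(PCA(n_components=N_COMPONENTS),
                      MLPClassifier(hidden_layer_sizes=(64,), max_iter=500))
model.fit(featurize(train), labels)
print(model.predict(featurize(train[:3])))
```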
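A companion sketch of the online frame-by-frame alternative mentioned in the abstract: an LSTM consumes raw frames one at a time and emits class logits at every step, so a gesture can be classified before it completes. The architecture and sizes are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class OnlineGestureLSTM(nn.Module):
    def __init__(self, n_channels: int = 20, hidden: int = 64, n_classes: int = 10):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def step(self, frame: torch.Tensor, state=None):
        """Consume one (batch, channels) frame; return per-frame logits and the new state."""
        out, state = self.lstm(frame.unsqueeze(1), state)  # (batch, 1, hidden)
        return self.head(out.squeeze(1)), state

model = OnlineGestureLSTM()
state = None
stream = torch.randn(200, 1, 20)  # fake stream of 200 single-batch frames
for frame in stream:
    logits, state = model.step(frame, state)
    prediction = logits.argmax(dim=-1)  # a class estimate is available at every frame
```

Carrying the recurrent state across frames is what makes the classification online; the trade-off noted in the abstract is that such a model costs more to train and run than the static feature-based classifiers.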
Related papers
- Learning Manipulation by Predicting Interaction [85.57297574510507]
We propose a general pre-training pipeline that learns Manipulation by Predicting the Interaction (MPI).
The experimental results demonstrate that MPI exhibits remarkable improvements of 10% to 64% over the previous state-of-the-art on real-world robot platforms.
arXiv Detail & Related papers (2024-06-01T13:28:31Z)
- Hand Gesture Classification on Praxis Dataset: Trading Accuracy for Expense [0.6390468088226495]
We focus on 'skeletal' data, represented by body joint coordinates, from the Praxis dataset.
The Praxis dataset contains recordings of patients with cortical pathologies such as Alzheimer's disease.
Using a combination of windowing techniques with a deep learning architecture such as a Recurrent Neural Network (RNN), we achieved an overall accuracy of 70.8%.
arXiv Detail & Related papers (2023-11-01T18:18:09Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Synthetic-to-Real Domain Adaptation for Action Recognition: A Dataset and Baseline Performances [76.34037366117234]
We introduce a new dataset called Robot Control Gestures (RoCoG-v2).
The dataset is composed of both real and synthetic videos from seven gesture classes.
We present results using state-of-the-art action recognition and domain adaptation algorithms.
arXiv Detail & Related papers (2023-03-17T23:23:55Z)
- Towards Domain-Independent and Real-Time Gesture Recognition Using mmWave Signal [11.76969975145963]
DI-Gesture is a domain-independent and real-time mmWave gesture recognition system.
In real-time scenarios, the accuracy of DI-Gesture exceeds 97% with an average inference time of 2.87 ms.
arXiv Detail & Related papers (2021-11-11T13:28:28Z)
- Object recognition for robotics from tactile time series data utilising different neural network architectures [0.0]
This paper investigates the use of Convolutional Neural Network (CNN) and Long Short-Term Memory (LSTM) neural network architectures for object classification on tactile data.
We compare these methods using data from two different fingertip sensors (namely the BioTac SP and WTS-FT) in the same physical setup.
The results show that the proposed method improves the maximum accuracy from 82.4% (BioTac SP fingertips) and 90.7% (WTS-FT fingertips) with complete time-series data to about 94% for both sensor types.
arXiv Detail & Related papers (2021-09-09T22:05:45Z)
- Kimera-Multi: Robust, Distributed, Dense Metric-Semantic SLAM for Multi-Robot Systems [92.26462290867963]
Kimera-Multi is the first multi-robot system that is robust and capable of identifying and rejecting incorrect inter- and intra-robot loop closures.
We demonstrate Kimera-Multi in photo-realistic simulations, SLAM benchmarking datasets, and challenging outdoor datasets collected using ground robots.
arXiv Detail & Related papers (2021-06-28T03:56:40Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features, using temporal cues in videos and inherent correlations across modalities for gesture recognition.
Results show that our approach recovers performance with large gains, up to 12.91% in accuracy and 20.16% in F1 score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Where is my hand? Deep hand segmentation for visual self-recognition in humanoid robots [129.46920552019247]
We propose the use of a Convolutional Neural Network (CNN) to segment the robot hand from an image in an egocentric view.
We fine-tuned the Mask R-CNN network for the specific task of segmenting the hand of the humanoid robot Vizzy.
arXiv Detail & Related papers (2021-02-09T10:34:32Z)
- Human Haptic Gesture Interpretation for Robotic Systems [3.888848425698769]
Physical human-robot interactions (pHRI) are less efficient and communicative than human-human interactions.
A key reason is a lack of informative sense of touch in robotic systems.
This work presents four proposed touch gesture classes that cover the majority of the gesture characteristics identified in the literature.
arXiv Detail & Related papers (2020-12-03T14:33:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.