Agile gesture recognition for capacitive sensing devices: adapting
on-the-job
- URL: http://arxiv.org/abs/2305.07624v1
- Date: Fri, 12 May 2023 17:24:02 GMT
- Authors: Ying Liu, Liucheng Guo, Valeri A. Makarov, Yuxiang Huang, Alexander
Gorban, Evgeny Mirkes, Ivan Y. Tyukin
- Abstract summary: We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
- Score: 55.40855017016652
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Automated hand gesture recognition has been a focus of the AI community for
decades. Traditionally, work in this domain revolved largely around scenarios
assuming the availability of the flow of images of the user hands. This has
partly been due to the prevalence of camera-based devices and the wide
availability of image data. However, there is growing demand for gesture
recognition technology that can be implemented on low-power devices using
limited sensor data instead of high-dimensional inputs like hand images. In
this work, we demonstrate a hand gesture recognition system and method that
uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five
fingers. We use a machine learning technique to analyse the time-series signals
and identify three features that can represent the five fingers within 500 ms. The
analysis is composed of a two-stage training strategy, including dimension
reduction through principal component analysis and classification with
K-nearest neighbour. Remarkably, we found that this combination showed a level of
performance which was comparable to more advanced methods such as supervised
variational autoencoder. The base system can also be equipped with the
capability to learn from occasional errors by providing it with an additional
adaptive error correction mechanism. The results showed that the error
corrector improves the classification performance of the base system without
compromising its computational efficiency. The system requires no more than 1 ms of
computing time per input sample, and is smaller than deep neural networks,
demonstrating the feasibility of agile gesture recognition systems based on
this technology.
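The two-stage strategy described in the abstract (PCA dimension reduction followed by K-nearest-neighbour classification) can be sketched with scikit-learn. The data shapes, window size, class count, and hyperparameters below are illustrative assumptions for a minimal example, not the authors' exact configuration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)

# Stand-in data: 1000 windows of capacitive readings, 5 finger channels x 10
# time samples per window, flattened to 50-dimensional vectors (shapes are
# assumptions for illustration; real windows would come from the controller).
X = rng.normal(size=(1000, 50))
y = rng.integers(0, 3, size=1000)  # three hypothetical gesture classes

# Stage 1: PCA projects each window onto three principal components,
# matching the paper's three-feature representation.
# Stage 2: a K-nearest-neighbour classifier labels the reduced vectors.
model = make_pipeline(PCA(n_components=3), KNeighborsClassifier(n_neighbors=5))
model.fit(X, y)

# Per-sample prediction is a 3-D projection plus a neighbour lookup, which
# is why this kind of pipeline can stay within a millisecond-scale budget.
pred = model.predict(X[:10])
```

A deployed variant would replace the synthetic arrays with labelled sensor windows and tune `n_components` and `n_neighbors` on a validation set.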
Related papers
- Learning Visuotactile Skills with Two Multifingered Hands [80.99370364907278]
We explore learning from human demonstrations using a bimanual system with multifingered hands and visuotactile data.
Our results mark a promising step forward in bimanual multifingered manipulation from visuotactile data.
arXiv Detail & Related papers (2024-04-25T17:59:41Z)
- Agile gesture recognition for low-power applications: customisation for generalisation [41.728933551492275]
Automated hand gesture recognition has long been a focal point in the AI community.
There is an increasing demand for gesture recognition technologies that operate on low-power sensor devices.
In this study, we unveil a novel methodology for pattern recognition systems using adaptive and agile error correction.
arXiv Detail & Related papers (2024-03-12T19:34:18Z)
- Match and Locate: low-frequency monocular odometry based on deep feature matching [0.65268245109828]
We introduce a novel approach for the robotic odometry which only requires a single camera.
The approach is based on matching image features between the consecutive frames of the video stream using deep feature matching models.
We evaluate the performance of the approach in the AISG-SLA Visual Localisation Challenge and find that while being computationally efficient and easy to implement our method shows competitive results.
arXiv Detail & Related papers (2023-11-16T17:32:58Z)
- Towards Predicting Fine Finger Motions from Ultrasound Images via Kinematic Representation [12.49914980193329]
We study the inference problem of identifying the activation of specific fingers from a sequence of US images.
We consider this task as an important step towards higher adoption rates of robotic prostheses among arm amputees.
arXiv Detail & Related papers (2022-02-10T18:05:09Z)
- Neural Network Based Lidar Gesture Recognition for Realtime Robot Teleoperation [0.0]
We propose a novel low-complexity lidar gesture recognition system for mobile robot control.
The system is lightweight and suitable for mobile robot control with limited computing power.
The use of lidar contributes to the robustness of the system, allowing it to operate in most outdoor conditions.
arXiv Detail & Related papers (2021-09-17T00:49:31Z)
- Gesture Similarity Analysis on Event Data Using a Hybrid Guided Variational Auto Encoder [3.1148846501645084]
We propose a neuromorphic gesture analysis system which naturally declutters the background and analyzes gestures at high temporal resolution.
Our results show that the features learned by the VAE provides a similarity measure capable of clustering and pseudo labeling of new gestures.
arXiv Detail & Related papers (2021-03-31T23:58:34Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by exploiting temporal cues in videos and the inherent correlations in multi-modal data for gesture recognition.
Results show that our approach recovers performance with substantial gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations from the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Towards High Performance Human Keypoint Detection [87.1034745775229]
We find that context information plays an important role in reasoning human body configuration and invisible keypoints.
Inspired by this, we propose a cascaded context mixer (CCM) which efficiently integrates spatial and channel context information.
To maximize CCM's representation capability, we develop a hard-negative person detection mining strategy and a joint-training strategy.
We present several sub-pixel refinement techniques for postprocessing keypoint predictions to improve detection accuracy.
arXiv Detail & Related papers (2020-02-03T02:24:51Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.