Gesture Similarity Analysis on Event Data Using a Hybrid Guided
Variational Auto Encoder
- URL: http://arxiv.org/abs/2104.00165v1
- Date: Wed, 31 Mar 2021 23:58:34 GMT
- Authors: Kenneth Stewart, Andreea Danielescu, Lazar Supic, Timothy Shea, Emre
Neftci
- Abstract summary: We propose a neuromorphic gesture analysis system which naturally declutters the background and analyzes gestures at high temporal resolution.
Our results show that the features learned by the VAE provide a similarity measure capable of clustering and pseudo-labeling of new gestures.
- Score: 3.1148846501645084
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While commercial mid-air gesture recognition systems have existed for at
least a decade, they have not become a widespread method of interacting with
machines. This is primarily due to the fact that these systems require rigid,
dramatic gestures to be performed for accurate recognition that can be
fatiguing and unnatural. The global pandemic has seen a resurgence of interest
in touchless interfaces, so new methods that allow for natural mid-air gestural
interactions are even more important. To address the limitations of recognition
systems, we propose a neuromorphic gesture analysis system which naturally
declutters the background and analyzes gestures at high temporal resolution.
Our novel model consists of an event-based guided Variational Autoencoder (VAE)
which encodes event-based data sensed by a Dynamic Vision Sensor (DVS) into a
latent space representation suitable to analyze and compute the similarity of
mid-air gesture data. Our results show that the features learned by the VAE
provide a similarity measure capable of clustering and pseudo-labeling of new
gestures. Furthermore, we argue that the resulting event-based encoder and
pseudo-labeling system are suitable for implementation in neuromorphic hardware
for online adaptation and learning of natural mid-air gestures.
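The abstract does not spell out how latent features become pseudo-labels, so the sketch below illustrates the general idea only: synthetic latent vectors stand in for the VAE encoder's output, and plain k-means clustering assigns cluster indices as pseudo-labels. The function names `pseudo_label` and `farthest_point_init` are hypothetical and not from the paper, and this is not the authors' actual pipeline.

```python
import numpy as np

def farthest_point_init(x, k):
    """Pick k initial centers by repeatedly taking the point farthest
    from all centers chosen so far (deterministic, spreads centers apart)."""
    centers = [x[0]]
    for _ in range(k - 1):
        # min distance of every point to its nearest chosen center
        dists = np.min([np.linalg.norm(x - c, axis=1) for c in centers], axis=0)
        centers.append(x[dists.argmax()])
    return np.stack(centers)

def pseudo_label(latents, k, iters=20):
    """Cluster latent codes with plain k-means; return cluster indices
    as pseudo-labels for new gestures (a stand-in for the paper's method)."""
    centers = farthest_point_init(latents, k)
    labels = np.zeros(len(latents), dtype=int)
    for _ in range(iters):
        # distance of every latent vector to every center: shape (N, k)
        d = np.linalg.norm(latents[:, None, :] - centers[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            members = latents[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return labels

# Toy example: three well-separated synthetic "gesture" clusters in a
# 4-dimensional latent space, standing in for real VAE encodings.
rng = np.random.default_rng(1)
latents = np.concatenate(
    [rng.normal(c, 0.1, size=(10, 4)) for c in (0.0, 5.0, 10.0)]
)
labels = pseudo_label(latents, k=3)
```

In practice the number of clusters would not be known in advance for new gestures; the paper's similarity measure over learned features is what makes such grouping meaningful.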
Related papers
- A Multi-label Classification Approach to Increase Expressivity of
EMG-based Gesture Recognition [4.701158597171363]
The aim of this study is to efficiently increase the expressivity of surface electromyography-based (sEMG) gesture recognition systems.
We use a problem transformation approach, in which actions were subset into two biomechanically independent components.
arXiv Detail & Related papers (2023-09-13T20:21:41Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z)
- Snapture -- A Novel Neural Architecture for Combined Static and Dynamic Hand Gesture Recognition [19.320551882950706]
We propose a novel hybrid hand gesture recognition system.
Our architecture enables learning both static and dynamic gestures.
Our work contributes both to gesture recognition research and machine learning applications for non-verbal communication with robots.
arXiv Detail & Related papers (2022-05-28T11:12:38Z)
- Towards Domain-Independent and Real-Time Gesture Recognition Using mmWave Signal [11.76969975145963]
DI-Gesture is a domain-independent and real-time mmWave gesture recognition system.
In real-time scenarios, the accuracy of DI-Gesture reaches over 97% with an average inference time of 2.87 ms.
arXiv Detail & Related papers (2021-11-11T13:28:28Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step toward dynamics modeling of hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- Learning Dynamical Systems from Noisy Sensor Measurements using Multiple Shooting [11.771843031752269]
We introduce a generic and scalable method to learn latent representations of indirectly observed dynamical systems.
We achieve state-of-the-art performance on systems observed directly from raw images.
arXiv Detail & Related papers (2021-06-22T12:30:18Z)
- Domain Adaptive Robotic Gesture Recognition with Unsupervised Kinematic-Visual Data Alignment [60.31418655784291]
We propose a novel unsupervised domain adaptation framework which can simultaneously transfer multi-modality knowledge, i.e., both kinematic and visual data, from simulator to real robot.
It remedies the domain gap with enhanced transferable features by using temporal cues in videos and inherent correlations in multi-modal data toward recognizing gestures.
Results show that our approach recovers the performance with large improvement gains, up to 12.91% in accuracy and 20.16% in F1-score, without using any annotations on the real robot.
arXiv Detail & Related papers (2021-03-06T09:10:03Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- 3D dynamic hand gestures recognition using the Leap Motion sensor and convolutional neural networks [0.0]
We present a method for the recognition of a set of non-static gestures acquired through the Leap Motion sensor.
The acquired gesture information is converted into color images, in which the variation of hand joint positions during the gesture is projected onto a plane.
The classification of the gestures is performed using a deep Convolutional Neural Network (CNN).
arXiv Detail & Related papers (2020-03-03T11:05:35Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.