Simultaneous prediction of hand gestures, handedness, and hand keypoints
using thermal images
- URL: http://arxiv.org/abs/2303.01547v1
- Date: Thu, 2 Mar 2023 19:25:40 GMT
- Title: Simultaneous prediction of hand gestures, handedness, and hand keypoints
using thermal images
- Authors: Sichao Li, Sean Banerjee, Natasha Kholgade Banerjee, Soumyabrata Dey
- Abstract summary: We propose a technique for simultaneous hand gesture classification, handedness detection, and hand keypoints localization using thermal data captured by an infrared camera.
Our method uses a novel deep multi-task learning architecture that includes shared encoder-decoder layers followed by three branches, one dedicated to each task.
- Score: 0.6087960723103347
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Hand gesture detection is a well-explored area in computer vision with
applications in various forms of Human-Computer Interactions. In this work, we
propose a technique for simultaneous hand gesture classification, handedness
detection, and hand keypoints localization using thermal data captured by an
infrared camera. Our method uses a novel deep multi-task learning architecture
that includes shared encoder-decoder layers followed by three branches, one
dedicated to each task. We performed extensive experimental validation of our
model on an in-house dataset consisting of data from 24 users. The results confirm
higher than 98 percent accuracy for gesture classification, handedness
detection, and fingertips localization, and more than 91 percent accuracy for
wrist points localization.
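The abstract outlines the architecture but the listing includes no code. Below is a minimal PyTorch sketch of a shared encoder-decoder followed by three task branches, in the spirit of the description above; the channel widths, number of gesture classes, and number of keypoints are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a shared encoder-decoder with three task branches for
# single-channel thermal input. Layer widths, 10 gesture classes, and 6
# keypoints are illustrative assumptions, not the paper's configuration.
import torch
import torch.nn as nn

class MultiTaskThermalNet(nn.Module):
    def __init__(self, num_gestures=10, num_keypoints=6):
        super().__init__()
        # Shared encoder: downsample the thermal image into a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Shared decoder: upsample back toward input resolution.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        )
        # Branch 1: gesture classification from pooled shared features.
        self.gesture_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_gestures))
        # Branch 2: handedness detection (left vs. right).
        self.handedness_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 2))
        # Branch 3: keypoint localization as one heatmap per keypoint.
        self.keypoint_head = nn.Conv2d(32, num_keypoints, kernel_size=1)

    def forward(self, x):
        shared = self.decoder(self.encoder(x))
        return (self.gesture_head(shared),
                self.handedness_head(shared),
                self.keypoint_head(shared))

model = MultiTaskThermalNet()
gestures, handedness, heatmaps = model(torch.randn(1, 1, 128, 128))
```

Training would typically minimize a weighted sum of per-branch losses (e.g. cross-entropy for gesture and handedness, a heatmap regression loss for keypoints); the weights and exact losses used in the paper are not reproduced here.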
Related papers
- Multi-fingered Robotic Hand Grasping in Cluttered Environments through Hand-object Contact Semantic Mapping [14.674925349389179]
We develop a method for generating multi-fingered hand grasp samples in cluttered settings through a contact semantic map.
We also propose a multi-modal, multi-fingered grasping dataset generation method.
arXiv Detail & Related papers (2024-04-12T23:11:36Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
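The summary only names the pipeline, so the following is a toy sketch of windowed feature extraction from a five-channel capacitive signal; the sampling rate, 500 ms window, and the three per-finger statistics are assumptions for illustration, not details taken from the paper.

```python
# Toy sliding-window feature extraction for a 5-channel capacitive signal;
# the sampling rate, window length, and statistics are assumptions.
import numpy as np

def window_features(signal, fs=100, window_ms=500):
    """signal: array of shape (n_samples, 5), one column per finger."""
    win = int(fs * window_ms / 1000)
    feats = []
    for start in range(0, signal.shape[0] - win + 1, win):
        chunk = signal[start:start + win]                     # (win, 5)
        mean = chunk.mean(axis=0)                             # average level
        std = chunk.std(axis=0)                               # spread
        slope = chunk[-1] - chunk[0]                          # coarse trend
        feats.append(np.concatenate([mean, std, slope]))      # 15 values/window
    return np.asarray(feats)

features = window_features(np.random.rand(1000, 5))  # e.g. 10 s of fake data
```

A lightweight classifier (e.g. an SVM or a small MLP) could then map each window's feature vector to a gesture label.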
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Real-Time Hand Gesture Identification in Thermal Images [0.0]
Our system is capable of handling multiple hand regions in a frame and processing them fast enough for real-time applications.
We collected a new thermal image data set with 10 gestures and reported an end-to-end hand gesture recognition accuracy of 97%.
arXiv Detail & Related papers (2023-03-04T05:02:35Z)
- Advancing 3D finger knuckle recognition via deep feature learning [51.871256510747465]
Contactless 3D finger knuckle patterns have emerged as an effective biometric identifier due to their discriminativeness, visibility from a distance, and convenience.
Recent research has developed a deep feature collaboration network which simultaneously incorporates intermediate features from deep neural networks with multiple scales.
This paper advances this approach by investigating the possibility of learning a discriminative feature vector with the least possible dimension for representing 3D finger knuckle images.
arXiv Detail & Related papers (2023-01-07T20:55:16Z)
- Palm Vein Recognition via Multi-task Loss Function and Attention Layer [3.265773263570237]
In this paper, a convolutional neural network based on VGG-16 transfer learning, fused with an attention mechanism, is used as the feature extraction network on the infrared palm vein dataset.
In order to verify the robustness of the model, some experiments were carried out on datasets from different sources.
At the same time, matching is highly efficient, taking an average of 0.13 seconds per palm vein pair.
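As a loose illustration of this kind of pipeline rather than the paper's released code, the sketch below attaches a simple channel-attention layer and an embedding head to a pretrained VGG-16 backbone from torchvision; the attention design, embedding size, and identity count are assumptions.

```python
# Rough sketch: VGG-16 transfer learning plus a simple channel-attention
# layer; the attention design and head sizes are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

class ChannelAttention(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        w = self.fc(x.mean(dim=(2, 3)))       # squeeze to (B, C)
        return x * w[:, :, None, None]        # re-weight feature channels

class PalmVeinNet(nn.Module):
    def __init__(self, num_ids=100, embed_dim=256):
        super().__init__()
        # Pretrained VGG-16 convolutional backbone (expects 3-channel input;
        # a single infrared channel can simply be replicated).
        self.backbone = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.attn = ChannelAttention(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.embed = nn.Linear(512, embed_dim)            # used for matching
        self.classifier = nn.Linear(embed_dim, num_ids)   # identity head

    def forward(self, x):
        feats = self.pool(self.attn(self.backbone(x))).flatten(1)
        emb = self.embed(feats)
        return emb, self.classifier(emb)
```

A multi-task loss in this setting typically combines a classification loss on the identity head with a metric loss (e.g. triplet or contrastive) on the embedding; matching two palms then reduces to comparing their embeddings, which is consistent with the fast per-pair matching time reported above.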
arXiv Detail & Related papers (2022-11-11T02:32:49Z)
- Hand gesture detection in tests performed by older adults [9.00837522898458]
We are developing an online test that analyses hand movement features associated with ageing.
To obtain hand movement features, participants will be asked to perform a variety of hand gestures using their own computer cameras.
It is challenging to collect high-quality hand movement video data, especially for older participants.
arXiv Detail & Related papers (2021-10-27T14:29:01Z)
- HighlightMe: Detecting Highlights from Human-Centric Videos [52.84233165201391]
We present a domain- and user-preference-agnostic approach to detect highlightable excerpts from human-centric videos.
We use an autoencoder network equipped with spatial-temporal graph convolutions to detect human activities and interactions.
We observe a 4-12% improvement in the mean average precision of matching the human-annotated highlights over state-of-the-art methods.
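The entry mentions spatial-temporal graph convolutions without further detail; below is a generic, minimal sketch of one such block over pose sequences (not the authors' architecture), where the adjacency matrix, joint count, and kernel sizes are assumptions.

```python
# Generic spatial-temporal graph convolution block over pose sequences;
# the adjacency matrix, joint count, and kernel sizes are assumptions.
import torch
import torch.nn as nn

class STGraphConv(nn.Module):
    def __init__(self, in_ch, out_ch, adjacency):
        super().__init__()
        # Row-normalized adjacency mixes features across joints (spatial step).
        deg = adjacency.sum(dim=1, keepdim=True).clamp(min=1)
        self.register_buffer("A", adjacency / deg)
        self.spatial = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        # Convolution along the time axis, applied per joint (temporal step).
        self.temporal = nn.Conv2d(out_ch, out_ch, kernel_size=(9, 1), padding=(4, 0))
        self.relu = nn.ReLU()

    def forward(self, x):
        # x: (batch, channels, time, joints)
        x = torch.einsum("bctv,vw->bctw", x, self.A)  # aggregate neighbours
        x = self.relu(self.spatial(x))
        return self.relu(self.temporal(x))

A = torch.eye(17)                 # stand-in adjacency for 17 body joints
block = STGraphConv(3, 32, A)
out = block(torch.randn(2, 3, 64, 17))  # -> (2, 32, 64, 17)
```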
arXiv Detail & Related papers (2021-10-05T01:18:15Z)
- Learning to Disambiguate Strongly Interacting Hands via Probabilistic Per-pixel Part Segmentation [84.28064034301445]
Self-similarity, and the resulting ambiguities in assigning pixel observations to the respective hands, is a major cause of the final 3D pose error.
We propose DIGIT, a novel method for estimating the 3D poses of two interacting hands from a single monocular image.
We experimentally show that the proposed approach achieves new state-of-the-art performance on the InterHand2.6M dataset.
arXiv Detail & Related papers (2021-07-01T13:28:02Z)
- SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild [62.450907796261646]
Recognition of hand gestures can be performed directly from the stream of hand skeletons estimated by software.
Despite the recent advancements in gesture and action recognition from skeletons, it is unclear how well the current state-of-the-art techniques can perform in a real-world scenario.
This paper presents the results of the SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild contest.
arXiv Detail & Related papers (2021-06-21T10:57:49Z)
- TapNet: The Design, Training, Implementation, and Applications of a Multi-Task Learning CNN for Off-Screen Mobile Input [75.05709030478073]
We present the design, training, implementation and applications of TapNet, a multi-task network that detects tapping on the smartphone.
TapNet can jointly learn from data across devices and simultaneously recognize multiple tap properties, including tap direction and tap location.
arXiv Detail & Related papers (2021-02-18T00:45:41Z)
- Gesture Recognition from Skeleton Data for Intuitive Human-Machine Interaction [0.6875312133832077]
We propose an approach for segmentation and classification of dynamic gestures based on a set of handcrafted features.
The method for gesture recognition applies a sliding window, which extracts information from both the spatial and temporal dimensions.
At the end, the recognized gestures are used to interact with a collaborative robot.
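As a toy illustration of such a sliding-window, handcrafted-feature approach (not the authors' feature set), the sketch below computes simple spatial statistics (pairwise joint distances) and temporal statistics (per-joint speed) for each window; the window size, stride, and feature choices are assumptions.

```python
# Toy sliding-window features over a hand-skeleton sequence; window size,
# stride, and the particular spatial/temporal statistics are assumptions.
import numpy as np

def skeleton_window_features(frames, window=30, stride=10):
    """frames: (n_frames, n_joints, 3) array of 3D joint positions."""
    feats = []
    for start in range(0, len(frames) - window + 1, stride):
        w = frames[start:start + window]                        # (window, J, 3)
        # Spatial: mean pairwise joint distances within the window.
        diff = w[:, :, None, :] - w[:, None, :, :]              # (window, J, J, 3)
        dists = np.linalg.norm(diff, axis=-1).mean(axis=0)      # (J, J)
        spatial = dists[np.triu_indices(frames.shape[1], k=1)]
        # Temporal: mean per-joint speed between consecutive frames.
        speed = np.linalg.norm(np.diff(w, axis=0), axis=-1).mean(axis=0)
        feats.append(np.concatenate([spatial, speed]))
    return np.asarray(feats)

windows = skeleton_window_features(np.random.rand(120, 21, 3))  # 21-joint hand
```

Each window's feature vector can then be passed to a conventional classifier, with a rejection class or threshold for windows that contain no gesture, which is one common way to realize the segmentation step mentioned above.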
arXiv Detail & Related papers (2020-08-26T11:28:50Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.