Real-time Indian Sign Language (ISL) Recognition
- URL: http://arxiv.org/abs/2108.10970v1
- Date: Tue, 24 Aug 2021 21:49:21 GMT
- Title: Real-time Indian Sign Language (ISL) Recognition
- Authors: Kartik Shenoy, Tejas Dastane, Varun Rao, Devendra Vyavaharkar
- Abstract summary: This paper presents a system which can recognise hand poses & gestures from the Indian Sign Language (ISL) in real-time.
The existing solutions either provide relatively low accuracy or do not work in real-time.
It can identify 33 hand poses and some gestures from the ISL.
- Score: 0.45880283710344055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper presents a system which can recognise hand poses & gestures from
the Indian Sign Language (ISL) in real-time using grid-based features. This
system attempts to bridge the communication gap between the hearing- and
speech-impaired and the rest of society. The existing solutions either provide
relatively low accuracy or do not work in real-time. This system provides good
results on both parameters. It can identify 33 hand poses and some gestures
from the ISL. Sign Language is captured from a smartphone camera and its frames
are transmitted to a remote server for processing. The use of any external
hardware (such as gloves or the Microsoft Kinect sensor) is avoided, making it
user-friendly. Techniques such as Face detection, Object stabilisation and Skin
Colour Segmentation are used for hand detection and tracking. The image is
further subjected to a Grid-based Feature Extraction technique which represents
the hand's pose in the form of a Feature Vector. Hand poses are then classified
using the k-Nearest Neighbours algorithm. On the other hand, for gesture
classification, the motion and intermediate hand poses observation sequences
are fed to Hidden Markov Model chains corresponding to the 12 pre-selected
gestures defined in ISL. Using this methodology, the system is able to achieve
an accuracy of 99.7% for static hand poses, and an accuracy of 97.23% for
gesture recognition.
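
As a concrete illustration of the detection stage described above, here is a minimal sketch of face exclusion plus skin-colour segmentation in OpenCV. The colour space, thresholds, and cascade choice are assumptions for illustration; the paper does not publish its exact values.

```python
import cv2
import numpy as np

# Standard Haar cascade shipped with OpenCV (assumed stand-in for the
# paper's face-detection step).
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def hand_mask(frame_bgr):
    """Return a binary mask of candidate hand pixels in a BGR frame."""
    frame = frame_bgr.copy()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Zero out detected faces so skin segmentation leaves only the hands.
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.3, 5):
        frame[y:y + h, x:x + w] = 0
    # Skin segmentation in YCrCb; these bounds are common defaults,
    # not the authors' calibrated values.
    ycrcb = cv2.cvtColor(frame, cv2.COLOR_BGR2YCrCb)
    mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    # Morphological opening removes speckle noise from the mask.
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
```

The grid-based feature extraction and k-Nearest Neighbours step can be sketched as follows. The abstract does not state the grid resolution or k, so GRID = 10 and k = 5 are illustrative assumptions, with scikit-learn standing in for the classifier.

```python
import cv2
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

GRID = 10  # assumed grid resolution; the paper's value is not given

def grid_features(mask):
    """Describe a binary hand mask as the fraction of skin pixels per cell."""
    x, y, w, h = cv2.boundingRect(mask)            # crop to the hand region
    hand = mask[y:y + h, x:x + w]
    hand = cv2.resize(hand, (GRID * 8, GRID * 8))  # normalise scale
    # Average each 8x8 block to get one occupancy value per grid cell.
    cells = hand.reshape(GRID, 8, GRID, 8).mean(axis=(1, 3)) / 255.0
    return cells.ravel()                           # GRID*GRID feature vector

# Training/inference wiring (X: feature vectors of labelled pose images,
# y: pose ids for the 33 hand poses):
# knn = KNeighborsClassifier(n_neighbors=5).fit(X, y)
# pose = knn.predict([grid_features(hand_mask(frame))])[0]
```

For the gesture stage, the abstract says each of the 12 pre-selected gestures has its own HMM chain and the observation sequence is scored against all of them. A plain forward-algorithm scorer under that reading is sketched below; the trained HMM parameters themselves are not public, so the models here are placeholders.

```python
import numpy as np

def forward_loglik(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under one HMM.

    obs:   sequence of observation symbols (ints, e.g. per-frame pose ids)
    start: (S,) initial state probabilities
    trans: (S, S) transition matrix, trans[i, j] = P(state j | state i)
    emit:  (S, V) emission matrix over V observation symbols
    """
    alpha = start * emit[:, obs[0]]
    log_p = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans) * emit[:, o]
        s = alpha.sum()        # rescale at each step to avoid underflow
        log_p += np.log(s)
        alpha /= s
    return log_p

# models: dict mapping each of the 12 gesture names to (start, trans, emit).
# best_gesture = max(models, key=lambda g: forward_loglik(obs, *models[g]))
```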
Related papers
- EvSign: Sign Language Recognition and Translation with Streaming Events [59.51655336911345]
Event cameras can naturally perceive dynamic hand movements, providing rich manual cues for sign language tasks.
We propose an efficient transformer-based framework for event-based SLR and SLT tasks.
Our method performs favorably against existing state-of-the-art approaches with only 0.34% computational cost.
arXiv Detail & Related papers (2024-07-17T14:16:35Z)
- Sign Language Recognition Based On Facial Expression and Hand Skeleton [2.5879170041667523]
We propose a sign language recognition network that integrates hand skeleton features and facial expression features.
By incorporating facial expression information, the accuracy and robustness of sign language recognition are improved.
arXiv Detail & Related papers (2024-07-02T13:02:51Z)
- Enhancing Sign Language Detection through Mediapipe and Convolutional Neural Networks (CNN) [3.192629447369627]
This research combines MediaPipe and CNNs for the efficient and accurate interpretation of an ASL dataset.
The accuracy achieved by the model on ASL datasets is 99.12%.
The system will have applications in the communication, education, and accessibility domains.
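
As a rough sketch of what "combining MediaPipe and CNNs" can look like in practice, the snippet below extracts the 21 MediaPipe hand landmarks per image; the downstream CNN and dataset wiring are not specified in the summary, so only the landmark half is shown and the classifier is left abstract.

```python
import cv2
import mediapipe as mp

# MediaPipe Hands solution; static_image_mode suits per-image datasets.
hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

def landmark_vector(image_bgr):
    """Return a 63-dim vector (21 landmarks x 3 coords), or None if no hand."""
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    result = hands.process(rgb)
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return [c for p in lm for c in (p.x, p.y, p.z)]

# These vectors (or the raw crops) would then be fed to the paper's CNN
# classifier, whose architecture is not described in the summary above.
```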
arXiv Detail & Related papers (2024-06-06T04:05:12Z)
- Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z)
- Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks [0.0]
The aim is to develop a real-time word-level sign language recognition system that translates sign language to text.
For this analysis, the user captures images of hand movements using a web camera.
Our model is trained using a convolutional neural network (CNN), which is then used to recognise the images.
arXiv Detail & Related papers (2023-04-28T09:27:04Z)
- Reconstructing Signing Avatars From Video Using Linguistic Priors [54.5282429129769]
Sign language (SL) is the primary method of communication for the 70 million Deaf people around the world.
Replacing video dictionaries of isolated signs with 3D avatars can aid learning and enable AR/VR applications.
SGNify captures fine-grained hand pose, facial expression, and body movement fully automatically from in-the-wild monocular SL videos.
arXiv Detail & Related papers (2023-04-20T17:29:50Z)
- HaGRID - HAnd Gesture Recognition Image Dataset [79.21033185563167]
This paper introduces HaGRID, a large dataset for building hand gesture recognition systems focused on interacting with and managing devices.
Although the gestures are static, they were chosen in particular so that several dynamic gestures can be designed from them.
The HaGRID contains 554,800 images and bounding box annotations with gesture labels to solve hand detection and gesture classification tasks.
arXiv Detail & Related papers (2022-06-16T14:41:32Z)
- SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild [62.450907796261646]
Recognition of hand gestures can be performed directly from the stream of hand skeletons estimated by software.
Despite the recent advancements in gesture and action recognition from skeletons, it is unclear how well the current state-of-the-art techniques can perform in a real-world scenario.
This paper presents the results of the SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild contest.
arXiv Detail & Related papers (2021-06-21T10:57:49Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognising sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Understanding the hand-gestures using Convolutional Neural Networks and Generative Adversarial Networks [0.0]
The system consists of three modules: real-time hand tracking, gesture training, and gesture recognition using Convolutional Neural Networks.
It has been tested on a vocabulary of 36 gestures, including alphabets and digits, and the results demonstrate the effectiveness of the approach.
arXiv Detail & Related papers (2020-11-10T02:20:43Z)
- FineHand: Learning Hand Shapes for American Sign Language Recognition [16.862375555609667]
We present an approach for effective learning of hand shape embeddings, which are discriminative for ASL gestures.
For hand shape recognition, our method uses a mix of manually labelled hand shapes and high-confidence predictions to train a deep convolutional neural network (CNN), as sketched below the list.
We demonstrate that higher-quality hand shape models can significantly improve the accuracy of final video gesture classification.
arXiv Detail & Related papers (2020-03-04T23:32:08Z)
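
A minimal sketch of the labelled-plus-high-confidence training mix described in the FineHand entry above, with a scikit-learn-style model standing in for the paper's deep CNN; the confidence threshold and data handling are assumed values, not the authors' settings.

```python
import numpy as np

def pseudo_label(model, X_unlabelled, threshold=0.95):
    """Keep unlabelled samples whose top predicted class is confident."""
    probs = model.predict_proba(X_unlabelled)   # any predict_proba model
    conf = probs.max(axis=1)
    keep = conf >= threshold                    # assumed confidence cutoff
    return X_unlabelled[keep], probs[keep].argmax(axis=1)

# Round 1: fit on the manually labelled pool, then augment it with
# high-confidence predictions and refit.
# model.fit(X_labelled, y_labelled)
# X_extra, y_extra = pseudo_label(model, X_unlabelled)
# model.fit(np.vstack([X_labelled, X_extra]),
#           np.concatenate([y_labelled, y_extra]))
```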
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.