Egyptian Sign Language Recognition Using CNN and LSTM
- URL: http://arxiv.org/abs/2107.13647v1
- Date: Wed, 28 Jul 2021 21:33:35 GMT
- Title: Egyptian Sign Language Recognition Using CNN and LSTM
- Authors: Ahmed Elhagry, Rawan Gla
- Abstract summary: We present a computer vision system with two different neural network architectures.
The two models achieved accuracies of 90% and 72%, respectively.
We examined the power of these two architectures to distinguish between 9 common words (with similar signs) used within the deaf community in Egypt.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sign language is a set of gestures that deaf people use to communicate.
Unfortunately, most hearing people do not understand it, which creates a
communication gap that needs to be filled. Because Egyptian Sign Language (ESL)
varies from one region to another, ESL presents a challenging research problem.
In this work, we provide applied research with a video-based Egyptian sign
language recognition system that serves the local community of deaf people in
Egypt with moderate and reasonable accuracy. We present a computer vision
system with two different neural network architectures. The first is a
Convolutional Neural Network (CNN) for extracting spatial features; the CNN
model was retrained on the Inception model. The second architecture is a CNN
followed by a Long Short-Term Memory (LSTM) network for extracting both spatial
and temporal features. The two models achieved accuracies of 90% and 72%,
respectively. We examined the power of these two architectures to distinguish
between 9 common words (with similar signs) used within the deaf community in
Egypt.
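The second architecture described above, a per-frame CNN feeding an LSTM over the frame sequence, can be sketched as follows. This is a minimal illustrative sketch in PyTorch, not the paper's actual implementation: the layer sizes, the small custom CNN (the paper retrains an Inception backbone instead), and the input resolution are all assumptions chosen to keep the example self-contained.

```python
import torch
import torch.nn as nn

class CNNLSTMClassifier(nn.Module):
    """Sketch of a CNN+LSTM video classifier: a small CNN extracts spatial
    features from each frame, and an LSTM models temporal dependencies
    across frames. Layer sizes are illustrative, not the paper's."""

    def __init__(self, num_classes=9, feat_dim=64, hidden=128):
        super().__init__()
        # Per-frame spatial feature extractor (stand-in for a retrained
        # Inception backbone).
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        # Temporal model over the sequence of frame features.
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, video):                      # video: (batch, frames, 3, H, W)
        b, t = video.shape[:2]
        feats = self.cnn(video.flatten(0, 1))      # (b*t, feat_dim)
        feats = feats.view(b, t, -1)               # (b, t, feat_dim)
        _, (h_n, _) = self.lstm(feats)             # final hidden state
        return self.head(h_n[-1])                  # (b, num_classes)

model = CNNLSTMClassifier()
logits = model(torch.randn(2, 16, 3, 64, 64))      # 2 clips, 16 frames each
print(logits.shape)                                # torch.Size([2, 9])
```

The CNN-only architecture corresponds to dropping the LSTM and classifying a single frame (or averaging per-frame predictions); the LSTM variant is what lets the model exploit motion across frames, which matters when distinct signs share similar hand shapes.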
Related papers
- Neural Sign Actors: A diffusion model for 3D sign language production from text [51.81647203840081]
Sign Languages (SL) serve as the primary mode of communication for the Deaf and Hard of Hearing communities.
This work makes an important step towards realistic neural sign avatars, bridging the communication gap between Deaf and hearing communities.
arXiv Detail & Related papers (2023-12-05T12:04:34Z)
- Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks [0.0]
This work develops a real-time word-level sign language recognition system that translates sign language to text.
For this analysis, the user must be able to take pictures of hand movements using a web camera.
Our model is trained using a convolutional neural network (CNN), which is then utilized to recognize the images.
arXiv Detail & Related papers (2023-04-28T09:27:04Z)
- Toward a realistic model of speech processing in the brain with self-supervised learning [67.7130239674153]
Self-supervised algorithms trained on the raw waveform constitute a promising candidate.
We show that Wav2Vec 2.0 learns brain-like representations with as little as 600 hours of unlabelled speech.
arXiv Detail & Related papers (2022-06-03T17:01:46Z)
- Multi-View Spatial-Temporal Network for Continuous Sign Language Recognition [0.76146285961466]
This paper proposes a multi-view spatial-temporal continuous sign language recognition network.
It is tested on two public sign language datasets, SLR-100 and PHOENIX-Weather 2014T (RWTH).
arXiv Detail & Related papers (2022-04-19T08:43:03Z)
- Gesture based Arabic Sign Language Recognition for Impaired People based on Convolution Neural Network [0.0]
The recognition of Arabic Sign Language (ArSL) has become a difficult research subject due to variations in the language.
The proposed system takes Arabic sign language hand gestures as input and produces vocalized speech as output.
The system achieved a recognition rate of 90%.
arXiv Detail & Related papers (2022-03-10T19:36:04Z)
- Sign Language Recognition System using TensorFlow Object Detection API [0.0]
In this paper, we propose a method to create an Indian Sign Language dataset using a webcam and then using transfer learning, train a model to create a real-time Sign Language Recognition system.
The system achieves a good level of accuracy even with a limited size dataset.
arXiv Detail & Related papers (2022-01-05T07:13:03Z)
- Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble [71.97020373520922]
Sign language is commonly used by deaf or mute people to communicate.
We propose a novel Multi-modal Framework with a Global Ensemble Model (GEM) for isolated Sign Language Recognition (SLR).
Our proposed SAM-SLR-v2 framework is exceedingly effective and achieves state-of-the-art performance with significant margins.
arXiv Detail & Related papers (2021-10-12T16:57:18Z)
- Sexism detection: The first corpus in Algerian dialect with a code-switching in Arabic/French and English [0.3425341633647625]
A new hate speech corpus (Arabic_fr_en) is developed using three different annotators.
For corpus validation, three different machine learning algorithms are used, including deep Convolutional Neural Network (CNN), long short-term memory (LSTM) network and Bi-directional LSTM (Bi-LSTM) network.
Simulation results demonstrate the best performance from the CNN model, which achieved an F1-score of up to 86% on the unbalanced corpus.
arXiv Detail & Related papers (2021-04-03T16:34:51Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body key points and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z) - Acoustics Based Intent Recognition Using Discovered Phonetic Units for
Low Resource Languages [51.0542215642794]
We propose a novel acoustics based intent recognition system that uses discovered phonetic units for intent classification.
We present results for two language families, Indic languages and Romance languages, on two different intent recognition tasks.
arXiv Detail & Related papers (2020-11-07T00:35:31Z) - Video-based Person Re-Identification using Gated Convolutional Recurrent
Neural Networks [89.70701173600742]
In this paper, we introduce a novel gating mechanism to deep neural networks.
Our gating mechanism will learn which regions are helpful for person re-identification and let these regions pass the gate.
Experimental results on two major datasets demonstrate the performance improvements due to the proposed gating mechanism.
arXiv Detail & Related papers (2020-03-21T18:15:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.