Gesture based Arabic Sign Language Recognition for Impaired People based
on Convolution Neural Network
- URL: http://arxiv.org/abs/2203.05602v1
- Date: Thu, 10 Mar 2022 19:36:04 GMT
- Title: Gesture based Arabic Sign Language Recognition for Impaired People based
on Convolution Neural Network
- Authors: Rady El Rwelli, Osama R. Shahin, Ahmed I. Taloba
- Abstract summary: Recognition of Arabic Sign Language (ArSL) has become a challenging research subject because the language varies from one territory to another. The proposed system takes Arabic Sign Language hand gestures as input and produces vocalized speech as output. The results were recognized with 90% accuracy.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Research on the Arabic Sign Language has produced outstanding achievements in identifying gestures and hand signs using deep learning methodology. The term "forms of communication" refers to the actions hearing-impaired people use to communicate; these actions are difficult for ordinary people to comprehend. Recognition of Arabic Sign Language (ArSL) has become a challenging research subject because the language varies from one territory to another, and even within states. The proposed system, based on machine learning, encapsulates a Convolutional Neural Network (CNN) and uses a wearable sensor for recognizing the Arabic Sign Language. The approach is designed as a single system that can accommodate all Arabic gestures, so it can be used by impaired people in the local Arabic community, and it achieves reasonable, moderate accuracy. A deep convolutional network is first developed to extract features from the data gathered by the sensing devices; these sensors can reliably recognize the 30 hand-sign letters of the Arabic Sign Language. The hand movements in the dataset were captured using DG5-V hand gloves fitted with wearable sensors, and the CNN technique is used for categorization. The suggested system takes Arabic Sign Language hand gestures as input and produces vocalized speech as output, and its results were recognized with 90% accuracy.
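The pipeline the abstract describes (glove sensor signals, convolutional feature extraction, classification over 30 letter classes) can be sketched as a single forward pass. This is a minimal illustration, not the paper's implementation: the channel count, window length, filter sizes, and random parameters are all assumptions made for the sketch.

```python
import math
import random

random.seed(0)

# Assumed dimensions: 5 flex-sensor channels per glove frame, 32 time
# steps per gesture window, 30 letter classes, 8 conv filters of width 5.
N_CHANNELS, N_STEPS, N_CLASSES, N_FILTERS, WIDTH = 5, 32, 30, 8, 5

def conv1d_relu(x, kernels):
    """Valid-mode 1D convolution over sensor channels, followed by ReLU.
    x: [channels][steps], kernels: [filters][channels][width]."""
    out_steps = len(x[0]) - WIDTH + 1
    out = []
    for k in kernels:
        row = []
        for t in range(out_steps):
            s = sum(k[c][w] * x[c][t + w]
                    for c in range(N_CHANNELS) for w in range(WIDTH))
            row.append(max(s, 0.0))  # ReLU non-linearity
        out.append(row)
    return out

def classify(x, kernels, weights, bias):
    feats = conv1d_relu(x, kernels)                   # feature maps
    pooled = [sum(row) / len(row) for row in feats]   # global average pooling
    logits = [sum(wr[f] * pooled[f] for f in range(N_FILTERS)) + b
              for wr, b in zip(weights, bias)]        # linear classifier head
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]                      # softmax over 30 letters

def rand(*shape):
    """Nested list of small random weights with the given shape."""
    if len(shape) == 1:
        return [random.gauss(0, 0.1) for _ in range(shape[0])]
    return [rand(*shape[1:]) for _ in range(shape[0])]

# Untrained random parameters: this checks shapes, not accuracy.
kernels = rand(N_FILTERS, N_CHANNELS, WIDTH)
weights = rand(N_CLASSES, N_FILTERS)
bias = [0.0] * N_CLASSES
frame = rand(N_CHANNELS, N_STEPS)

probs = classify(frame, kernels, weights, bias)
print(len(probs), round(sum(probs), 6))
```

In the actual system a trained network of this shape would feed its argmax class to a text-to-speech stage to produce the vocalized output.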
Related papers
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- A Transformer-Based Multi-Stream Approach for Isolated Iranian Sign Language Recognition [0.0]
This research aims to recognize Iranian Sign Language words with the help of the latest deep learning tools such as transformers.
The dataset used includes 101 Iranian Sign Language words frequently used in academic environments such as universities.
arXiv Detail & Related papers (2024-06-27T06:54:25Z)
- Self-Supervised Representation Learning with Spatial-Temporal Consistency for Sign Language Recognition [96.62264528407863]
We propose a self-supervised contrastive learning framework to excavate rich context via spatial-temporal consistency.
Inspired by the complementary property of motion and joint modalities, we first introduce first-order motion information into sign language modeling.
Our method is evaluated with extensive experiments on four public benchmarks, and achieves new state-of-the-art performance with a notable margin.
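Contrastive frameworks of this kind typically optimize an InfoNCE-style objective: pull an anchor representation toward a positive view (e.g. a spatially or temporally consistent augmentation of the same sign clip) and away from negatives. The sketch below shows that generic objective, not the paper's actual loss; the similarity values and temperature are made-up inputs.

```python
import math

def info_nce(sims, pos_idx, temperature=0.1):
    """InfoNCE loss for one anchor. sims holds cosine similarities
    between the anchor and every candidate (one positive + negatives);
    the loss is the negative log-probability of the positive under a
    temperature-scaled softmax."""
    logits = [s / temperature for s in sims]
    m = max(logits)  # subtract the max to stabilize the softmax
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[pos_idx]

# A well-aligned positive among weak negatives gives a small loss;
# a poorly aligned positive gives a large loss.
good = info_nce([0.9, 0.1, -0.2, 0.0], pos_idx=0)
bad = info_nce([0.1, 0.9, 0.8, 0.7], pos_idx=0)
print(round(good, 4), round(bad, 4))
```

The temperature controls how sharply the loss concentrates on the hardest negatives; 0.1 is a common default, not a value taken from the paper.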
arXiv Detail & Related papers (2024-06-15T04:50:19Z)
- Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks [0.0]
This work develops a real-time, word-level sign language recognition system that translates sign language to text.
For this analysis, the user must be able to take pictures of hand movements using a web camera.
Our model is trained using a convolutional neural network (CNN), which is then utilized to recognize the images.
arXiv Detail & Related papers (2023-04-28T09:27:04Z)
- Weakly-supervised Fingerspelling Recognition in British Sign Language Videos [85.61513254261523]
Previous fingerspelling recognition methods have not focused on British Sign Language (BSL).
In contrast to previous methods, our method only uses weak annotations from subtitles for training.
We propose a Transformer architecture adapted to this task, with a multiple-hypothesis CTC loss function to learn from alternative annotation possibilities.
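The CTC objective mentioned above scores label sequences over per-frame predictions that include a special blank symbol; at inference time the simplest decoder collapses repeated labels and drops blanks. Below is a generic sketch of that greedy decoding step, which illustrates the blank/merge mechanics of CTC; the paper's multiple-hypothesis training loss is more involved, and the label ids here are invented.

```python
def ctc_greedy_decode(frame_labels, blank=0):
    """Collapse a per-frame best path (one label id per frame) into an
    output sequence: merge consecutive repeats, then remove blanks."""
    decoded, prev = [], None
    for label in frame_labels:
        if label != prev and label != blank:
            decoded.append(label)
        prev = label
    return decoded

# Frames: blank blank A A blank B B blank A  ->  A B A
print(ctc_greedy_decode([0, 0, 1, 1, 0, 2, 2, 0, 1]))  # → [1, 2, 1]
```

Note that a blank between two identical labels is what allows CTC to emit a doubled letter, which matters for fingerspelling words with repeated characters.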
arXiv Detail & Related papers (2022-11-16T15:02:36Z)
- Sign Language Recognition System using TensorFlow Object Detection API [0.0]
In this paper, we propose a method to create an Indian Sign Language dataset using a webcam and then using transfer learning, train a model to create a real-time Sign Language Recognition system.
The system achieves a good level of accuracy even with a limited size dataset.
arXiv Detail & Related papers (2022-01-05T07:13:03Z)
- Egyptian Sign Language Recognition Using CNN and LSTM [0.0]
We present a computer vision system with two different neural networks architectures.
The two models achieved an accuracy of 90% and 72%, respectively.
We examined the power of these two architectures to distinguish between 9 common words (with similar signs) among some deaf people community in Egypt.
arXiv Detail & Related papers (2021-07-28T21:33:35Z)
- Sign Language Production: A Review [51.07720650677784]
Sign language is the dominant form of communication used in the deaf and hearing-impaired community.
To make an easy and mutual communication between the hearing-impaired and the hearing communities, building a robust system capable of translating the spoken language into sign language is fundamental.
To this end, sign language recognition and production are two necessary parts for making such a two-way system.
arXiv Detail & Related papers (2021-03-29T19:38:22Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (jin2020whole), we propose recognizing sign language based on whole-body key points and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Acoustics Based Intent Recognition Using Discovered Phonetic Units for Low Resource Languages [51.0542215642794]
We propose a novel acoustics based intent recognition system that uses discovered phonetic units for intent classification.
We present results for two language families, Indic languages and Romance languages, on two different intent recognition tasks.
arXiv Detail & Related papers (2020-11-07T00:35:31Z)