Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks
- URL: http://arxiv.org/abs/2304.14710v1
- Date: Fri, 28 Apr 2023 09:27:04 GMT
- Title: Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks
- Authors: Mallikharjuna Rao K, Harleen Kaur, Sanjam Kaur Bedi, and M A Lekhana
- Abstract summary: The goal of this work is to develop a real-time, word-level sign language recognition system that translates sign language to text.
The user captures images of hand gestures with a web camera.
A convolutional neural network (CNN) is trained and then used to recognize the captured images.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: People with vocal and hearing disabilities use sign language to express
themselves through visual gestures and signs. Although sign language addresses the
communication difficulties faced by deaf people, a barrier remains because most of
the general population cannot understand it, especially in places such as banks,
airports, and supermarkets [1]. A sign language recognition (SLR) system is needed
to solve this problem. The main focus of this work is to develop a real-time,
word-level sign language recognition system that translates sign language to text.
Much research has been done on ASL (American Sign Language), so we have worked on
ISL (Indian Sign Language) to cater to the needs of the deaf and hard-of-hearing
community of India [2]. In this research, we present an Indian Sign Language-based
sign language recognition system. The user captures images of hand gestures with a
web camera, and the system predicts and displays the label of the captured image.
The acquired image goes through several processing phases, some of which use
computer vision techniques, including grayscale conversion, dilation, and masking.
Our model is trained using a convolutional neural network (CNN), which is then used
to recognize the images. Our best model achieves a 99% accuracy rate [3].
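To make the described pipeline concrete, the sketch below strings together the steps the abstract mentions (grayscale conversion, masking, dilation, then CNN classification), assuming OpenCV and Keras; the layer sizes, image size, and class count are illustrative placeholders, not the authors' actual architecture.

```python
# Illustrative sketch of the described pipeline (not the authors' code): grayscale,
# Otsu masking, dilation, then a small CNN classifier. All sizes are placeholders.
import cv2
import numpy as np
from tensorflow.keras import layers, models

def preprocess(frame, size=(64, 64)):
    """Grayscale -> threshold mask -> dilation -> resized float image in [0, 1]."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dilated = cv2.dilate(mask, np.ones((3, 3), np.uint8), iterations=1)
    resized = cv2.resize(dilated, size)
    return resized.astype("float32")[..., None] / 255.0   # shape (H, W, 1)

def build_cnn(num_classes, input_shape=(64, 64, 1)):
    """A small CNN classifier; the paper's exact architecture is not reproduced here."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])

# Usage sketch: grab one webcam frame and predict a word label (after training).
# cap = cv2.VideoCapture(0)
# ok, frame = cap.read()
# model = build_cnn(num_classes=50)                  # 50 ISL words is a placeholder
# probs = model.predict(preprocess(frame)[None, ...])
```

A trained model of this shape would be queried on each captured webcam frame, with the predicted class index mapped back to an ISL word.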
Related papers
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- EvSign: Sign Language Recognition and Translation with Streaming Events [59.51655336911345]
Event cameras can naturally perceive dynamic hand movements, providing rich manual clues for sign language tasks.
We propose an efficient transformer-based framework for event-based SLR and SLT tasks.
Our method performs favorably against existing state-of-the-art approaches with only 0.34% of the computational cost.
arXiv Detail & Related papers (2024-07-17T14:16:35Z)
- Enhancing Brazilian Sign Language Recognition through Skeleton Image Representation [2.6311088262657907]
This work proposes an Isolated Sign Language Recognition (ISLR) approach in which body, hand, and facial landmarks are extracted over time and encoded as 2-D images.
We show that our method surpasses the state of the art in terms of performance metrics on two widely recognized Brazilian Sign Language (LIBRAS) datasets.
In addition to being more accurate, our method is more time-efficient and easier to train due to its reliance on a simpler network architecture and solely RGB data as input.
arXiv Detail & Related papers (2024-04-29T23:21:17Z)
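The skeleton-image representation described in the entry above can be illustrated with a toy encoding: rows index time, columns index landmarks, and channels hold the normalized (x, y) coordinates, producing an image that a standard 2-D CNN can consume. The layout below is an assumption for illustration only, not the exact encoding of the LIBRAS paper.

```python
# Toy sketch: encode a landmark sequence as a (time x joints x 2) "image".
# The exact layout used by the LIBRAS paper may differ; this is only illustrative.
import numpy as np

def landmarks_to_image(frames):
    """frames: list of (num_joints, 2) arrays holding (x, y) landmark coordinates."""
    img = np.stack(frames, axis=0).astype("float32")     # (T, J, 2)
    img -= img.min(axis=(0, 1), keepdims=True)           # shift each channel to 0
    img /= img.max(axis=(0, 1), keepdims=True) + 1e-8    # scale each channel to [0, 1]
    return img                                           # ready for a 2-D CNN

# Example with random placeholder data: 30 frames of 75 landmarks (body + hands + face).
dummy = [np.random.rand(75, 2) for _ in range(30)]
print(landmarks_to_image(dummy).shape)                   # (30, 75, 2)
```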
- Neural Sign Actors: A diffusion model for 3D sign language production from text [51.81647203840081]
Sign Languages (SL) serve as the primary mode of communication for the Deaf and Hard of Hearing communities.
This work makes an important step towards realistic neural sign avatars, bridging the communication gap between Deaf and hearing communities.
arXiv Detail & Related papers (2023-12-05T12:04:34Z)
- A Comparative Analysis of Techniques and Algorithms for Recognising Sign Language [0.9311364633437358]
Sign language is frequently used as the primary form of communication by people with hearing loss.
It is necessary to create human-computer interface systems that can offer hearing-impaired people a social platform.
Most commercial sign language translation systems are sensor-based, pricey, and challenging to use.
arXiv Detail & Related papers (2023-05-05T10:52:18Z)
- Indian Sign Language Recognition Using Mediapipe Holistic [0.0]
We will create a robust system for sign language recognition in order to convert Indian Sign Language to text or speech.
The creation of a text-to-sign-language paradigm is essential, since it will enhance communication for the deaf and hard-of-hearing population that depends on sign language.
arXiv Detail & Related papers (2023-04-20T12:25:47Z)
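As a rough illustration of the landmark extraction in the entry above, the sketch below uses MediaPipe Holistic to pull pose and hand keypoints from a single webcam frame; the downstream mapping to ISL text or speech is omitted, and the snippet is not the paper's actual pipeline.

```python
# Minimal sketch: extract holistic landmarks from one webcam frame with MediaPipe.
# Mapping the landmarks to ISL words, text, or speech is out of scope here.
import cv2
import mediapipe as mp

holistic = mp.solutions.holistic.Holistic(static_image_mode=True)

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()

if ok:
    results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    for name in ("pose_landmarks", "left_hand_landmarks", "right_hand_landmarks"):
        lm = getattr(results, name)
        print(name, len(lm.landmark) if lm else 0, "landmarks detected")

holistic.close()
```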
- All You Need In Sign Language Production [50.3955314892191]
Sign language recognition and production need to cope with some critical challenges.
We present an introduction to Deaf culture, Deaf centers, and the psychological perspective of sign language.
Also, the backbone architectures and methods in SLP are briefly introduced and the proposed taxonomy on SLP is presented.
arXiv Detail & Related papers (2022-01-05T13:45:09Z)
- Sign Language Recognition System using TensorFlow Object Detection API [0.0]
In this paper, we propose a method to create an Indian Sign Language dataset using a webcam and then, using transfer learning, train a model to build a real-time sign language recognition system.
The system achieves a good level of accuracy even with a limited size dataset.
arXiv Detail & Related papers (2022-01-05T07:13:03Z)
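The entry above fine-tunes a detector from the TensorFlow Object Detection API on a webcam-collected dataset. As a simpler, hedged stand-in for the same transfer-learning idea, the sketch below freezes a pretrained Keras MobileNetV2 backbone and adds a small classification head; the class count, image size, and dataset objects are placeholders.

```python
# Transfer-learning sketch with a Keras MobileNetV2 backbone (a stand-in, not the
# TensorFlow Object Detection API workflow the paper uses). Placeholders throughout.
from tensorflow.keras import layers, models
from tensorflow.keras.applications import MobileNetV2

def build_transfer_model(num_classes, input_shape=(224, 224, 3)):
    backbone = MobileNetV2(include_top=False, weights="imagenet",
                           input_shape=input_shape, pooling="avg")
    backbone.trainable = False                     # keep pretrained features frozen
    model = models.Sequential([
        backbone,
        layers.Dropout(0.2),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_transfer_model(num_classes=10)     # e.g. 10 ISL signs (placeholder)
# model.fit(train_ds, validation_data=val_ds, epochs=5)   # tf.data datasets assumed
```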
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation [jin2020whole], we propose recognizing sign language based on whole-body key points and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Everybody Sign Now: Translating Spoken Language to Photo Realistic Sign Language Video [43.45785951443149]
To be truly understandable by Deaf communities, an automatic Sign Language Production system must generate a photo-realistic signer.
We propose SignGAN, the first SLP model to produce photo-realistic continuous sign language videos directly from spoken language.
A pose-conditioned human synthesis model is then introduced to generate a photo-realistic sign language video from the skeletal pose sequence.
arXiv Detail & Related papers (2020-11-19T14:31:06Z)
- Novel Approach to Use HU Moments with Image Processing Techniques for Real Time Sign Language Communication [0.0]
"Sign Language Communicator" (SLC) is designed to solve the language barrier between the sign language users and the rest of the world.
System is able to recognize selected Sign Language signs with the accuracy of 84% without a controlled background with small light adjustments.
arXiv Detail & Related papers (2020-07-20T03:10:18Z)
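The Hu-moment approach in the entry above reduces a segmented hand shape to seven rotation- and scale-invariant numbers that a lightweight classifier can map to signs. A minimal OpenCV sketch, assuming the hand can be roughly separated from the background by thresholding (the file name is hypothetical):

```python
# Minimal sketch: seven Hu moment features from a hand image (thresholds illustrative).
import cv2
import numpy as np

def hu_features(image_path):
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()        # 7 invariant moments
    # Log-scale the moments, which otherwise span many orders of magnitude.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# features = hu_features("sign_frame.png")                 # hypothetical file name
```

In practice these seven features would be fed to a small classifier such as k-NN or an SVM trained on labeled sign images.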