Learning Sign Language Representation using CNN LSTM, 3DCNN, CNN RNN LSTM and CCN TD
- URL: http://arxiv.org/abs/2412.18187v1
- Date: Tue, 24 Dec 2024 05:47:08 GMT
- Title: Learning Sign Language Representation using CNN LSTM, 3DCNN, CNN RNN LSTM and CCN TD
- Authors: Nikita Louison, Wayne Goodridge, Koffka Khan
- Abstract summary: The aim of this paper is to evaluate and identify the best neural network algorithm that can facilitate a sign language tuition system.
The 3DCNN algorithm was found to be the best-performing of these, with 91% accuracy on the TTSL dataset and 83% on the ASL dataset.
- Score: 1.2494184403263338
- Abstract: Existing sign language learning applications focus on demonstrating a sign in the hope that the student will copy it correctly; in these cases, only a teacher can confirm that the sign was performed correctly, by reviewing a manually captured video. Sign language translation is a widely explored field in visual recognition. This paper explores algorithms that allow real-time video sign translation and grading of signing accuracy for new sign language users, which requires algorithms capable of recognizing and processing both spatial and temporal features. The aim of this paper is to evaluate and identify the neural network algorithm best suited to a sign language tuition system of this nature. Popular modern algorithms, including CNN and 3DCNN, are compared on a previously unexplored dataset, Trinidad and Tobago Sign Language (TTSL), as well as an American Sign Language (ASL) dataset. The 3DCNN was found to be the best-performing algorithm of these, with 91% accuracy on the TTSL dataset and 83% on the ASL dataset.
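As a concrete illustration of the spatial and temporal processing described above, here is a minimal 3DCNN sketch in PyTorch. The clip length, layer sizes, and class count are illustrative assumptions, not the paper's exact architecture.

```python
# Minimal 3DCNN sketch for sign video classification (illustrative only;
# layer sizes, clip length, and class count are assumptions, not the
# paper's exact architecture).
import torch
import torch.nn as nn

class Sign3DCNN(nn.Module):
    def __init__(self, num_classes: int = 26):  # 26: alphabet-style sign set (assumption)
        super().__init__()
        self.features = nn.Sequential(
            # Conv3d convolves over (time, height, width) jointly, so the
            # model captures both spatial and temporal structure of a sign.
            nn.Conv3d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=(1, 2, 2)),  # downsample space, keep time
            nn.Conv3d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(kernel_size=2),          # downsample space and time
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, frames, H, W), e.g. a 16-frame RGB clip.
        return self.classifier(self.features(x).flatten(1))

model = Sign3DCNN(num_classes=26)
clip = torch.randn(2, 3, 16, 112, 112)  # two dummy 16-frame clips
logits = model(clip)                    # (2, 26) class scores
```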
Related papers
- Signs as Tokens: An Autoregressive Multilingual Sign Language Generator [55.94334001112357]
We introduce a multilingual sign language model, Signs as Tokens (SOKE), which can generate 3D sign avatars autoregressively from text inputs.
To align sign language with the LM, we develop a decoupled tokenizer that discretizes continuous signs into token sequences representing various body parts.
These sign tokens are integrated into the raw text vocabulary of the LM, allowing for supervised fine-tuning on sign language datasets.
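A minimal sketch of the general idea of such a decoupled tokenizer: continuous per-part sign features are snapped to their nearest codebook entries, and the resulting ids are offset into ranges appended after the text vocabulary. The codebook sizes and the offset scheme here are assumptions for illustration, not SOKE's actual design.

```python
# Hedged sketch: discretize continuous sign features into token ids via
# per-body-part codebooks (sizes and offset scheme are assumptions).
import torch

def quantize(features: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    # features: (T, D) continuous per-frame features for one body part;
    # codebook: (K, D). Return the index of the nearest codebook entry.
    dists = torch.cdist(features, codebook)  # (T, K) pairwise distances
    return dists.argmin(dim=1)               # (T,) discrete token ids

# One codebook per body part, as in a decoupled tokenizer.
codebooks = {"body": torch.randn(256, 64), "hands": torch.randn(256, 64)}
frames = {part: torch.randn(30, 64) for part in codebooks}  # 30 frames

# Offset each part's ids into disjoint ranges appended after the raw
# text vocabulary, so the LM can emit sign tokens like ordinary words.
text_vocab_size = 32000  # illustrative LM vocabulary size
tokens, offset = [], text_vocab_size
for part, cb in codebooks.items():
    tokens.append(quantize(frames[part], cb) + offset)
    offset += cb.shape[0]
```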
arXiv Detail & Related papers (2024-11-26T18:28:09Z)
- Enhancing Bidirectional Sign Language Communication: Integrating YOLOv8 and NLP for Real-Time Gesture Recognition & Translation [1.08935184607501]
The system uses the You Only Look Once (YOLO) model together with a Convolutional Neural Network (CNN) model.
The YOLO model runs in real time and automatically extracts discriminative spatial-temporal features from the raw video stream.
The CNN model also runs in real time for sign language detection.
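A minimal real-time detection loop of the kind described, using the Ultralytics YOLO API; the weights file below is a hypothetical gesture-trained checkpoint, not a released model.

```python
# Hedged sketch of a real-time detection loop with Ultralytics YOLO;
# "sign_yolov8.pt" is a placeholder for a gesture-trained checkpoint.
import cv2
from ultralytics import YOLO

model = YOLO("sign_yolov8.pt")  # hypothetical gesture-trained weights
cap = cv2.VideoCapture(0)       # default webcam

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the raw frame and draw the predicted boxes/labels.
    results = model(frame, verbose=False)
    cv2.imshow("signs", results[0].plot())
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```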
arXiv Detail & Related papers (2024-11-18T19:55:11Z)
- Deep Neural Network-Based Sign Language Recognition: A Comprehensive Approach Using Transfer Learning with Explainability [0.0]
We suggest a novel solution that uses a deep neural network to fully automate sign language recognition.
The methodology integrates sophisticated preprocessing techniques to optimise overall performance.
Our model's ability to provide informational clarity was assessed using the SHAP (SHapley Additive exPlanations) method.
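A hedged sketch of the SHAP step: GradientExplainer attributes a classifier's predictions to its input pixels. The model and data below are stand-ins, not the paper's transfer-learning network.

```python
# Hedged sketch of applying SHAP to an image classifier; the model and
# data here are stand-ins, not the paper's actual network.
import shap
import torch
import torchvision

model = torchvision.models.resnet18(weights=None)  # stand-in classifier
model.eval()

background = torch.randn(8, 3, 224, 224)   # reference samples
test_images = torch.randn(2, 3, 224, 224)  # images to explain

# GradientExplainer attributes each prediction to input pixels, giving
# the kind of per-feature "informational clarity" the paper assesses.
explainer = shap.GradientExplainer(model, background)
shap_values = explainer.shap_values(test_images)
```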
arXiv Detail & Related papers (2024-09-11T17:17:44Z)
- T2S-GPT: Dynamic Vector Quantization for Autoregressive Sign Language Production from Text [59.57676466961787]
We propose a novel dynamic vector quantization (DVA-VAE) model that can adjust the encoding length based on the information density in sign language.
Experiments conducted on the PHOENIX14T dataset demonstrate the effectiveness of our proposed method.
We propose a new large German sign language dataset, PHOENIX-News, which contains 486 hours of sign language videos, audio, and transcription texts.
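For context, a minimal sketch of the plain vector-quantization bottleneck that such a model builds on; the dynamic part, adapting how many tokens encode a clip based on information density, is omitted here.

```python
# Hedged sketch of a standard VQ-VAE quantization step; the dynamic
# length adaptation of DVA-VAE is deliberately not modeled here.
import torch
import torch.nn.functional as F

def vq(z: torch.Tensor, codebook: torch.Tensor):
    # z: (T, D) encoder outputs; codebook: (K, D) learnable codes.
    idx = torch.cdist(z, codebook).argmin(dim=1)  # nearest code per step
    z_q = codebook[idx]                           # quantized latents
    # Straight-through estimator: forward uses z_q, gradients flow to z.
    z_q = z + (z_q - z).detach()
    commit_loss = F.mse_loss(z, z_q.detach())     # commitment term
    return z_q, idx, commit_loss

z = torch.randn(30, 64, requires_grad=True)       # 30 encoded timesteps
z_q, idx, loss = vq(z, torch.randn(512, 64))      # 512-entry codebook
```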
arXiv Detail & Related papers (2024-06-11T10:06:53Z)
- Enhancing Sign Language Detection through Mediapipe and Convolutional Neural Networks (CNN) [3.192629447369627]
This research combines MediaPipe and CNNs for efficient and accurate interpretation of an ASL dataset.
The accuracy achieved by the model on ASL datasets is 99.12%.
The system will have applications in the communication, education, and accessibility domains.
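A minimal sketch of the MediaPipe half of such a pipeline: hand landmarks are extracted per image and flattened into a feature vector for a downstream classifier (omitted here). The image path is a placeholder.

```python
# Hedged sketch of extracting hand landmarks with MediaPipe as input
# features for a CNN/MLP classifier; "sign.jpg" is a placeholder path.
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=1)

image = cv2.imread("sign.jpg")  # placeholder path to an ASL image
# MediaPipe expects RGB input; OpenCV loads BGR.
result = hands.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

if result.multi_hand_landmarks:
    lm = result.multi_hand_landmarks[0].landmark
    # 21 landmarks, each with normalized x, y, z -> a 63-dim feature
    # vector that a small classifier can map to ASL letters.
    features = [c for p in lm for c in (p.x, p.y, p.z)]
```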
arXiv Detail & Related papers (2024-06-06T04:05:12Z)
- LSA-T: The first continuous Argentinian Sign Language dataset for Sign Language Translation [52.87578398308052]
Sign language translation (SLT) is an active field of study that encompasses human-computer interaction, computer vision, natural language processing and machine learning.
This paper presents the first continuous Argentinian Sign Language (LSA) dataset.
It contains 14,880 sentence-level videos of LSA extracted from the CN Sordos YouTube channel, with labels and keypoint annotations for each signer.
arXiv Detail & Related papers (2022-11-14T14:46:44Z)
- Multi-View Spatial-Temporal Network for Continuous Sign Language Recognition [0.76146285961466]
This paper proposes a multi-view spatial-temporal continuous sign language recognition network.
It is tested on two public sign language datasets, SLR-100 and PHOENIX-Weather 2014T (RWTH).
arXiv Detail & Related papers (2022-04-19T08:43:03Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features.
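A minimal sketch of classifying a sign from a whole-body keypoint sequence with a temporal convolution; 133 points follows the COCO-WholeBody convention, but the network head itself is an illustrative assumption.

```python
# Hedged sketch: classify a sign from a sequence of whole-body keypoints
# with a temporal convolution (the head is illustrative, not the paper's).
import torch
import torch.nn as nn

num_points, frames, classes = 133, 32, 100  # 133 = COCO-WholeBody points

model = nn.Sequential(
    # Input: (batch, 133*2, frames) -- x, y per keypoint, per frame.
    nn.Conv1d(num_points * 2, 128, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.Conv1d(128, 128, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),  # pool over time
    nn.Flatten(),
    nn.Linear(128, classes),
)

keypoints = torch.randn(4, num_points * 2, frames)  # dummy batch
logits = model(keypoints)                           # (4, classes)
```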
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- TSPNet: Hierarchical Feature Learning via Temporal Semantic Pyramid for Sign Language Translation [101.6042317204022]
Sign language translation (SLT) aims to interpret sign video sequences into text-based natural language sentences.
Existing SLT models usually represent sign visual features in a frame-wise manner.
We develop a novel hierarchical sign video feature learning method via a temporal semantic pyramid network, called TSPNet.
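A hedged sketch of the temporal-pyramid idea: frame features are pooled over windows at several scales, so sign boundaries need not align to single frames. Window sizes and fusion are assumptions, not TSPNet's exact design.

```python
# Hedged sketch of multi-scale temporal pooling over frame features;
# window sizes and fusion are assumptions, not TSPNet's exact design.
import torch
import torch.nn.functional as F

def temporal_pyramid(frame_feats: torch.Tensor, scales=(8, 12, 16)):
    # frame_feats: (T, D) per-frame visual features of a sign video.
    x = frame_feats.t().unsqueeze(0)  # (1, D, T) for 1-D pooling
    levels = []
    for w in scales:
        # Overlapping windows (stride w // 2) give segment-level
        # features at each temporal scale of the pyramid.
        levels.append(F.avg_pool1d(x, kernel_size=w, stride=w // 2))
    return [lvl.squeeze(0).t() for lvl in levels]  # list of (T_w, D)

feats = torch.randn(64, 512)        # 64 frames of 512-d features
segments = temporal_pyramid(feats)  # multi-scale segment features
```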
arXiv Detail & Related papers (2020-10-12T05:58:09Z)
- Transferring Cross-domain Knowledge for Video Sign Language Recognition [103.9216648495958]
Word-level sign language recognition (WSLR) is a fundamental task in sign language interpretation.
We propose a novel method that learns domain-invariant visual concepts and fertilizes WSLR models by transferring knowledge from subtitled news sign language to them.
arXiv Detail & Related papers (2020-03-08T03:05:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.