Multilingual Communication System with Deaf Individuals Utilizing
Natural and Visual Languages
- URL: http://arxiv.org/abs/2212.00305v1
- Date: Thu, 1 Dec 2022 06:43:44 GMT
- Title: Multilingual Communication System with Deaf Individuals Utilizing
Natural and Visual Languages
- Authors: Tuan-Luc Huynh, Khoi-Nguyen Nguyen-Ngoc, Chi-Bien Chu, Minh-Triet
Tran, Trung-Nghia Le
- Abstract summary: We propose a novel multilingual communication system, namely MUGCAT, to improve the communication efficiency of sign language users.
By converting recognized specific hand gestures into expressive pictures, our MUGCAT system significantly helps deaf people convey their thoughts.
- Score: 12.369283590206628
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: According to the World Federation of the Deaf, more than two hundred
sign languages exist. It is therefore challenging to understand deaf
individuals, even for proficient sign language users, which creates a barrier
between the deaf community and the rest of society. To bridge this language
barrier, we propose a novel multilingual communication system, namely MUGCAT,
to improve the communication efficiency of sign language users. By converting
recognized hand gestures into expressive pictures, which are universally
understood and language independent, our MUGCAT system significantly helps deaf
people convey their thoughts. To overcome the limitation that recognized sign
language often cannot be translated into complete sentences for ordinary
people, we propose to reconstruct meaningful sentences from the incomplete
translation of sign language. We also measure the semantic similarity of the
generated sentences against the fragmented recognized hand gestures to preserve
the original meaning. Experimental results show that the proposed system runs
in real time and synthesizes expressive illustrations and meaningful sentences
from a few sign language hand gestures. This demonstrates that MUGCAT has
promising potential for assisting deaf communication.
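The abstract gives no implementation details, so the following is only a rough, hedged sketch of the kind of semantic-similarity check it mentions: a reconstructed sentence is scored against the recognized gesture glosses with an off-the-shelf sentence encoder. The encoder name, the example glosses, and the candidate sentences are illustrative assumptions, not part of MUGCAT:

    # A minimal sketch (not the authors' code): scoring how well a reconstructed
    # sentence preserves the meaning of fragmented gesture glosses. The encoder
    # choice, glosses, and candidate sentences are assumptions for illustration.
    from sentence_transformers import SentenceTransformer, util

    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder would do

    def semantic_score(glosses, sentence):
        """Cosine similarity between the joined glosses and a candidate sentence."""
        emb = encoder.encode([" ".join(glosses), sentence], convert_to_tensor=True)
        return util.cos_sim(emb[0], emb[1]).item()

    glosses = ["I", "want", "drink", "water"]       # hypothetical recognized signs
    candidates = [
        "I would like to drink some water.",
        "The weather is nice today.",
    ]
    best = max(candidates, key=lambda s: semantic_score(glosses, s))
    print(best, round(semantic_score(glosses, best), 3))  # keeps the closest paraphrase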
Related papers
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks [0.0]
The goal is to develop a real-time, word-level sign language recognition system that translates sign language into text.
For this analysis, the user must be able to take pictures of hand movements using a web camera.
Our model, a convolutional neural network (CNN), is trained on and then used to recognize these images.
arXiv Detail & Related papers (2023-04-28T09:27:04Z)
- Indian Sign Language Recognition Using Mediapipe Holistic [0.0]
We will create a robust system for sign language recognition in order to convert Indian Sign Language to text or speech.
Creating a text-to-sign-language paradigm is essential, since it will enhance communication for the deaf and hard-of-hearing population that depends on sign language.
arXiv Detail & Related papers (2023-04-20T12:25:47Z)
- Cross-Lingual Ability of Multilingual Masked Language Models: A Study of Language Structure [54.01613740115601]
We study three language properties: constituent order, composition and word co-occurrence.
Our main conclusion is that the contributions of constituent order and word co-occurrence are limited, while composition is more crucial to the success of cross-lingual transfer.
arXiv Detail & Related papers (2022-03-16T07:09:35Z)
- All You Need In Sign Language Production [50.3955314892191]
Sign language recognition and production need to cope with some critical challenges.
We present an introduction to Deaf culture, Deaf centers, and the psychological perspective of sign language.
The backbone architectures and methods in SLP are also briefly introduced, and the proposed taxonomy of SLP is presented.
arXiv Detail & Related papers (2022-01-05T13:45:09Z)
- Sign Language Recognition System using TensorFlow Object Detection API [0.0]
In this paper, we propose a method to create an Indian Sign Language dataset using a webcam and then, using transfer learning, train a model for a real-time Sign Language Recognition system (a transfer-learning sketch in this spirit appears after the list below).
The system achieves a good level of accuracy even with a dataset of limited size.
arXiv Detail & Related papers (2022-01-05T07:13:03Z)
- Sign Language Production: A Review [51.07720650677784]
Sign Language is the dominant yet non-primary form of communication used in the deaf and hearing-impaired community.
To enable easy and mutual communication between the hearing-impaired and hearing communities, building a robust system capable of translating spoken language into sign language is fundamental.
To this end, sign language recognition and production are the two necessary components of such a two-way system.
arXiv Detail & Related papers (2021-03-29T19:38:22Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by recent developments in whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features (a keypoint-extraction sketch in this spirit appears after the list below).
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Novel Approach to Use HU Moments with Image Processing Techniques for Real Time Sign Language Communication [0.0]
"Sign Language Communicator" (SLC) is designed to solve the language barrier between the sign language users and the rest of the world.
System is able to recognize selected Sign Language signs with the accuracy of 84% without a controlled background with small light adjustments.
arXiv Detail & Related papers (2020-07-20T03:10:18Z)
- Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation [59.38247587308604]
We introduce a novel transformer based architecture that jointly learns Continuous Sign Language Recognition and Translation.
We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T dataset.
Our translation networks outperform both sign video to spoken language and gloss to spoken language translation models.
arXiv Detail & Related papers (2020-03-30T21:35:09Z)
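For the Mediapipe Holistic and whole-body-keypoint entries above, the following is a minimal sketch of how skeleton features can be extracted from a webcam frame with MediaPipe Holistic; the landmark selection and zero-padding are assumptions, not the pipeline of either paper:

    # A minimal sketch: whole-body keypoints from one webcam frame with
    # MediaPipe Holistic. Landmark layout (pose + both hands) and zero-padding
    # for missed detections are illustrative assumptions.
    import cv2
    import mediapipe as mp

    def flatten(landmarks, count):
        """Return x, y, z per landmark, or zeros when nothing was detected."""
        if landmarks is None:
            return [0.0] * count * 3
        return [v for lm in landmarks.landmark for v in (lm.x, lm.y, lm.z)]

    def extract_keypoints(frame_bgr, holistic):
        results = holistic.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
        return (flatten(results.pose_landmarks, 33)         # 33 body landmarks
                + flatten(results.left_hand_landmarks, 21)  # 21 per hand
                + flatten(results.right_hand_landmarks, 21))

    cap = cv2.VideoCapture(0)  # webcam input, as in the reviewed systems
    with mp.solutions.holistic.Holistic(min_detection_confidence=0.5) as holistic:
        ok, frame = cap.read()
        if ok:
            keypoints = extract_keypoints(frame, holistic)
            print(len(keypoints))  # 225 values per frame, ready for a classifier
    cap.release()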
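For the CNN- and transfer-learning-based recognition entries above, the sketch below shows one conventional way to fine-tune a pretrained image backbone on webcam sign images; the dataset path, backbone, class count, and hyperparameters are assumptions rather than the exact systems those papers describe:

    # A minimal transfer-learning sketch under assumed data and class counts;
    # "signs/train" is a hypothetical folder of labelled webcam sign images.
    import tensorflow as tf

    IMG_SIZE = (224, 224)
    NUM_CLASSES = 26  # e.g. one static sign per letter (assumption)

    train_ds = tf.keras.utils.image_dataset_from_directory(
        "signs/train", image_size=IMG_SIZE, batch_size=32)

    base = tf.keras.applications.MobileNetV2(
        input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
    base.trainable = False  # freeze the pretrained backbone

    model = tf.keras.Sequential([
        tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # MobileNetV2 expects [-1, 1]
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(train_ds, epochs=5)  # then run the trained model frame by frame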