PerSign: Personalized Bangladeshi Sign Letters Synthesis
- URL: http://arxiv.org/abs/2209.14591v1
- Date: Thu, 29 Sep 2022 07:07:34 GMT
- Title: PerSign: Personalized Bangladeshi Sign Letters Synthesis
- Authors: Mohammad Imrul Jubair, Ali Ahnaf, Tashfiq Nahiyan Khan, Ullash
Bhattacharjee, Tanjila Joti
- Abstract summary: Bangladeshi Sign Language (BdSL) is difficult for the general population to learn.
We propose PerSign, a system that can reproduce a person's image by introducing sign gestures into it.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Bangladeshi Sign Language (BdSL) - like other sign languages - is
difficult for the general population to learn, especially when it comes to
expressing letters. In this poster, we propose PerSign, a system that can
reproduce a person's image by introducing sign gestures into it. We make this
operation personalized, which means the generated image keeps the person's
initial image profile - face, skin tone, attire, background - unchanged while
altering the hand, palm, and finger positions appropriately. We use an
image-to-image translation technique and build a corresponding unique dataset
to accomplish the task. We believe the translated image can reduce the
communication gap between signers (people who use sign language) and
non-signers who have no prior knowledge of BdSL.
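The poster abstract includes no implementation; as a rough illustration of the image-to-image translation setup it names, here is a minimal PyTorch sketch of a pix2pix-style training step. TinyGenerator, the image sizes, and the L1-only objective are hypothetical stand-ins, not the authors' actual model.

import torch
import torch.nn as nn

class TinyGenerator(nn.Module):
    """Toy encoder-decoder standing in for a full U-Net generator."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.decode(self.encode(x))

gen = TinyGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=2e-4)

# Hypothetical paired batch: a neutral photo and the same person signing.
source = torch.randn(8, 3, 128, 128)
target = torch.randn(8, 3, 128, 128)

opt.zero_grad()
loss = nn.functional.l1_loss(gen(source), target)  # reconstruction term only
loss.backward()
opt.step()

A full pix2pix-style system would also add an adversarial discriminator and train on the paired BdSL dataset the paper describes.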
Related papers
- Continuous Sign Language Recognition System using Deep Learning with MediaPipe Holistic [1.9874264019909988]
Sign languages are the languages of hearing-impaired people, who communicate visually.
Approximately 300 sign languages are in use worldwide, such as American Sign Language (ASL), Chinese Sign Language (CSL), and Indian Sign Language (ISL).
arXiv Detail & Related papers (2024-11-07T08:19:39Z)
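For the entry above, a minimal sketch of the MediaPipe Holistic feature-extraction step such a continuous-recognition pipeline typically starts from, using the legacy mp.solutions Python API; the downstream sequence classifier is omitted and assumed.

import cv2
import mediapipe as mp

cap = cv2.VideoCapture(0)  # webcam; a video file path also works
with mp.solutions.holistic.Holistic(min_detection_confidence=0.5,
                                    min_tracking_confidence=0.5) as holistic:
    for _ in range(100):  # process a short burst of frames
        ok, frame = cap.read()
        if not ok:
            break
        results = holistic.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        # Pose and both hands; any group may be None on a given frame.
        for group in (results.pose_landmarks, results.left_hand_landmarks,
                      results.right_hand_landmarks):
            if group is not None:
                coords = [(lm.x, lm.y, lm.z) for lm in group.landmark]
                # coords would feed a downstream sequence model (e.g. an LSTM)
cap.release()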
- Pose-Based Sign Language Appearance Transfer [5.839722619084469]
We introduce a method for transferring the signer's appearance in sign language skeletal poses while preserving the sign content.
This approach improves pose-based rendering and sign stitching while obfuscating identity.
Our experiments show that the method reduces signer identification accuracy while slightly harming sign recognition performance.
arXiv Detail & Related papers (2024-10-17T15:33:54Z)
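The paper's actual transfer method is not reproduced here; the toy sketch below only illustrates the general idea of changing a signer's skeletal proportions (appearance) while keeping bone directions (sign content). The 6-joint skeleton and the retarget helper are hypothetical.

import numpy as np

PARENT = [-1, 0, 1, 2, 1, 4]  # toy skeleton: parent index per joint

def retarget(pose, target_lengths):
    """pose: (J, 2) joint positions; target_lengths: (J,) target bone lengths."""
    out = pose.copy()
    for j, p in enumerate(PARENT):
        if p < 0:
            continue  # root joint has no bone
        bone = pose[j] - pose[p]
        norm = np.linalg.norm(bone) + 1e-8
        # Keep the bone's direction, swap in the target signer's bone length.
        out[j] = out[p] + bone / norm * target_lengths[j]
    return out

src = np.random.rand(6, 2)     # one frame of source poses
tgt_len = np.full(6, 0.3)      # hypothetical target body proportions
print(retarget(src, tgt_len))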
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- New Capability to Look Up an ASL Sign from a Video Example [4.992008196032313]
We describe a new system, publicly shared on the Web, to enable lookup of a video of an ASL sign.
The user submits a video for analysis and is presented with the five most likely sign matches.
This video lookup is also integrated into our newest version of SignStream software to facilitate linguistic annotation of ASL video data.
arXiv Detail & Related papers (2024-07-18T15:14:35Z)
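For the video-lookup entry above, a minimal sketch of the retrieval pattern it describes: embed the query video, rank a sign dictionary by cosine similarity, and return the five most likely matches. The embedding model is assumed; random vectors stand in for it here.

import numpy as np

rng = np.random.default_rng(0)
gallery = rng.normal(size=(1000, 256))        # precomputed sign embeddings
glosses = [f"SIGN_{i}" for i in range(1000)]  # hypothetical gloss labels
query = rng.normal(size=256)                  # embedding of the user's video

def top_k(query, gallery, k=5):
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    scores = g @ q                     # cosine similarity to every sign
    best = np.argsort(-scores)[:k]
    return [(glosses[i], float(scores[i])) for i in best]

print(top_k(query, gallery))           # five best candidate signs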
- EvSign: Sign Language Recognition and Translation with Streaming Events [59.51655336911345]
Event cameras naturally perceive dynamic hand movements, providing rich manual clues for sign language tasks.
We propose an efficient transformer-based framework for event-based SLR and SLT tasks.
Our method performs favorably against existing state-of-the-art approaches with only 0.34% of the computational cost.
arXiv Detail & Related papers (2024-07-17T14:16:35Z)
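For the EvSign entry above, a sketch of a common event-camera preprocessing step: binning a raw event stream (x, y, t, polarity) into a voxel grid that a transformer could consume. The resolution and bin count are generic choices, not necessarily EvSign's pipeline.

import numpy as np

H, W, BINS = 64, 64, 5
events = np.array([[3, 7, 0.01, 1], [10, 2, 0.42, -1], [3, 7, 0.95, 1]])

def to_voxel_grid(events, h=H, w=W, bins=BINS):
    grid = np.zeros((bins, h, w), dtype=np.float32)
    t = events[:, 2]
    t_norm = (t - t.min()) / (t.max() - t.min() + 1e-9)
    b = np.minimum((t_norm * bins).astype(int), bins - 1)  # temporal bin
    x, y, p = events[:, 0].astype(int), events[:, 1].astype(int), events[:, 3]
    np.add.at(grid, (b, y, x), p)      # accumulate signed polarity
    return grid

print(to_voxel_grid(events).shape)     # (5, 64, 64)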
- A Simple Baseline for Spoken Language to Sign Language Translation with 3D Avatars [49.60328609426056]
Spoken2Sign is a system for translating spoken languages into sign languages.
We present a simple baseline consisting of three steps: creating a gloss-video dictionary, estimating a 3D sign for each sign video, and training a Spoken2Sign model.
As far as we know, we are the first to present the Spoken2Sign task in an output format of 3D signs.
arXiv Detail & Related papers (2024-01-09T18:59:49Z)
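For the Spoken2Sign baseline above, a toy sketch of its three-step data flow: a gloss-to-3D-sign dictionary, a text-to-gloss translation step, and stitching the looked-up signs. Every entry and function here is a placeholder for a learned component.

# Step 1-2 stand-in: gloss -> estimated 3D sign clip (hypothetical files).
sign_dictionary = {
    "HELLO": "hello_3d.npz",
    "YOU": "you_3d.npz",
}

def translate_to_glosses(text):
    # Placeholder for the trained text-to-gloss Spoken2Sign model.
    return [w.upper() for w in text.split() if w.upper() in sign_dictionary]

def stitch(gloss_seq):
    # Placeholder: a real system would blend 3D clips into one avatar motion.
    return [sign_dictionary[g] for g in gloss_seq]

print(stitch(translate_to_glosses("hello you")))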
- Image-based Indian Sign Language Recognition: A Practical Review using Deep Neural Networks [0.0]
This work develops a real-time word-level sign language recognition system that translates sign language to text.
The user captures images of hand gestures with a web camera.
The model is a convolutional neural network (CNN) trained to recognize these images.
arXiv Detail & Related papers (2023-04-28T09:27:04Z)
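For the ISL entry above, a generic sketch of the described setup: a small CNN classifying a captured hand image into one of N sign classes. The architecture and class count are illustrative, not the paper's exact network.

import torch
import torch.nn as nn

NUM_CLASSES = 26  # assumption: one class per letter-level sign

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, NUM_CLASSES),
)

frame = torch.randn(1, 3, 64, 64)   # stand-in for a captured hand image
probs = model(frame).softmax(dim=1)
print(int(probs.argmax()))          # predicted sign class index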
- Weakly-supervised Fingerspelling Recognition in British Sign Language Videos [85.61513254261523]
Previous fingerspelling recognition methods have not focused on British Sign Language (BSL).
In contrast to previous methods, our method only uses weak annotations from subtitles for training.
We propose a Transformer architecture adapted to this task, with a multiple-hypothesis CTC loss function to learn from alternative annotation possibilities.
arXiv Detail & Related papers (2022-11-16T15:02:36Z)
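For the fingerspelling entry above, a sketch of the "multiple hypotheses" idea using PyTorch's standard CTC loss: score each candidate annotation derived from the subtitle and train on the cheapest one. This approximates, rather than reproduces, the paper's loss.

import torch
import torch.nn as nn

T, C = 50, 30  # frames, character classes (class 0 = CTC blank)
log_probs = torch.randn(T, 1, C).log_softmax(-1)  # toy model output
ctc = nn.CTCLoss(blank=0)

# Two hypothetical spellings of a name found in the subtitle.
hypotheses = [torch.tensor([[5, 3, 9]]), torch.tensor([[5, 12, 9, 2]])]

losses = torch.stack([
    ctc(log_probs, h,
        input_lengths=torch.tensor([T]),
        target_lengths=torch.tensor([h.shape[1]]))
    for h in hypotheses
])
loss = losses.min()  # backpropagate only through the best-matching hypothesis
print(float(loss))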
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be ensembled with RGB-D-based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
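For the skeleton-based entry above, a sketch of assembling per-frame whole-body keypoint features, assuming the standard 133-point COCO-WholeBody layout (17 body, 6 feet, 68 face, 42 hand keypoints); the downstream recognizer is omitted.

import numpy as np

frame_kpts = np.random.rand(133, 3)   # (x, y, confidence) per keypoint

body  = frame_kpts[0:17]              # body joints
feet  = frame_kpts[17:23]
face  = frame_kpts[23:91]
hands = frame_kpts[91:133]            # 21 left + 21 right hand keypoints

# Keep the parts most informative for signing and flatten into one vector.
feature = np.concatenate([body, hands]).reshape(-1)
print(feature.shape)                  # (177,) -> input to a GCN/RNN classifier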
- Everybody Sign Now: Translating Spoken Language to Photo Realistic Sign Language Video [43.45785951443149]
To be truly understandable by Deaf communities, an automatic Sign Language Production system must generate a photo-realistic signer.
We propose SignGAN, the first SLP model to produce photo-realistic continuous sign language videos directly from spoken language.
A pose-conditioned human synthesis model is then introduced to generate a photo-realistic sign language video from the skeletal pose sequence.
arXiv Detail & Related papers (2020-11-19T14:31:06Z)
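For the SignGAN entry above, a toy sketch of pose-conditioned synthesis: concatenate skeletal pose heatmaps with a reference appearance image and decode a frame. Shapes and layers are illustrative choices, not SignGAN's actual architecture.

import torch
import torch.nn as nn

class PoseConditionedG(nn.Module):
    def __init__(self, pose_ch=18, img_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(pose_ch + img_ch, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, img_ch, 3, padding=1), nn.Tanh(),
        )

    def forward(self, pose_heatmaps, style_image):
        # Condition generation on both the pose and the signer's appearance.
        return self.net(torch.cat([pose_heatmaps, style_image], dim=1))

g = PoseConditionedG()
pose = torch.randn(1, 18, 128, 128)   # skeletal pose rendered as heatmaps
style = torch.randn(1, 3, 128, 128)   # reference appearance of the signer
print(g(pose, style).shape)           # one synthesized video frame (toy)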
- BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues [106.21067543021887]
We show how to use mouthing cues from signers to obtain high-quality annotations from video data.
The BSL-1K dataset is a collection of British Sign Language (BSL) signs of unprecedented scale.
arXiv Detail & Related papers (2020-07-23T16:59:01Z)