BdSLW60: A Word-Level Bangla Sign Language Dataset
- URL: http://arxiv.org/abs/2402.08635v1
- Date: Tue, 13 Feb 2024 18:02:58 GMT
- Title: BdSLW60: A Word-Level Bangla Sign Language Dataset
- Authors: Husne Ara Rubaiyeat, Hasan Mahmud, Ahsan Habib, Md. Kamrul Hasan
- Abstract summary: We create a comprehensive BdSL word-level dataset named BdSLW60 in an unconstrained and natural setting.
The dataset encompasses 60 Bangla sign words, with a significant scale of 9307 video trials provided by 18 signers under the supervision of a sign language professional.
We report the benchmarking of our BdSLW60 dataset using the Support Vector Machine (SVM) with testing accuracy up to 67.6% and an attention-based bi-LSTM with testing accuracy up to 75.1%.
- Score: 3.8631510994883254
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sign language discourse is an essential mode of daily communication for the
deaf and hard-of-hearing people. However, research on Bangla Sign Language
(BdSL) faces notable limitations, primarily due to the lack of datasets.
Recognizing word-level signs in BdSL (WL-BdSL) presents a multitude of
challenges, including the need for well-annotated datasets, capturing the
dynamic nature of sign gestures from facial or hand landmarks, developing
suitable machine learning or deep learning-based models with substantial video
samples, and so on. In this paper, we address these challenges by creating a
comprehensive BdSL word-level dataset named BdSLW60 in an unconstrained and
natural setting, permitting positional and temporal variation and allowing
signers to change hand dominance freely. The dataset encompasses 60 Bangla sign
words, with a significant scale of 9307 video trials provided by 18 signers
under the supervision of a sign language professional. The dataset was
rigorously annotated and cross-checked by 60 annotators. We also introduce a
relative quantization-based key frame encoding technique for landmark-based
sign gesture recognition. We report the benchmarking of our
BdSLW60 dataset using the Support Vector Machine (SVM) with testing accuracy up
to 67.6% and an attention-based bi-LSTM with testing accuracy up to 75.1%. The
dataset is available at https://www.kaggle.com/datasets/hasaniut/bdslw60 and
the code base is accessible from https://github.com/hasanssl/BdSLW60_Code.
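As a rough illustration of the two ideas named in the abstract (relative quantization-based key frame encoding of landmarks, and attention pooling of the kind used on top of a bi-LSTM), the sketch below shows one plausible reading, not the authors' implementation: the landmark layout, the [-1, 1] coordinate range, the bin count, and the motion-based key frame heuristic are all assumptions.

```python
import numpy as np

def encode_keyframes(landmarks, num_bins=20, num_keyframes=10):
    """Relative quantization-based key frame encoding (illustrative).

    landmarks: (T, L, 2) array of T frames with L (x, y) landmarks.
    Returns (num_keyframes, L, 2) integer bin codes.
    """
    # Make coordinates relative to a reference landmark (index 0, e.g. the
    # wrist) so the code is invariant to where the hand sits in the frame.
    rel = landmarks - landmarks[:, :1, :]
    # Quantize relative coordinates, assumed to lie in [-1, 1], into bins.
    quant = np.clip(((rel + 1.0) / 2.0 * num_bins).astype(int), 0, num_bins - 1)
    # Keep the frames with the largest inter-frame motion as key frames.
    motion = np.concatenate([[0.0], np.abs(np.diff(rel, axis=0)).sum(axis=(1, 2))])
    idx = np.sort(np.argsort(motion)[-num_keyframes:])
    return quant[idx]

def attention_pool(features, w):
    """Soft attention over time, as used on top of bi-LSTM outputs:
    score each frame, softmax the scores, return the weighted sum."""
    scores = features @ w                 # one score per frame, shape (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ features             # pooled feature vector, shape (D,)

# Toy usage: 30 frames of 21 hand landmarks.
rng = np.random.default_rng(0)
seq = rng.uniform(-1.0, 1.0, size=(30, 21, 2))
codes = encode_keyframes(seq)
pooled = attention_pool(codes.reshape(len(codes), -1).astype(float),
                        rng.standard_normal(21 * 2))
print(codes.shape, pooled.shape)  # (10, 21, 2) (42,)
```

In a full pipeline, the pooled vector (or the flattened quantized key frames) would feed a classifier such as the SVM or attention-based bi-LSTM reported in the paper.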
Related papers
- Bukva: Russian Sign Language Alphabet [75.42794328290088]
This paper investigates the recognition of the Russian fingerspelling alphabet, also known as the Russian Sign Language (RSL) dactyl.
Dactyl is a component of sign languages where distinct hand movements represent individual letters of a written language.
We provide Bukva, the first full-fledged open-source video dataset for RSL dactyl recognition.
arXiv Detail & Related papers (2024-10-11T09:59:48Z)
- SCOPE: Sign Language Contextual Processing with Embedding from LLMs [49.5629738637893]
Sign languages, used by around 70 million Deaf individuals globally, are visual languages that convey visual and contextual information.
Current methods in vision-based sign language recognition (SLR) and translation (SLT) struggle with dialogue scenes due to limited dataset diversity and the neglect of contextually relevant information.
We introduce SCOPE, a novel context-aware vision-based SLR and SLT framework.
arXiv Detail & Related papers (2024-09-02T08:56:12Z)
- BAUST Lipi: A BdSL Dataset with Deep Learning Based Bangla Sign Language Recognition [0.5497663232622964]
Sign language research is burgeoning to enhance communication with the deaf community.
One significant barrier has been the lack of a comprehensive Bangla sign language dataset.
We introduce a new BdSL dataset comprising alphabets totaling 18,000 images, with each image being 224x224 pixels in size.
We devised a hybrid Convolutional Neural Network (CNN) model, integrating multiple convolutional layers, activation functions, dropout techniques, and LSTM layers.
arXiv Detail & Related papers (2024-08-20T03:35:42Z)
- SignSpeak: Open-Source Time Series Classification for ASL Translation [0.12499537119440243]
We propose a low-cost, real-time ASL-to-speech translation glove and an exhaustive training dataset of sign language patterns.
We benchmarked this dataset with supervised learning models, such as LSTMs, GRUs and Transformers, where our best model achieved 92% accuracy.
Our open-source dataset, models and glove designs provide an accurate and efficient ASL translator while maintaining cost-effectiveness.
arXiv Detail & Related papers (2024-06-27T17:58:54Z)
- Connecting the Dots: Leveraging Spatio-Temporal Graph Neural Networks for Accurate Bangla Sign Language Recognition [2.624902795082451]
We present a new word-level Bangla Sign Language dataset - BdSL40 - consisting of 611 videos over 40 words.
This is the first study on word-level BdSL recognition, and the dataset was transcribed from Indian Sign Language (ISL) using the Bangla Sign Language Dictionary (1997).
The study highlights the significant lexical and semantic similarity between BdSL, West Bengal Sign Language, and ISL, and the lack of word-level datasets for BdSL in the literature.
arXiv Detail & Related papers (2024-01-22T18:52:51Z)
- ASL Citizen: A Community-Sourced Dataset for Advancing Isolated Sign Language Recognition [6.296362537531586]
Sign languages are used as a primary language by approximately 70 million D/deaf people world-wide.
To help tackle this problem, we release ASL Citizen, the first crowdsourced Isolated Sign Language Recognition dataset.
We propose that this dataset be used for sign language dictionary retrieval for American Sign Language (ASL), where a user demonstrates a sign to their webcam to retrieve matching signs from a dictionary.
arXiv Detail & Related papers (2023-04-12T15:52:53Z)
- ASL-Homework-RGBD Dataset: An annotated dataset of 45 fluent and non-fluent signers performing American Sign Language homeworks [32.3809065803553]
This dataset contains videos of fluent and non-fluent signers using American Sign Language (ASL).
A total of 45 fluent and non-fluent participants were asked to perform signing homework assignments.
The data is annotated to identify several aspects of signing including grammatical features and non-manual markers.
arXiv Detail & Related papers (2022-07-08T17:18:49Z)
- BBC-Oxford British Sign Language Dataset [64.32108826673183]
We introduce the BBC-Oxford British Sign Language (BOBSL) dataset, a large-scale video collection of British Sign Language (BSL).
We describe the motivation for the dataset, together with statistics and available annotations.
We conduct experiments to provide baselines for the tasks of sign recognition, sign language alignment, and sign language translation.
arXiv Detail & Related papers (2021-11-05T17:35:58Z)
- LeBenchmark: A Reproducible Framework for Assessing Self-Supervised Representation Learning from Speech [63.84741259993937]
Self-Supervised Learning (SSL) using huge unlabeled data has been successfully explored for image and natural language processing.
Recent works also investigated SSL from speech.
We propose LeBenchmark: a reproducible framework for assessing SSL from speech.
arXiv Detail & Related papers (2021-04-23T08:27:09Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be ensembled with RGB-D-based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body key points and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- BSL-1K: Scaling up co-articulated sign language recognition using mouthing cues [106.21067543021887]
We show how to use mouthing cues from signers to obtain high-quality annotations from video data.
The BSL-1K dataset is a collection of British Sign Language (BSL) signs of unprecedented scale.
arXiv Detail & Related papers (2020-07-23T16:59:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.