LSA64: An Argentinian Sign Language Dataset
- URL: http://arxiv.org/abs/2310.17429v1
- Date: Thu, 26 Oct 2023 14:37:01 GMT
- Title: LSA64: An Argentinian Sign Language Dataset
- Authors: Franco Ronchetti, Facundo Manuel Quiroga, César Estrebou, Laura
Lanzarini, Alejandro Rosete
- Abstract summary: This paper presents a dataset of 64 signs from the Argentinian Sign Language (LSA).
The dataset, called LSA64, contains 3200 videos of 64 different LSA signs recorded by 10 subjects.
We also present a pre-processed version of the dataset, from which we computed statistics of movement, position and handshape of the signs.
- Score: 42.27617228521691
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic sign language recognition is a research area that encompasses
human-computer interaction, computer vision and machine learning. Robust
automatic recognition of sign language could assist in the translation process
and the integration of hearing-impaired people, as well as the teaching of sign
language to the hearing population. Sign languages differ significantly in
different countries and even regions, and their syntax and semantics are
different as well from those of written languages. While the techniques for
automatic sign language recognition are mostly the same for different
languages, training a recognition system for a new language requires having an
entire dataset for that language. This paper presents a dataset of 64 signs
from the Argentinian Sign Language (LSA). The dataset, called LSA64, contains
3200 videos of 64 different LSA signs recorded by 10 subjects, and is a first
step towards building a comprehensive research-level dataset of Argentinian
signs, specifically tailored to sign language recognition or other machine
learning tasks. The subjects that performed the signs wore colored gloves to
ease the hand tracking and segmentation steps, allowing experiments on the
dataset to focus specifically on the recognition of signs. We also present a
pre-processed version of the dataset, from which we computed statistics of
movement, position and handshape of the signs.
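The abstract notes that the colored gloves ease hand tracking and segmentation, and that movement and position statistics were computed from a pre-processed version of the dataset. The authors' actual pipeline is not described here; the following is a minimal sketch of the idea under stated assumptions: threshold pixels matching the glove color, take their centroid as the hand position in each frame, and summarize movement as the mean per-frame displacement of that centroid. The frame representation (nested lists of RGB tuples) and the `is_red` predicate are illustrative stand-ins, not part of the dataset.

```python
# Minimal sketch (not the authors' actual pipeline): per-frame hand position
# via colored-glove thresholding, then a simple movement statistic.

def glove_centroid(frame, is_glove_color):
    """Centroid (row, col) of pixels matching the glove color, or None."""
    rows, cols, n = 0.0, 0.0, 0
    for r, row in enumerate(frame):
        for c, px in enumerate(row):
            if is_glove_color(px):
                rows += r
                cols += c
                n += 1
    return (rows / n, cols / n) if n else None

def movement_stats(centroids):
    """Mean per-frame Euclidean displacement of the hand centroid."""
    steps = [
        ((r2 - r1) ** 2 + (c2 - c1) ** 2) ** 0.5
        for (r1, c1), (r2, c2) in zip(centroids, centroids[1:])
    ]
    return sum(steps) / len(steps) if steps else 0.0

# Toy example: a "red glove" pixel is (255, 0, 0); the 2-pixel blob moves
# one column to the right each frame.
def is_red(px):
    return px == (255, 0, 0)

def frame_with_blob(col):
    f = [[(0, 0, 0)] * 8 for _ in range(8)]
    f[3][col] = (255, 0, 0)
    f[4][col] = (255, 0, 0)
    return f

frames = [frame_with_blob(c) for c in (2, 3, 4)]
centroids = [glove_centroid(f, is_red) for f in frames]
print(centroids)                  # [(3.5, 2.0), (3.5, 3.0), (3.5, 4.0)]
print(movement_stats(centroids))  # 1.0
```

A real implementation would operate on decoded video frames and a color-space threshold (e.g. in HSV) rather than exact RGB matches, but the statistics reduce to the same centroid-and-displacement computation.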
Related papers
- SCOPE: Sign Language Contextual Processing with Embedding from LLMs [49.5629738637893]
Sign languages, used by around 70 million Deaf individuals globally, are visual languages that convey both visual and contextual information.
Current methods in vision-based sign language recognition (SLR) and translation (SLT) struggle with dialogue scenes due to limited dataset diversity and the neglect of contextually relevant information.
We introduce SCOPE, a novel context-aware vision-based SLR and SLT framework.
arXiv Detail & Related papers (2024-09-02T08:56:12Z)
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- Sign Language Recognition without frame-sequencing constraints: A proof of concept on the Argentinian Sign Language [42.27617228521691]
This paper presents a general probabilistic model for sign classification that combines sub-classifiers based on different types of features.
The proposed model achieved an accuracy rate of 97% on an Argentinian Sign Language dataset.
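The summary above says only that the model combines sub-classifiers built on different feature types; its exact form is not given here. As a hedged illustration, one common way to combine per-feature classifiers is naive-Bayes-style late fusion: multiply each sub-classifier's class posteriors (assuming the feature types are conditionally independent) and renormalize. The two sub-classifiers and their probabilities below are hypothetical.

```python
# Hedged sketch of late fusion over sub-classifiers, each trained on a
# different feature type (e.g., handshape vs. movement). Not the paper's
# exact model: posteriors are multiplied under an independence assumption.

def fuse(posteriors):
    """posteriors: list of dicts {sign: P(sign | feature_i)}.
    Returns the normalized product over all sub-classifiers."""
    classes = posteriors[0].keys()
    scores = {c: 1.0 for c in classes}
    for p in posteriors:
        for c in classes:
            scores[c] *= p[c]
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

# Two hypothetical sub-classifiers disagreeing on the same video clip.
handshape = {"sign_a": 0.7, "sign_b": 0.3}
movement = {"sign_a": 0.4, "sign_b": 0.6}

fused = fuse([handshape, movement])
best = max(fused, key=fused.get)
print(best)  # sign_a  (0.7 * 0.4 = 0.28 beats 0.3 * 0.6 = 0.18)
```

The fused score rewards signs that are plausible under every feature type, so a class that any single sub-classifier rules out is strongly penalized.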
arXiv Detail & Related papers (2023-10-26T14:47:11Z)
- Handshape recognition for Argentinian Sign Language using ProbSom [0.3124884279860061]
This paper offers two main contributions: first, the creation of a database of handshapes for the Argentinian Sign Language (LSA), a topic that has barely been addressed so far; second, a technique for image processing, descriptor extraction and subsequent handshape classification using a supervised adaptation of self-organizing maps called ProbSom.
The database that was built contains 800 images with 16 LSA handshapes, and is a first step towards building a comprehensive database of Argentinian signs.
arXiv Detail & Related papers (2023-10-26T14:32:44Z)
- Slovo: Russian Sign Language Dataset [83.93252084624997]
This paper presents the Russian Sign Language (RSL) video dataset Slovo, produced using crowdsourcing platforms.
The dataset contains 20,000 FullHD recordings, divided into 1,000 classes of isolated RSL gestures performed by 194 signers.
arXiv Detail & Related papers (2023-05-23T21:00:42Z)
- ASL Citizen: A Community-Sourced Dataset for Advancing Isolated Sign Language Recognition [6.296362537531586]
Sign languages are used as a primary language by approximately 70 million D/deaf people world-wide.
To help address the scarcity of sign language data, we release ASL Citizen, the first crowdsourced Isolated Sign Language Recognition dataset.
We propose that this dataset be used for sign language dictionary retrieval for American Sign Language (ASL), where a user demonstrates a sign to their webcam to retrieve matching signs from a dictionary.
arXiv Detail & Related papers (2023-04-12T15:52:53Z)
- LSA-T: The first continuous Argentinian Sign Language dataset for Sign Language Translation [52.87578398308052]
Sign language translation (SLT) is an active field of study that encompasses human-computer interaction, computer vision, natural language processing and machine learning.
This paper presents the first continuous Argentinian Sign Language (LSA) dataset.
It contains 14,880 sentence-level videos of LSA extracted from the CN Sordos YouTube channel, with labels and keypoint annotations for each signer.
arXiv Detail & Related papers (2022-11-14T14:46:44Z)
- ASL-Homework-RGBD Dataset: An annotated dataset of 45 fluent and non-fluent signers performing American Sign Language homeworks [32.3809065803553]
This dataset contains videos of fluent and non-fluent signers using American Sign Language (ASL).
A total of 45 fluent and non-fluent participants were asked to perform signing homework assignments.
The data is annotated to identify several aspects of signing including grammatical features and non-manual markers.
arXiv Detail & Related papers (2022-07-08T17:18:49Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.