Recognizing American Sign Language Nonmanual Signal Grammar Errors in
Continuous Videos
- URL: http://arxiv.org/abs/2005.00253v1
- Date: Fri, 1 May 2020 07:25:07 GMT
- Title: Recognizing American Sign Language Nonmanual Signal Grammar Errors in
Continuous Videos
- Authors: Elahe Vahdani, Longlong Jing, Yingli Tian, Matt Huenerfauth
- Abstract summary: This paper introduces a near real-time system to recognize grammatical errors in continuous signing videos.
Our system automatically recognizes whether a student's performance of ASL sentences contains grammatical errors.
- Score: 38.14850006590712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As part of the development of an educational tool that can help students
achieve fluency in American Sign Language (ASL) through independent and
interactive practice with immediate feedback, this paper introduces a near
real-time system to recognize grammatical errors in continuous signing videos
without necessarily identifying the entire sequence of signs. Our system
automatically recognizes whether a student's performance of ASL sentences contains
grammatical errors. We first recognize the ASL grammatical elements
including both manual gestures and nonmanual signals independently from
multiple modalities (i.e. hand gestures, facial expressions, and head
movements) by 3D-ResNet networks. Then the temporal boundaries of grammatical
elements from different modalities are examined to detect ASL grammatical
mistakes by using a sliding window-based approach. We have collected a dataset
of continuous sign language, ASL-HW-RGBD, covering different aspects of ASL
grammar for training and testing. Our system is able to recognize grammatical
elements on ASL-HW-RGBD from manual gestures, facial expressions, and head
movements and successfully detect 8 ASL grammatical mistakes.
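A minimal sketch of the two-stage pipeline described above may help make it concrete: one 3D-ResNet per modality classifies short clips into grammatical elements, and a sliding window then compares temporal boundaries across modalities to flag missing nonmanual signals. This is not the authors' implementation: the modality names, class counts, `required` constraint map, window length, and overlap threshold are assumptions for illustration, and torchvision's `r3d_18` stands in for the paper's 3D-ResNet backbones.

```python
# Hypothetical sketch of the two-stage pipeline described in the abstract.
# Stage 1: one 3D-ResNet per modality classifies short clips into grammatical elements.
# Stage 2: a sliding window compares temporal boundaries across modalities and flags
#          manual signs whose required nonmanual signal is missing or misaligned.
from dataclasses import dataclass
from typing import Dict, List, Tuple

import torch
import torch.nn as nn
from torchvision.models.video import r3d_18


class ModalityClassifier(nn.Module):
    """3D-ResNet-18 clip classifier for a single modality (class counts are assumed)."""

    def __init__(self, num_classes: int):
        super().__init__()
        self.backbone = r3d_18(weights=None)  # 3D-ResNet backbone
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 3, frames, height, width)
        return self.backbone(clip)


# Illustrative label spaces; the real grammatical-element inventory comes from ASL-HW-RGBD.
MODALITIES = {"hand_gestures": 10, "facial_expressions": 6, "head_movements": 5}
models: Dict[str, ModalityClassifier] = {m: ModalityClassifier(n) for m, n in MODALITIES.items()}


@dataclass(frozen=True)
class Detection:
    label: str
    start: float  # seconds
    end: float


def overlap(a: Detection, b: Detection) -> float:
    """Temporal intersection of two detections, in seconds."""
    return max(0.0, min(a.end, b.end) - max(a.start, b.start))


def find_boundary_errors(
    manual: List[Detection],
    nonmanual: List[Detection],
    required: Dict[str, str],
    window: float = 1.0,
    stride: float = 0.5,
    min_overlap: float = 0.3,
) -> List[Tuple[float, float, str]]:
    """Slide a window over the video; flag manual signs that require a nonmanual signal
    (e.g. a wh-question sign requiring lowered eyebrows) but have no sufficiently
    overlapping detection of that signal. All thresholds here are assumptions."""
    end_time = max((d.end for d in manual + nonmanual), default=0.0)
    errors = set()
    t = 0.0
    while t < end_time:
        win = Detection("window", t, t + window)
        for sign in (d for d in manual if overlap(d, win) > 0 and d.label in required):
            needed = required[sign.label]
            matched = any(
                d.label == needed and overlap(d, sign) >= min_overlap for d in nonmanual
            )
            if not matched:
                errors.add((sign.start, sign.end, f"missing '{needed}' during '{sign.label}'"))
        t += stride
    return sorted(errors)
```

Here the `required` map plays the role of the grammar rules linking manual elements to the nonmanual signals that must accompany them; each of the paper's eight error types would correspond to one such constraint.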
Related papers
- Enhanced Sign Language Translation between American Sign Language (ASL) and Indian Sign Language (ISL) Using LLMs [0.2678472239880052]
We present research that aims to bridge users of American Sign Language (ASL), spoken language, and Indian Sign Language (ISL).
This framework addresses key challenges such as automatically handling gesture variability and overcoming the linguistic differences between ASL and ISL.
arXiv Detail & Related papers (2024-11-19T17:45:12Z)
- The American Sign Language Knowledge Graph: Infusing ASL Models with Linguistic Knowledge [6.481946043182915]
We introduce the American Sign Language Knowledge Graph (ASLKG), compiled from twelve sources of expert linguistic knowledge.
We use the ASLKG to train neuro-symbolic models for 3 ASL understanding tasks, achieving accuracies of 91% on ISR, 14% for predicting the semantic features of unseen signs, and 36% for classifying the topic of YouTube-ASL videos.
arXiv Detail & Related papers (2024-11-06T00:16:16Z)
- FLEURS-ASL: Including American Sign Language in Massively Multilingual Multitask Evaluation [0.9790236766474201]
We introduce FLEURS-ASL, an extension of the multiway parallel benchmarks FLORES (for text) and FLEURS (for speech).
FLEURS-ASL can be used to evaluate a variety of tasks between ASL and 200 other languages as text, or 102 languages as speech.
We provide baselines for tasks from ASL to English text using a unified modeling approach that incorporates timestamp tokens and previous text tokens in a 34-second context window.
We also use FLEURS-ASL to show that multimodal frontier models have virtually no understanding of ASL, underscoring the importance of including sign languages in standard evaluation suites.
arXiv Detail & Related papers (2024-08-24T13:59:41Z)
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- Weakly-supervised Fingerspelling Recognition in British Sign Language Videos [85.61513254261523]
Previous fingerspelling recognition methods have not focused on British Sign Language (BSL).
In contrast to previous methods, our method only uses weak annotations from subtitles for training.
We propose a Transformer architecture adapted to this task, with a multiple-hypothesis CTC loss function to learn from alternative annotation possibilities (see the loss sketch after this list).
arXiv Detail & Related papers (2022-11-16T15:02:36Z)
- ASL-Homework-RGBD Dataset: An annotated dataset of 45 fluent and non-fluent signers performing American Sign Language homeworks [32.3809065803553]
This dataset contains videos of fluent and non-fluent signers using American Sign Language (ASL).
A total of 45 fluent and non-fluent participants were asked to perform signing homework assignments.
The data is annotated to identify several aspects of signing including grammatical features and non-manual markers.
arXiv Detail & Related papers (2022-07-08T17:18:49Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Watch, read and lookup: learning to spot signs from multiple supervisors [99.50956498009094]
Given a video of an isolated sign, our task is to identify whether and where it has been signed in a continuous, co-articulated sign language video.
We train a model using multiple types of available supervision by: (1) watching existing sparsely labelled footage; (2) reading associated subtitles which provide additional weak-supervision; and (3) looking up words in visual sign language dictionaries.
These three tasks are integrated into a unified learning framework using the principles of Noise Contrastive Estimation and Multiple Instance Learning.
arXiv Detail & Related papers (2020-10-08T14:12:56Z)
- Transferring Cross-domain Knowledge for Video Sign Language Recognition [103.9216648495958]
Word-level sign language recognition (WSLR) is a fundamental task in sign language interpretation.
We propose a novel method that learns domain-invariant visual concepts and improves WSLR models by transferring knowledge from subtitled news signing to them.
arXiv Detail & Related papers (2020-03-08T03:05:21Z)
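The weakly-supervised fingerspelling entry above mentions a Transformer trained with a multiple-hypothesis CTC loss over alternative subtitle-derived annotations. The sketch below shows one plausible reading of such a loss (taking the best-scoring, i.e. minimum-loss, hypothesis per sample); it is an illustration under that assumption, not the loss or architecture from that paper, and the function name `multi_hypothesis_ctc` and all tensor shapes are hypothetical.

```python
# Hypothetical sketch: CTC loss over several candidate transcriptions per sample,
# keeping only the best-scoring (lowest-loss) hypothesis. This is one plausible
# reading of a "multiple-hypothesis CTC loss", not the referenced paper's exact loss.
from typing import List

import torch
import torch.nn as nn

ctc = nn.CTCLoss(blank=0, reduction="none", zero_infinity=True)


def multi_hypothesis_ctc(
    log_probs: torch.Tensor,                # (T, N, C) log-softmax outputs of the recognizer
    hypotheses: List[List[torch.Tensor]],   # per sample: list of candidate label sequences
    input_lengths: torch.Tensor,            # (N,) valid frame counts
) -> torch.Tensor:
    """Mean over samples of the minimum CTC loss across each sample's hypotheses."""
    per_sample = []
    for n, candidates in enumerate(hypotheses):
        losses = []
        for target in candidates:
            loss = ctc(
                log_probs[:, n : n + 1, :],         # (T, 1, C)
                target.unsqueeze(0),                # (1, S)
                input_lengths[n : n + 1],           # (1,)
                torch.tensor([target.numel()]),     # (1,)
            )
            losses.append(loss)
        per_sample.append(torch.stack(losses).min())  # keep the best hypothesis
    return torch.stack(per_sample).mean()


if __name__ == "__main__":
    # Dummy example: 50 frames, 2 samples, 30 output classes (class 0 is the CTC blank).
    lp = torch.randn(50, 2, 30).log_softmax(dim=-1)
    hyps = [
        [torch.tensor([5, 7, 7, 9]), torch.tensor([5, 7, 9])],  # two candidate spellings
        [torch.tensor([3, 3, 4])],                              # a single candidate
    ]
    print(multi_hypothesis_ctc(lp, hyps, torch.tensor([50, 50])))
```

Taking the minimum over hypotheses lets training proceed when the subtitle-derived annotation is ambiguous; a softer alternative would aggregate the hypotheses with a log-sum-exp instead of a hard minimum.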