Fingerspelling within Sign Language Translation
- URL: http://arxiv.org/abs/2408.07065v1
- Date: Tue, 13 Aug 2024 17:57:14 GMT
- Title: Fingerspelling within Sign Language Translation
- Authors: Garrett Tanzer
- Abstract summary: Fingerspelling poses challenges for sign language processing due to its high-frequency motion and use for open-vocabulary terms.
We evaluate how well sign language translation models understand fingerspelling in the context of entire sentences.
- Score: 0.9790236766474201
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Fingerspelling poses challenges for sign language processing due to its high-frequency motion and use for open-vocabulary terms. While prior work has studied fingerspelling recognition, there has been little attention to evaluating how well sign language translation models understand fingerspelling in the context of entire sentences -- and improving this capability. We manually annotate instances of fingerspelling within FLEURS-ASL and use them to evaluate the effect of two simple measures to improve fingerspelling recognition within American Sign Language to English translation: 1) use a model family (ByT5) with character- rather than subword-level tokenization, and 2) mix fingerspelling recognition data into the translation training mixture. We find that 1) substantially improves understanding of fingerspelling (and therefore translation quality overall), but the effect of 2) is mixed.
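To make measure 1 concrete, the sketch below contrasts subword and byte-level tokenization. It is illustrative only (not the paper's code) and assumes the Hugging Face transformers library with its public t5-small and google/byt5-small checkpoints; the proper noun is an arbitrary stand-in for the kind of open-vocabulary term that gets fingerspelled.
```python
# Illustrative sketch (not the paper's code): contrast subword vs. byte-level
# tokenization of a proper noun of the kind that is typically fingerspelled.
# Assumes the Hugging Face `transformers` library and its public checkpoints.
from transformers import AutoTokenizer

subword_tok = AutoTokenizer.from_pretrained("t5-small")        # subword (SentencePiece)
byte_tok = AutoTokenizer.from_pretrained("google/byt5-small")  # byte-level (ByT5)

name = "Kowalczyk"  # arbitrary open-vocabulary term

# A subword tokenizer may fuse a rare name into a few opaque pieces,
# hiding its letter-by-letter structure from the model.
print(subword_tok.tokenize(name))  # exact split depends on the vocabulary

# ByT5 tokenizes UTF-8 bytes, so each letter becomes its own token,
# mirroring how fingerspelling produces one handshape per letter.
ids = byte_tok(name)["input_ids"]
print(byte_tok.convert_ids_to_tokens(ids))  # one token per byte, plus </s>
```
Because ByT5 reads and writes individual bytes, the decoder can spell out a novel term letter by letter, matching the one-handshape-per-letter structure of fingerspelling, whereas a subword decoder must produce the term from a few fused vocabulary pieces. Measure 2 requires no architectural change: fingerspelling recognition examples are simply mixed into the translation training data.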
Related papers
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- Crossing the Threshold: Idiomatic Machine Translation through Retrieval Augmentation and Loss Weighting [66.02718577386426]
We provide a simple characterization of idiomatic translation and related issues.
We conduct a synthetic experiment revealing a tipping point at which transformer-based machine translation models correctly default to idiomatic translations.
To improve translation of natural idioms, we introduce two straightforward yet effective techniques.
arXiv Detail & Related papers (2023-10-10T23:47:25Z)
- Toward American Sign Language Processing in the Real World: Data, Tasks, and Methods [15.77894358993113]
I study automatic sign language processing in the wild, using signing videos collected from the Internet.
I present three new large-scale ASL datasets in the wild: ChicagoFSWild, ChicagoFSWild+, and OpenASL.
I propose two tasks for building real-world fingerspelling-based applications: fingerspelling detection and search.
arXiv Detail & Related papers (2023-08-23T20:38:19Z)
- On the Importance of Signer Overlap for Sign Language Detection [65.26091369630547]
We argue that the current benchmark data sets for sign language detection estimate overly positive results that do not generalize well.
We quantify this with a detailed analysis of the effect of signer overlap on current sign detection benchmark data sets.
We propose new data set partitions that are free of overlap and allow for more realistic performance assessment.
arXiv Detail & Related papers (2023-03-19T22:15:05Z)
- Weakly-supervised Fingerspelling Recognition in British Sign Language Videos [85.61513254261523]
Previous fingerspelling recognition methods have not focused on British Sign Language (BSL).
In contrast to previous methods, our method only uses weak annotations from subtitles for training.
We propose a Transformer architecture adapted to this task, with a multiple-hypothesis CTC loss function to learn from alternative annotation possibilities (a sketch of one such loss appears after this list).
arXiv Detail & Related papers (2022-11-16T15:02:36Z)
- Searching for fingerspelled content in American Sign Language [32.89182994277633]
Natural language processing for sign language video is crucial for making artificial intelligence technologies accessible to deaf individuals.
In this paper, we address the problem of searching for fingerspelled keywords or key phrases in raw sign language videos.
We propose an end-to-end model for this task, FSS-Net, that jointly detects fingerspelling and matches it to a text sequence.
arXiv Detail & Related papers (2022-03-24T18:36:22Z)
- A Fine-Grained Visual Attention Approach for Fingerspelling Recognition in the Wild [17.8181080354116]
Automatic recognition of fingerspelling can help resolve communication barriers when interacting with deaf people.
The main challenges in fingerspelling recognition are ambiguity in the gestures and strong articulation of the hands.
We propose a fine-grained visual attention mechanism using the Transformer model for sequence-to-sequence prediction on an in-the-wild dataset.
arXiv Detail & Related papers (2021-05-17T06:15:35Z)
- Fingerspelling Detection in American Sign Language [32.79935314131377]
We consider the task of fingerspelling detection in raw, untrimmed sign language videos.
This is an important step towards building real-world fingerspelling recognition systems.
We propose a benchmark and a suite of evaluation metrics, some of which reflect the effect of detection on the downstream fingerspelling recognition task.
arXiv Detail & Related papers (2021-04-03T02:11:09Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body key points and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation [59.38247587308604]
We introduce a novel transformer-based architecture that jointly learns Continuous Sign Language Recognition and Translation.
We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T dataset.
Our translation networks outperform both sign video to spoken language and gloss to spoken language translation models.
arXiv Detail & Related papers (2020-03-30T21:35:09Z)
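Returning to the weakly-supervised BSL entry above: that paper learns from alternative subtitle-derived annotations via a multiple-hypothesis CTC loss. The exact formulation is in the paper itself; the PyTorch sketch below shows one plausible variant that takes the minimum standard CTC loss across candidate transcriptions (the function name and the min reduction are illustrative assumptions, not the authors' code).
```python
# Hedged sketch: a "multiple-hypothesis" CTC loss, assuming each training clip
# comes with several candidate transcriptions derived from weak subtitle
# annotations. We score every hypothesis with standard CTC and keep the best.
import torch
import torch.nn.functional as F

def multi_hypothesis_ctc(log_probs, hypotheses, blank=0):
    """Score one clip against several candidate transcriptions.

    log_probs: (T, 1, C) log-softmax outputs over the character vocabulary.
    hypotheses: list of 1-D LongTensors, the candidate label sequences.
    """
    T = log_probs.size(0)
    losses = []
    for target in hypotheses:
        losses.append(F.ctc_loss(
            log_probs,
            target.unsqueeze(0),                     # (1, S)
            input_lengths=torch.tensor([T]),
            target_lengths=torch.tensor([len(target)]),
            blank=blank,
        ))
    # Commit to the most plausible annotation: min CTC loss over hypotheses.
    return torch.stack(losses).min()

# Toy usage: 20 frames, 28-symbol vocabulary (blank + 26 letters + space),
# two candidate spellings taken from noisy subtitles.
log_probs = torch.randn(20, 1, 28).log_softmax(-1)
hyps = [torch.tensor([11, 5, 23, 9, 19]), torch.tensor([11, 5, 23, 5, 19])]
print(multi_hypothesis_ctc(log_probs, hyps))
```
Swapping the min for a negative log-sum-exp of the per-hypothesis log-likelihoods would marginalize over the candidates instead of committing to the single best one.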
This list is automatically generated from the titles and abstracts of the papers on this site.