Toward American Sign Language Processing in the Real World: Data, Tasks, and Methods
- URL: http://arxiv.org/abs/2308.12419v1
- Date: Wed, 23 Aug 2023 20:38:19 GMT
- Title: Toward American Sign Language Processing in the Real World: Data, Tasks, and Methods
- Authors: Bowen Shi
- Abstract summary: I study automatic sign language processing in the wild, using signing videos collected from the Internet.
I present three new large-scale ASL datasets in the wild: ChicagoFSWild, ChicagoFSWild+, and OpenASL.
I propose two tasks for building real-world fingerspelling-based applications: fingerspelling detection and search.
- Score: 15.77894358993113
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sign language, which conveys meaning through gestures, is the chief means of
communication among deaf people. Recognizing sign language in natural settings
presents significant challenges due to factors such as lighting, background
clutter, and variations in signer characteristics. In this thesis, I study
automatic sign language processing in the wild, using signing videos collected
from the Internet. This thesis contributes new datasets, tasks, and methods.
Most chapters of this thesis address tasks related to fingerspelling, an
important component of sign language that has nevertheless received little
attention in prior work. I present three new large-scale ASL datasets in the wild:
ChicagoFSWild, ChicagoFSWild+, and OpenASL. Using ChicagoFSWild and
ChicagoFSWild+, I address fingerspelling recognition, which consists of
transcribing fingerspelling sequences into text. I propose an end-to-end
approach based on iterative attention that allows recognition from a raw video
without explicit hand detection. I further show that using a Conformer-based
network jointly modeling handshape and mouthing can bring performance close to
that of humans. Next, I propose two tasks for building real-world
fingerspelling-based applications: fingerspelling detection and search. For
fingerspelling detection, I introduce a suite of evaluation metrics and a new
detection model via multi-task training. To address the problem of searching
for fingerspelled keywords in raw sign language videos, I propose a novel
method that jointly localizes and matches fingerspelling segments to text.
Finally, I describe a benchmark for large-vocabulary open-domain sign
language translation based on OpenASL. To address the challenges of sign
language translation in realistic settings, I propose a set of techniques
including sign search as a pretext task for pre-training and fusion of mouthing
and handshape features.
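As a toy illustration of the search idea described above (jointly localizing and matching fingerspelling segments to text), one can caricature it as sliding a fixed window over per-frame visual embeddings and scoring each window against a text-query embedding. This is a hypothetical sketch, not the thesis's actual model; the embeddings, window size, and threshold are all invented for illustration:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def search_fingerspelling(frame_embs, text_emb, win=3, threshold=0.5):
    """Slide a window of `win` frames over per-frame embeddings, mean-pool
    each window, and keep (start, end, score) spans whose pooled embedding
    is similar enough to the text-query embedding."""
    hits = []
    for start in range(len(frame_embs) - win + 1):
        window = frame_embs[start:start + win]
        pooled = [sum(col) / win for col in zip(*window)]
        score = cosine(pooled, text_emb)
        if score >= threshold:
            hits.append((start, start + win, score))
    return hits

# Toy example: frames 2-4 carry the queried content (embedding [0, 1]).
frames = [[1, 0], [1, 0], [0, 1], [0, 1], [0, 1], [1, 0]]
spans = search_fingerspelling(frames, [0, 1], win=3, threshold=0.5)
```

A real system would learn the video and text encoders jointly, so that matching segments and queries land close in the shared embedding space; the sketch only shows the localize-then-match scoring loop.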
Related papers
- Scaling up Multimodal Pre-training for Sign Language Understanding [96.17753464544604]
Sign language serves as the primary means of communication for the deaf-mute community.
To facilitate communication between the deaf-mute and hearing people, a series of sign language understanding (SLU) tasks have been studied.
These tasks investigate sign language topics from diverse perspectives and raise challenges in learning effective representation of sign language videos.
arXiv Detail & Related papers (2024-08-16T06:04:25Z)
- EvSign: Sign Language Recognition and Translation with Streaming Events [59.51655336911345]
Event cameras can naturally perceive dynamic hand movements, providing rich manual cues for sign language tasks.
We propose an efficient Transformer-based framework for event-based SLR and SLT tasks.
Our method performs favorably against existing state-of-the-art approaches with only 0.34% of the computational cost.
arXiv Detail & Related papers (2024-07-17T14:16:35Z)
- SignBERT+: Hand-model-aware Self-supervised Pre-training for Sign Language Understanding [132.78015553111234]
Hand gestures play a crucial role in the expression of sign language.
Current deep-learning-based methods for sign language understanding (SLU) are prone to over-fitting due to insufficient sign language data resources.
We propose SignBERT+, the first self-supervised pre-trainable framework that incorporates a model-aware hand prior.
arXiv Detail & Related papers (2023-05-08T17:16:38Z)
- Weakly-supervised Fingerspelling Recognition in British Sign Language Videos [85.61513254261523]
Previous fingerspelling recognition methods have not focused on British Sign Language (BSL).
In contrast to previous methods, ours uses only weak annotations from subtitles for training.
We propose a Transformer architecture adapted to this task, with a multiple-hypothesis CTC loss function to learn from alternative annotation possibilities.
arXiv Detail & Related papers (2022-11-16T15:02:36Z)
- Searching for fingerspelled content in American Sign Language [32.89182994277633]
Natural language processing for sign language video is crucial for making artificial intelligence technologies accessible to deaf individuals.
In this paper, we address the problem of searching for fingerspelled keywords or key phrases in raw sign language videos.
We propose an end-to-end model for this task, FSS-Net, that jointly detects fingerspelling and matches it to a text sequence.
arXiv Detail & Related papers (2022-03-24T18:36:22Z)
- Sign Language Video Retrieval with Free-Form Textual Queries [19.29003565494735]
We introduce the task of sign language retrieval with free-form textual queries.
The objective is to find the signing video in the collection that best matches the written query.
We propose SPOT-ALIGN, a framework for interleaving iterative rounds of sign spotting and feature alignment to expand the scope and scale of available training data.
arXiv Detail & Related papers (2022-01-07T15:22:18Z)
- A Fine-Grained Visual Attention Approach for Fingerspelling Recognition in the Wild [17.8181080354116]
Automatic recognition of fingerspelling can help resolve communication barriers when interacting with deaf people.
The main challenges in fingerspelling recognition are the ambiguity of gestures and the strong articulation of the hands.
We propose a fine-grained visual attention mechanism using the Transformer model for sequence-to-sequence prediction on an in-the-wild dataset.
arXiv Detail & Related papers (2021-05-17T06:15:35Z)
- Fingerspelling Detection in American Sign Language [32.79935314131377]
We consider the task of fingerspelling detection in raw, untrimmed sign language videos.
This is an important step towards building real-world fingerspelling recognition systems.
We propose a benchmark and a suite of evaluation metrics, some of which reflect the effect of detection on the downstream fingerspelling recognition task.
arXiv Detail & Related papers (2021-04-03T02:11:09Z)
- Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech-impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be ensembled with RGB-D-based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation (Jin et al., 2020), we propose recognizing sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z)
- Watch, read and lookup: learning to spot signs from multiple supervisors [99.50956498009094]
Given a video of an isolated sign, our task is to identify whether and where it has been signed in a continuous, co-articulated sign language video.
We train a model using multiple types of available supervision: (1) watching existing sparsely labelled footage; (2) reading associated subtitles, which provide additional weak supervision; and (3) looking up words in visual sign language dictionaries.
These three tasks are integrated into a unified learning framework using the principles of Noise Contrastive Estimation and Multiple Instance Learning.
arXiv Detail & Related papers (2020-10-08T14:12:56Z)
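As a toy sketch of the Noise Contrastive Estimation principle mentioned above (not the paper's model; the embeddings and temperature are hypothetical), the idea is that a query embedding should score higher against its matching sign segment than against sampled negatives:

```python
import math

def nce_score(query, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: negative log-probability that the positive
    embedding outranks the negatives under a softmax over dot-product
    similarities scaled by `temperature`."""
    def dot(u, v):
        return sum(a * b for a, b in zip(u, v))
    logits = [dot(query, positive) / temperature] + [
        dot(query, n) / temperature for n in negatives
    ]
    m = max(logits)  # subtract the max for numerical stability
    denom = sum(math.exp(l - m) for l in logits)
    return -(logits[0] - m - math.log(denom))
```

In the multiple-instance-learning setting described in the abstract, the "positive" would itself be chosen from a bag of candidate segments; the sketch keeps a single positive for simplicity.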
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.