SLGTformer: An Attention-Based Approach to Sign Language Recognition
- URL: http://arxiv.org/abs/2212.10746v2
- Date: Fri, 23 Dec 2022 02:30:57 GMT
- Title: SLGTformer: An Attention-Based Approach to Sign Language Recognition
- Authors: Neil Song, Yu Xiang
- Abstract summary: Sign language is difficult to learn and represents a significant barrier for those who are hard of hearing or unable to speak.
We propose a novel, attention-based approach to Sign Language Recognition built upon decoupled graph and temporal self-attention.
We demonstrate the effectiveness of SLGTformer on the Word-Level American Sign Language (WLASL) dataset.
- Score: 19.786769414376323
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sign language is the preferred method of communication of deaf or mute
people, but similar to any language, it is difficult to learn and represents a
significant barrier for those who are hard of hearing or unable to speak. A
person's entire frontal appearance dictates and conveys specific meaning.
However, this frontal appearance can be quantified as a temporal sequence of
human body pose, leading to Sign Language Recognition through the learning of
spatiotemporal dynamics of skeleton keypoints. We propose a novel,
attention-based approach to Sign Language Recognition exclusively built upon
decoupled graph and temporal self-attention: the Sign Language Graph Time
Transformer (SLGTformer). SLGTformer first deconstructs spatiotemporal pose
sequences separately into spatial graphs and temporal windows. SLGTformer then
leverages novel Learnable Graph Relative Positional Encodings (LGRPE) to guide
spatial self-attention with the graph neighborhood context of the human
skeleton. By modeling the temporal dimension as intra- and inter-window
dynamics, we introduce Temporal Twin Self-Attention (TTSA) as the combination
of locally-grouped temporal attention (LTA) and global sub-sampled temporal
attention (GSTA). We demonstrate the effectiveness of SLGTformer on the
Word-Level American Sign Language (WLASL) dataset, achieving state-of-the-art
performance with an ensemble-free approach on the keypoint modality. The code
is available at https://github.com/neilsong/slt
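
Below is a minimal, illustrative sketch of the two decoupled attention components described in the abstract; it is not the authors' implementation (see the linked repository for the official code). A learnable bias indexed by graph hop distance stands in for LGRPE, and Temporal Twin Self-Attention is sketched as locally-grouped window attention (LTA) followed by attention against a strided sub-sample of the sequence (GSTA). Module names, tensor shapes, and hyperparameters such as the window length and sub-sampling stride are assumptions for illustration.

```python
import torch
import torch.nn as nn


class GraphSelfAttention(nn.Module):
    """Spatial self-attention over skeleton joints; a learnable bias indexed by
    graph hop distance stands in for the paper's LGRPE (assumption)."""

    def __init__(self, dim, hop_dist, num_heads=4, max_hops=4):
        super().__init__()
        self.num_heads = num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One learnable scalar per (head, hop-distance bucket).
        self.rel_bias = nn.Parameter(torch.zeros(num_heads, max_hops + 1))
        self.register_buffer("hop_dist", hop_dist.clamp(max=max_hops))  # (V, V)

    def forward(self, x):                                    # x: (B*T, V, C)
        B, V, C = x.shape
        h = self.num_heads
        qkv = self.qkv(x).view(B, V, 3, h, C // h).permute(2, 0, 3, 1, 4)
        q, k, v = qkv[0], qkv[1], qkv[2]                     # each (B, h, V, d)
        attn = (q @ k.transpose(-2, -1)) * (C // h) ** -0.5  # (B, h, V, V)
        attn = attn + self.rel_bias[:, self.hop_dist]        # graph-aware bias
        out = (attn.softmax(-1) @ v).transpose(1, 2).reshape(B, V, C)
        return self.proj(out)


class TemporalTwinSelfAttention(nn.Module):
    """Locally-grouped attention inside non-overlapping temporal windows (LTA),
    then attention against a strided sub-sample of the sequence (GSTA)."""

    def __init__(self, dim, window=8, stride=4, num_heads=4):
        super().__init__()
        self.window, self.stride = window, stride
        self.local = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.glob = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                                    # x: (B*V, T, C)
        B, T, C = x.shape
        w = self.window                                      # assumes T % w == 0
        # LTA: attention restricted to frames inside the same window.
        xl = x.reshape(B * (T // w), w, C)
        xl, _ = self.local(xl, xl, xl)
        x = x + xl.reshape(B, T, C)
        # GSTA: every frame attends to a strided sub-sample of all frames.
        keys = x[:, :: self.stride]
        xg, _ = self.glob(x, keys, keys)
        return x + xg


if __name__ == "__main__":
    B, T, V, C = 2, 16, 27, 64                               # toy sizes
    hops = torch.randint(0, 5, (V, V))                       # placeholder hop-distance matrix
    pose = torch.randn(B, T, V, C)
    spatial = GraphSelfAttention(C, hops)
    temporal = TemporalTwinSelfAttention(C)
    s = spatial(pose.reshape(B * T, V, C)).reshape(B, T, V, C)
    t = temporal(s.permute(0, 2, 1, 3).reshape(B * V, T, C))
    print(t.shape)                                           # torch.Size([54, 16, 64])
```

In this sketch, spatial attention runs per frame over the V joints and temporal attention runs per joint over the T frames, mirroring the decomposition of pose sequences into spatial graphs and temporal windows described above.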
Related papers
- A Spatio-Temporal Representation Learning as an Alternative to Traditional Glosses in Sign Language Translation and Production [9.065171626657818]
This paper addresses the challenges associated with the use of glosses in Sign Language Translation (SLT) and Sign Language Production (SLP).
We introduce Universal Gloss-level Representation (UniGloR), a framework designed to capture the spatio-temporal features inherent in sign language.
Our experiments in a keypoint-based setting demonstrate that UniGloR either outperforms or matches the performance of previous SLT and SLP methods.
arXiv Detail & Related papers (2024-07-03T07:12:36Z) - Part-aware Unified Representation of Language and Skeleton for Zero-shot Action Recognition [57.97930719585095]
We introduce Part-aware Unified Representation between Language and Skeleton (PURLS) to explore visual-semantic alignment at both local and global scales.
Our approach is evaluated on various skeleton/language backbones and three large-scale datasets.
The results showcase the universality and superior performance of PURLS, surpassing prior skeleton-based solutions and standard baselines from other domains.
arXiv Detail & Related papers (2024-06-19T08:22:32Z) - Enhancing Brazilian Sign Language Recognition through Skeleton Image Representation [2.6311088262657907]
This work proposes an Isolated Sign Language Recognition (ISLR) approach where body, hands, and facial landmarks are extracted throughout time and encoded as 2-D images.
We show that our method surpasses the state-of-the-art in terms of performance metrics on two widely recognized datasets in Brazilian Sign Language (LIBRAS).
In addition to being more accurate, our method is more time-efficient and easier to train due to its reliance on a simpler network architecture and solely RGB data as input.
arXiv Detail & Related papers (2024-04-29T23:21:17Z) - Expedited Training of Visual Conditioned Language Generation via
Redundancy Reduction [61.16125290912494]
$\text{EVL}_\text{Gen}$ is a framework designed for the pre-training of visually conditioned language generation models.
We show that our approach accelerates the training of vision-language models by a factor of 5 without a noticeable impact on overall performance.
arXiv Detail & Related papers (2023-10-05T03:40:06Z) - Unified Language-Vision Pretraining in LLM with Dynamic Discrete Visual Tokenization [52.935150075484074]
We introduce a well-designed visual tokenizer to translate the non-linguistic image into a sequence of discrete tokens like a foreign language.
The resulting visual tokens encompass high-level semantics worthy of a word and also support dynamic sequence length varying from the image.
This unification empowers LaVIT to serve as an impressive generalist interface to understand and generate multi-modal content simultaneously.
arXiv Detail & Related papers (2023-09-09T03:01:38Z) - Improving Continuous Sign Language Recognition with Cross-Lingual Signs [29.077175863743484]
We study the feasibility of utilizing multilingual sign language corpora to facilitate continuous sign language recognition.
We first build two sign language dictionaries containing isolated signs that appear in two datasets.
Then we identify the sign-to-sign mappings between two sign languages via a well-optimized isolated sign language recognition model.
arXiv Detail & Related papers (2023-08-21T15:58:47Z) - Learning Long-Term Spatial-Temporal Graphs for Active Speaker Detection [21.512786675773675]
Active speaker detection in videos with multiple speakers is a challenging task.
We present SPELL, a novel spatial-temporal graph learning framework.
SPELL is able to reason over long temporal contexts for all nodes without relying on computationally expensive fully connected graph neural networks.
arXiv Detail & Related papers (2022-07-15T23:43:17Z) - Signing at Scale: Learning to Co-Articulate Signs for Large-Scale
Photo-Realistic Sign Language Production [43.45785951443149]
Sign languages are visual languages, with vocabularies as rich as their spoken language counterparts.
Current deep-learning based Sign Language Production (SLP) models produce under-articulated skeleton pose sequences.
We tackle large-scale SLP by learning to co-articulate between dictionary signs.
We also propose SignGAN, a pose-conditioned human synthesis model that produces photo-realistic sign language videos.
arXiv Detail & Related papers (2022-03-29T08:51:38Z) - Sign Language Recognition via Skeleton-Aware Multi-Model Ensemble [71.97020373520922]
Sign language is commonly used by deaf or mute people to communicate.
We propose a novel Multi-modal Framework with a Global Ensemble Model (GEM) for isolated Sign Language Recognition (SLR).
Our proposed SAM-SLR-v2 framework is exceedingly effective and achieves state-of-the-art performance with significant margins.
arXiv Detail & Related papers (2021-10-12T16:57:18Z) - Skeleton Based Sign Language Recognition Using Whole-body Keypoints [71.97020373520922]
Sign language is used by deaf or speech impaired people to communicate.
Skeleton-based recognition is becoming popular because it can be further ensembled with RGB-D based methods to achieve state-of-the-art performance.
Inspired by the recent development of whole-body pose estimation [Jin et al., 2020], we propose recognizing sign language based on whole-body keypoints and features.
arXiv Detail & Related papers (2021-03-16T03:38:17Z) - Pose-based Sign Language Recognition using GCN and BERT [0.0]
Word-level sign language recognition (WSLR) is the first important step towards understanding and interpreting sign language.
Recognizing signs from videos is a challenging task, as the meaning of a word depends on a combination of subtle body motions, hand configurations, and other movements.
Recent pose-based architectures for WSLR either model both the spatial and temporal dependencies among the poses in different frames simultaneously, or only model the temporal information without fully utilizing the spatial information.
We tackle the problem of WSLR using a novel pose-based approach, which captures spatial and temporal information separately and performs late fusion.
arXiv Detail & Related papers (2020-12-01T19:10:50Z) - Vokenization: Improving Language Understanding with Contextualized,
Visual-Grounded Supervision [110.66085917826648]
We develop a technique that extrapolates multimodal alignments to language-only data by contextually mapping language tokens to their related images.
"vokenization" is trained on relatively small image captioning datasets and we then apply it to generate vokens for large language corpora.
Trained with these contextually generated vokens, our visually-supervised language models show consistent improvements over self-supervised alternatives on multiple pure-language tasks.
arXiv Detail & Related papers (2020-10-14T02:11:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.