Better Sign Language Translation with STMC-Transformer
- URL: http://arxiv.org/abs/2004.00588v2
- Date: Tue, 3 Nov 2020 00:59:54 GMT
- Title: Better Sign Language Translation with STMC-Transformer
- Authors: Kayo Yin and Jesse Read
- Abstract summary: Sign Language Translation first uses a Sign Language Recognition system to extract sign language glosses from videos.
A translation system then generates spoken language translations from the sign language glosses.
This paper introduces the STMC-Transformer, which improves on the current state of the art by over 5 and 7 BLEU on gloss-to-text and video-to-text translation, respectively.
- Score: 9.835743237370218
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sign Language Translation (SLT) first uses a Sign Language Recognition (SLR)
system to extract sign language glosses from videos. Then, a translation system
generates spoken language translations from the sign language glosses. This
paper focuses on the translation system and introduces the STMC-Transformer
which improves on the current state-of-the-art by over 5 and 7 BLEU
respectively on gloss-to-text and video-to-text translation of the
PHOENIX-Weather 2014T dataset. On the ASLG-PC12 corpus, we report an increase
of over 16 BLEU.
We also demonstrate a problem with current methods that rely on gloss
supervision. The video-to-text translation of our STMC-Transformer outperforms
translation of ground truth (GT) glosses. This contradicts previous claims that GT gloss
translation acts as an upper bound for SLT performance and reveals that glosses
are an inefficient representation of sign language. For future SLT research, we
therefore suggest an end-to-end training of the recognition and translation
models, or using a different sign language annotation scheme.
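To make the two-stage setup concrete, the sketch below wires a vanilla Transformer encoder-decoder for the gloss-to-text stage in PyTorch. It is a minimal illustration, not the authors' STMC-Transformer code; the vocabulary sizes, layer counts, and the random toy batch are placeholder assumptions.

    # Minimal gloss-to-text Transformer sketch in PyTorch (illustration only;
    # vocabulary sizes, layer counts, and the toy batch are placeholders).
    import torch
    import torch.nn as nn

    GLOSS_VOCAB, TEXT_VOCAB, D_MODEL = 1100, 3000, 512

    class GlossToText(nn.Module):
        def __init__(self):
            super().__init__()
            self.src_emb = nn.Embedding(GLOSS_VOCAB, D_MODEL)
            self.tgt_emb = nn.Embedding(TEXT_VOCAB, D_MODEL)
            self.transformer = nn.Transformer(
                d_model=D_MODEL, nhead=8, num_encoder_layers=2,
                num_decoder_layers=2, batch_first=True)
            self.out = nn.Linear(D_MODEL, TEXT_VOCAB)

        def forward(self, gloss_ids, text_ids):
            # Causal mask: each target position attends only to its past.
            tgt_mask = self.transformer.generate_square_subsequent_mask(
                text_ids.size(1))
            h = self.transformer(self.src_emb(gloss_ids),
                                 self.tgt_emb(text_ids), tgt_mask=tgt_mask)
            return self.out(h)  # (batch, tgt_len, TEXT_VOCAB) logits

    model = GlossToText()
    gloss = torch.randint(0, GLOSS_VOCAB, (1, 6))  # recognized gloss IDs
    text = torch.randint(0, TEXT_VOCAB, (1, 8))    # shifted target word IDs
    print(model(gloss, text).shape)                # torch.Size([1, 8, 3000])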
Related papers
- Gloss2Text: Sign Language Gloss translation using LLMs and Semantically Aware Label Smoothing [21.183453511034767]
We propose several advances by leveraging pre-trained large language models (LLMs), data augmentation, and a novel label-smoothing loss function.
Our approach surpasses state-of-the-art performance in Gloss2Text translation.
arXiv Detail & Related papers (2024-07-01T15:46:45Z)
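One plausible reading of the "semantically aware" label smoothing above is to spread the smoothing mass according to label-embedding similarity rather than uniformly. The sketch below implements that generic idea; the random embedding table, epsilon, and temperature are illustrative assumptions, not the paper's exact loss.

    # Hypothetical "semantically aware" label smoothing: spread the epsilon
    # mass in proportion to label-embedding similarity instead of uniformly.
    # The embedding table and temperature below are illustrative assumptions.
    import torch
    import torch.nn.functional as F

    def semantic_label_smoothing_loss(logits, targets, label_emb, eps=0.1, tau=0.1):
        emb = F.normalize(label_emb, dim=-1)
        sim = emb @ emb.T                               # label-label similarity
        soft = F.softmax(sim[targets] / tau, dim=-1)    # (batch, vocab)
        hard = F.one_hot(targets, label_emb.size(0)).float()
        q = (1 - eps) * hard + eps * soft               # smoothed target dist.
        return -(q * F.log_softmax(logits, dim=-1)).sum(-1).mean()

    vocab, dim = 50, 16
    logits = torch.randn(4, vocab)
    targets = torch.randint(0, vocab, (4,))
    label_emb = torch.randn(vocab, dim)  # e.g. word vectors for each label
    print(semantic_label_smoothing_loss(logits, targets, label_emb))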
- VK-G2T: Vision and Context Knowledge enhanced Gloss2Text [60.57628465740138]
Existing sign language translation methods follow a two-stage pipeline: first converting the sign language video to a gloss sequence (i.e., Sign2Gloss) and then translating the generated gloss sequence into a spoken language sentence (i.e., Gloss2Text).
We propose a vision and context knowledge enhanced Gloss2Text model, named VK-G2T, which leverages the visual content of the sign language video to learn the properties of the target sentence and exploit the context knowledge to facilitate the adaptive translation of gloss words.
arXiv Detail & Related papers (2023-12-15T21:09:34Z)
- Gloss-free Sign Language Translation: Improving from Visual-Language Pretraining [56.26550923909137]
Gloss-Free Sign Language Translation (SLT) is a challenging task due to its cross-domain nature.
We propose a novel Gloss-Free SLT based on Visual-Language Pretraining (GFSLT-VLP).
Our approach involves two stages: (i) integrating Contrastive Language-Image Pre-training with masked self-supervised learning to create pre-tasks that bridge the semantic gap between visual and textual representations and restore masked sentences, and (ii) constructing an end-to-end architecture with an encoder-decoder-like structure that inherits the parameters of the pre-trained Visual Encoder and Text Decoder from the first stage.
arXiv Detail & Related papers (2023-07-27T10:59:18Z)
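The pretraining stage above builds on Contrastive Language-Image Pre-training; the symmetric contrastive loss such methods commonly use is sketched below. The encoders are replaced by random features, and the batch size, dimension, and temperature are placeholders, not values from the paper.

    # Generic CLIP-style symmetric contrastive loss (a sketch of the common
    # objective; not the paper's exact formulation or hyperparameters).
    import torch
    import torch.nn.functional as F

    def contrastive_loss(video_feats, text_feats, temperature=0.07):
        v = F.normalize(video_feats, dim=-1)
        t = F.normalize(text_feats, dim=-1)
        logits = v @ t.T / temperature       # (batch, batch) similarities
        labels = torch.arange(v.size(0))     # matching pairs on the diagonal
        return (F.cross_entropy(logits, labels) +
                F.cross_entropy(logits.T, labels)) / 2

    video_feats = torch.randn(8, 256)  # stand-ins for encoded sign video clips
    text_feats = torch.randn(8, 256)   # stand-ins for encoded sentences
    print(contrastive_loss(video_feats, text_feats))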
- Changing the Representation: Examining Language Representation for Neural Sign Language Production [43.45785951443149]
We apply Natural Language Processing techniques to the first step of the Neural Sign Language Production pipeline.
We use language models such as BERT and Word2Vec to create better sentence-level embeddings.
We introduce Text to HamNoSys (T2H) translation, and show the advantages of using a phonetic representation for sign language translation.
arXiv Detail & Related papers (2022-09-16T12:45:29Z)
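A common recipe for such sentence-level embeddings is mean-pooling the last hidden states of a pre-trained BERT, as sketched below with the Hugging Face transformers library; the bert-base-uncased checkpoint and the pooling choice are assumptions for illustration, not necessarily what the paper used.

    # Mean-pooled BERT sentence embeddings (one common recipe; the paper may
    # pool differently). Requires: pip install torch transformers
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModel.from_pretrained("bert-base-uncased")

    def sentence_embedding(text):
        batch = tokenizer(text, return_tensors="pt")
        with torch.no_grad():
            hidden = model(**batch).last_hidden_state  # (1, seq_len, 768)
        mask = batch["attention_mask"].unsqueeze(-1)   # ignore padding
        return (hidden * mask).sum(1) / mask.sum(1)    # (1, 768)

    print(sentence_embedding("the weather will be sunny tomorrow").shape)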
- Explore More Guidance: A Task-aware Instruction Network for Sign Language Translation Enhanced with Data Augmentation [20.125265661134964]
Sign language recognition and translation systems first use a recognition module to generate glosses from sign language videos.
In this work, we propose a task-aware instruction network, namely TIN-SLT, for sign language translation.
arXiv Detail & Related papers (2022-04-12T17:09:44Z)
- SimulSLT: End-to-End Simultaneous Sign Language Translation [55.54237194555432]
Existing sign language translation methods need to read the entire video before starting to translate.
We propose SimulSLT, the first end-to-end simultaneous sign language translation model.
SimulSLT achieves BLEU scores that exceed those of the latest end-to-end non-simultaneous sign language translation model.
arXiv Detail & Related papers (2021-12-08T11:04:52Z)
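Simultaneous translation interleaves reading source input with writing target tokens; one standard scheduling policy is wait-k, sketched generically below. This illustrates only the read/write loop, not SimulSLT's actual architecture; the toy model and the value of k are placeholders.

    # Generic wait-k simultaneous decoding loop (read/write scheduling only;
    # not SimulSLT's actual model). The toy step function is a placeholder.
    def wait_k_decode(source_stream, translate_step, k=3):
        """Read k source chunks first, then alternate one write per read."""
        read, written = [], []
        for chunk in source_stream:
            read.append(chunk)                         # READ a source chunk
            if len(read) >= k:
                token = translate_step(read, written)  # WRITE a target token
                if token is not None:
                    written.append(token)
        while (token := translate_step(read, written)) is not None:
            written.append(token)                      # flush remaining tokens
        return written

    # Toy "model": emit one uppercase token per source chunk, then stop.
    def toy_step(read, written):
        return read[len(written)].upper() if len(written) < len(read) else None

    print(wait_k_decode(iter(["heute", "regen", "nord"]), toy_step, k=2))
    # ['HEUTE', 'REGEN', 'NORD']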
- Improving Sign Language Translation with Monolingual Data by Sign Back-Translation [105.83166521438463]
We propose a sign back-translation (SignBT) approach, which incorporates massive amounts of spoken language text into SLT training.
With a text-to-gloss translation model, we first back-translate the monolingual text to its gloss sequence.
Then, the paired sign sequence is generated by splicing pieces from an estimated gloss-to-sign bank at the feature level.
arXiv Detail & Related papers (2021-05-26T08:49:30Z)
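The SignBT recipe has two moves: back-translate monolingual text to a gloss sequence, then splice a pseudo sign-feature sequence from a gloss-to-feature bank. The sketch below shows that data flow with stub components; the text-to-gloss rule and the tiny feature bank are illustrative placeholders, not the paper's trained models.

    # Data flow of sign back-translation (stub components only; the
    # text-to-gloss model and feature bank are illustrative placeholders).
    import random

    def text_to_gloss(sentence):
        # Stand-in for a trained text-to-gloss translation model.
        return [w.upper() for w in sentence.split()]

    def splice_features(glosses, feature_bank):
        # Concatenate, per gloss, one feature clip sampled from the bank.
        feats = []
        for g in glosses:
            feats.extend(random.choice(feature_bank[g]))
        return feats

    feature_bank = {"HEUTE": [[0.1, 0.2]], "GIBT": [[0.0]],
                    "REGEN": [[0.3], [0.4, 0.5]]}
    text = "heute gibt regen"                 # monolingual spoken-language text
    pseudo_signs = splice_features(text_to_gloss(text), feature_bank)
    print(pseudo_signs, "->", text)           # new synthetic (sign, text) pair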
- Data Augmentation for Sign Language Gloss Translation [115.13684506803529]
Sign language translation (SLT) is often decomposed into video-to-gloss recognition and gloss-to-text translation.
We focus here on gloss-to-text translation, which we treat as a low-resource neural machine translation (NMT) problem.
By generating pseudo-glosses from monolingual spoken language text and pre-training on the synthetic data thus obtained, we improve translation from American Sign Language (ASL) to English and German Sign Language (DGS) to German by up to 3.14 and 2.20 BLEU, respectively.
arXiv Detail & Related papers (2021-05-16T16:37:36Z)
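One simple way to synthesize the pseudo-gloss training pairs mentioned above is a rule-based rewrite of spoken text (drop function words, uppercase the rest); the sketch below illustrates that spirit, with a stopword list and rules that are assumptions rather than the paper's exact pipeline.

    # Rule-based pseudo-gloss generation from spoken-language text
    # (illustrative rules only; the paper's pipeline may differ).
    STOPWORDS = {"the", "a", "an", "is", "are", "will", "be", "to", "of"}

    def pseudo_gloss(sentence):
        tokens = sentence.lower().replace(".", "").split()
        # Drop function words and uppercase the rest, mimicking gloss style.
        return [t.upper() for t in tokens if t not in STOPWORDS]

    print(pseudo_gloss("The weather will be sunny in the north."))
    # ['WEATHER', 'SUNNY', 'IN', 'NORTH']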
- Sign Language Transformers: Joint End-to-end Sign Language Recognition and Translation [59.38247587308604]
We introduce a novel transformer based architecture that jointly learns Continuous Sign Language Recognition and Translation.
We evaluate the recognition and translation performances of our approaches on the challenging RWTH-PHOENIX-Weather-2014T dataset.
Our translation networks outperform both sign-video-to-spoken-language and gloss-to-spoken-language translation models.
arXiv Detail & Related papers (2020-03-30T21:35:09Z)
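A common way to realize such joint training is to combine a CTC loss over encoder outputs for gloss recognition with a cross-entropy loss on the decoder for translation. The sketch below shows that combined objective; the tensor shapes, blank index, and equal task weights are placeholder assumptions, not the paper's settings.

    # Joint recognition + translation objective: CTC over encoder frames for
    # glosses plus cross-entropy over decoder steps for spoken words.
    # Shapes and loss weights are placeholders, not the paper's settings.
    import torch
    import torch.nn.functional as F

    B, T, U, GLOSSES, WORDS = 2, 40, 9, 1100, 3000

    enc_logits = torch.randn(T, B, GLOSSES).log_softmax(-1)  # frame-level glosses
    dec_logits = torch.randn(B, U, WORDS)                    # word-level outputs
    gloss_tgt = torch.randint(1, GLOSSES, (B, 5))            # labels (0 = blank)
    word_tgt = torch.randint(0, WORDS, (B, U))

    ctc = F.ctc_loss(enc_logits, gloss_tgt,
                     input_lengths=torch.full((B,), T, dtype=torch.long),
                     target_lengths=torch.full((B,), 5, dtype=torch.long),
                     blank=0)
    xent = F.cross_entropy(dec_logits.reshape(-1, WORDS), word_tgt.reshape(-1))
    loss = 1.0 * ctc + 1.0 * xent   # weighted sum of the two task losses
    print(float(loss))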