Back Translation for Speech-to-text Translation Without Transcripts
- URL: http://arxiv.org/abs/2305.08709v1
- Date: Mon, 15 May 2023 15:12:40 GMT
- Title: Back Translation for Speech-to-text Translation Without Transcripts
- Authors: Qingkai Fang, Yang Feng
- Abstract summary: We develop a back translation algorithm for ST (BT4ST) to synthesize pseudo ST data from monolingual target data.
To ease the challenges posed by short-to-long generation and one-to-many mapping, we introduce self-supervised discrete units.
With our synthetic ST data, we achieve an average boost of 2.3 BLEU on MuST-C En-De, En-Fr, and En-Es datasets.
- Score: 11.13240570688547
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The success of end-to-end speech-to-text translation (ST) is often achieved
by utilizing source transcripts, e.g., by pre-training with automatic speech
recognition (ASR) and machine translation (MT) tasks, or by introducing
additional ASR and MT data. Unfortunately, transcripts are not always
available, since numerous unwritten languages exist worldwide. In this paper, we
aim to utilize large amounts of target-side monolingual data to enhance ST
without transcripts. Motivated by the remarkable success of back translation in
MT, we develop a back translation algorithm for ST (BT4ST) to synthesize pseudo
ST data from monolingual target data. To ease the challenges posed by
short-to-long generation and one-to-many mapping, we introduce self-supervised
discrete units and achieve back translation by cascading a target-to-unit model
and a unit-to-speech model. With our synthetic ST data, we achieve an average
boost of 2.3 BLEU on MuST-C En-De, En-Fr, and En-Es datasets. More experiments
show that our method is especially effective in low-resource scenarios.
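The cascade described above can be sketched in a few lines. The following Python is a minimal illustration, assuming hypothetical `target_to_unit` and `unit_to_speech` callables as stand-ins for the paper's target-to-unit and unit-to-speech models; it is not the authors' implementation.

```python
# Minimal sketch of the BT4ST data synthesis loop. `target_to_unit` and
# `unit_to_speech` are assumed wrappers: the former maps target-language
# text to self-supervised discrete units of source speech (e.g.,
# HuBERT-style cluster IDs), the latter vocodes units into a waveform.
def synthesize_pseudo_st_data(monolingual_targets, target_to_unit, unit_to_speech):
    """Turn target-language sentences into pseudo (source speech, target text) ST pairs."""
    pseudo_pairs = []
    for text in monolingual_targets:
        units = target_to_unit(text)    # discrete units ease short-to-long, one-to-many generation
        speech = unit_to_speech(units)  # synthesize pseudo source speech from the units
        pseudo_pairs.append((speech, text))
    return pseudo_pairs
```

The resulting pseudo pairs would then be mixed with real ST data to train the speech-to-text model, mirroring back translation in MT.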
Related papers
- Pushing the Limits of Zero-shot End-to-End Speech Translation [15.725310520335785]
Data scarcity and the modality gap between speech and text are two major obstacles for end-to-end Speech Translation (ST) systems.
We introduce ZeroSwot, a method for zero-shot ST that bridges the modality gap without any paired ST data.
Our experiments show that we can effectively close the modality gap without ST data, while our results on MuST-C and CoVoST demonstrate our method's superiority.
arXiv Detail & Related papers (2024-02-16T03:06:37Z)
- End-to-End Speech-to-Text Translation: A Survey [0.0]
Speech-to-text translation is the task of converting speech signals in one language into text in another language.
Automatic Speech Recognition (ASR) and Machine Translation (MT) models play crucial roles in traditional ST.
arXiv Detail & Related papers (2023-12-02T07:40:32Z)
- DUB: Discrete Unit Back-translation for Speech Translation [32.74997208667928]
We propose Discrete Unit Back-translation (DUB) to answer the question: is it better to represent speech with discrete units than with continuous features in direct ST?
With DUB, the back-translation technique can be successfully applied to direct ST, yielding an average boost of 5.5 BLEU on MuST-C En-De/Fr/Es.
In the low-resource language scenario, our method achieves comparable performance to existing methods that rely on large-scale external data.
arXiv Detail & Related papers (2023-05-19T03:48:16Z)
- Speech-to-Speech Translation For A Real-world Unwritten Language [62.414304258701804]
We study speech-to-speech translation (S2ST), which translates speech from one language into another.
We present an end-to-end solution, from training data collection and modeling choices to benchmark dataset release.
arXiv Detail & Related papers (2022-11-11T20:21:38Z)
- Simple and Effective Unsupervised Speech Translation [68.25022245914363]
We study a simple and effective approach to building speech translation systems without labeled data.
We present an unsupervised domain adaptation technique for pre-trained speech models.
Experiments show that unsupervised speech-to-text translation outperforms the previous unsupervised state of the art.
arXiv Detail & Related papers (2022-10-18T22:26:13Z)
- Discrete Cross-Modal Alignment Enables Zero-Shot Speech Translation [71.35243644890537]
End-to-end Speech Translation (ST) aims at translating the source language speech into target language text without generating the intermediate transcriptions.
Existing zero-shot methods fail to align the two modalities of speech and text into a shared semantic space.
We propose a novel Discrete Cross-Modal Alignment (DCMA) method that employs a shared discrete vocabulary space to accommodate and match both modalities of speech and text.
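As a rough picture of the shared-vocabulary idea (the codebook size, distance metric, and encoder interfaces here are assumptions, not DCMA's exact design):

```python
import torch

# Hedged sketch: both modalities are snapped to the same codebook, so
# speech and text land in one shared discrete vocabulary. The
# nearest-neighbor quantizer and shapes are illustrative assumptions.
def quantize(hidden: torch.Tensor, codebook: torch.Tensor):
    """Map encoder states (seq_len, dim) to nearest entries of a shared codebook (vocab, dim)."""
    ids = torch.cdist(hidden, codebook).argmin(dim=-1)  # one discrete id per position
    return codebook[ids], ids

# Training would push quantize(speech_encoder(audio)) and
# quantize(text_encoder(tokens)) toward matching unit sequences, so a
# decoder trained on text-side units can consume speech at test time.
```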
arXiv Detail & Related papers (2022-10-18T03:06:47Z)
- Enhanced Direct Speech-to-Speech Translation Using Self-supervised Pre-training and Data Augmentation [76.13334392868208]
Direct speech-to-speech translation (S2ST) models suffer from data scarcity issues.
In this work, we explore self-supervised pre-training with unlabeled speech data and data augmentation to tackle this issue.
arXiv Detail & Related papers (2022-04-06T17:59:22Z)
- Tackling data scarcity in speech translation using zero-shot multilingual machine translation techniques [12.968557512440759]
Several techniques have been proposed for zero-shot translation.
We investigate whether these ideas can be applied to speech translation, by building ST models trained on speech transcription and text translation data.
These techniques were successfully applied to few-shot ST with limited ST data, yielding improvements of up to +12.9 BLEU points over direct end-to-end ST and +3.1 BLEU points over ST models fine-tuned from an ASR model.
arXiv Detail & Related papers (2022-01-26T20:20:59Z)
- Zero-shot Speech Translation [0.0]
Speech Translation (ST) is the task of translating speech in one language into text in another language.
End-to-end approaches use a single system to avoid error propagation, yet are difficult to employ due to data scarcity.
We explore zero-shot translation, which enables translating a pair of languages that is unseen during training.
arXiv Detail & Related papers (2021-07-13T12:00:44Z)
- Consecutive Decoding for Speech-to-text Translation [51.155661276936044]
COnSecutive Transcription and Translation (COSTT) is an integral approach for speech-to-text translation.
The key idea is to generate the source transcript and the target translation with a single decoder.
Our method is verified on three mainstream datasets.
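The single-decoder idea can be pictured as follows; `step_fn`, the special tokens, and the length cap are illustrative assumptions rather than COSTT's actual interface:

```python
# Hedged sketch of consecutive decoding: one autoregressive decoder emits
# the source transcript first, then the target translation, split by a
# separator token. `step_fn` (next-token prediction conditioned on the
# speech encoding and the history) is an assumed stand-in.
def consecutive_decode(step_fn, bos="<s>", sep="<sep>", eos="</s>", max_len=512):
    tokens = [bos]
    while tokens[-1] != eos and len(tokens) < max_len:
        tokens.append(step_fn(tokens))
    cut = tokens.index(sep) if sep in tokens else len(tokens)
    transcript = tokens[1:cut]        # source-language transcript
    translation = tokens[cut + 1:-1]  # target-language translation
    return transcript, translation
```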
arXiv Detail & Related papers (2020-09-21T10:10:45Z)
- Improving Cross-Lingual Transfer Learning for End-to-End Speech Recognition with Speech Translation [63.16500026845157]
We introduce speech-to-text translation as an auxiliary task to incorporate additional knowledge of the target language.
We show that training ST with human translations is not necessary.
Even with pseudo-labels from low-resource MT (200K examples), ST-enhanced transfer brings up to 8.9% WER reduction compared to direct transfer.
arXiv Detail & Related papers (2020-06-09T19:34:11Z)