MMSpeech: Multi-modal Multi-task Encoder-Decoder Pre-training for Speech
Recognition
- URL: http://arxiv.org/abs/2212.00500v1
- Date: Tue, 29 Nov 2022 13:16:09 GMT
- Title: MMSpeech: Multi-modal Multi-task Encoder-Decoder Pre-training for Speech
Recognition
- Authors: Xiaohuan Zhou, Jiaming Wang, Zeyu Cui, Shiliang Zhang, Zhijie Yan,
Jingren Zhou, Chang Zhou
- Abstract summary: We propose a novel multi-modal multi-task encoder-decoder pre-training framework (MMSpeech) for Mandarin automatic speech recognition (ASR).
We employ a multi-task learning framework including five self-supervised and supervised tasks with speech and text data.
Experiments on AISHELL-1 show that our proposed method achieves state-of-the-art performance, with a more than 40% relative improvement compared with other pre-training methods.
- Score: 75.12948999653338
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a novel multi-modal multi-task encoder-decoder
pre-training framework (MMSpeech) for Mandarin automatic speech recognition
(ASR), which employs both unlabeled speech and text data. The main difficulty
in speech-text joint pre-training comes from the significant difference between
speech and text modalities, especially for Mandarin speech and text. Unlike
English and other languages with an alphabetic writing system, Mandarin uses an
ideographic writing system where character and sound are not tightly mapped to
one another. Therefore, we propose to introduce the phoneme modality into
pre-training, which can help capture modality-invariant information between
Mandarin speech and text. Specifically, we employ a multi-task learning
framework including five self-supervised and supervised tasks with speech and
text data. For end-to-end pre-training, we introduce self-supervised
speech-to-pseudo-codes (S2C) and phoneme-to-text (P2T) tasks utilizing
unlabeled speech and text data, where speech-pseudo-code pairs and phoneme-text
pairs supplement the supervised speech-text pairs. To help the encoder learn
better speech representations, we introduce a self-supervised masked speech
prediction (MSP) task and a supervised phoneme prediction (PP) task, both of
which map speech into phonemes. In addition, we directly include the downstream
supervised speech-to-text (S2T) task in pre-training, which further improves
pre-training performance and yields better recognition results even without
fine-tuning. Experiments on
AISHELL-1 show that our proposed method achieves state-of-the-art performance,
with a more than 40% relative improvement compared with other pre-training
methods.
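Below is a minimal, illustrative PyTorch sketch of how the five pre-training objectives described above (S2C, P2T, MSP, PP, S2T) could be combined into one multi-task loss. The module layout, head names, vocabulary sizes, and equal loss weights are assumptions made for illustration; they are not taken from the paper or any released code, and the encoder/decoder are stubbed with single linear layers.

```python
# Illustrative sketch only: module names, sizes, and loss weights are assumptions,
# not the paper's implementation. Encoder/decoder internals are stubbed.
import torch
import torch.nn as nn

class MMSpeechSketch(nn.Module):
    """Toy encoder-decoder with one head per pre-training task."""

    def __init__(self, feat_dim=80, hidden=256, n_codes=500, n_phones=100, n_tokens=4000):
        super().__init__()
        self.speech_encoder = nn.Linear(feat_dim, hidden)  # stand-in for the speech encoder
        self.text_decoder = nn.Linear(hidden, n_tokens)    # stand-in for the text decoder
        self.code_head = nn.Linear(hidden, n_codes)        # S2C: speech -> pseudo-codes
        self.phone_head = nn.Linear(hidden, n_phones)      # MSP / PP: speech -> phonemes
        self.phone_embed = nn.Embedding(n_phones, hidden)  # P2T: phoneme input side

    def forward(self, speech, phonemes):
        h_speech = self.speech_encoder(speech)    # (B, T, hidden)
        h_phone = self.phone_embed(phonemes)      # (B, T, hidden)
        return {
            "s2c": self.code_head(h_speech),      # unlabeled speech -> pseudo-codes
            "msp": self.phone_head(h_speech),     # in practice applied only at masked frames
            "pp": self.phone_head(h_speech),      # supervised phoneme prediction
            "p2t": self.text_decoder(h_phone),    # unlabeled text, entered via phonemes
            "s2t": self.text_decoder(h_speech),   # supervised ASR task
        }

def multitask_loss(logits, targets, weights=None):
    """Weighted sum of the five task losses (equal weights are an assumption)."""
    weights = weights or {"s2c": 1.0, "p2t": 1.0, "msp": 1.0, "pp": 1.0, "s2t": 1.0}
    ce = nn.CrossEntropyLoss()
    total = 0.0
    for task, logit in logits.items():
        total = total + weights[task] * ce(logit.flatten(0, 1), targets[task].flatten())
    return total

# Usage with random tensors: a batch of 2 utterances, 50 frames each.
B, T = 2, 50
model = MMSpeechSketch()
logits = model(torch.randn(B, T, 80), torch.randint(0, 100, (B, T)))
targets = {
    "s2c": torch.randint(0, 500, (B, T)),
    "msp": torch.randint(0, 100, (B, T)),
    "pp": torch.randint(0, 100, (B, T)),
    "p2t": torch.randint(0, 4000, (B, T)),
    "s2t": torch.randint(0, 4000, (B, T)),
}
loss = multitask_loss(logits, targets)
loss.backward()
```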
Related papers
- Textless Unit-to-Unit training for Many-to-Many Multilingual Speech-to-Speech Translation [65.13824257448564]
This paper proposes a textless training method for many-to-many multilingual speech-to-speech translation.
By treating the speech units as pseudo-text, we can focus on the linguistic content of the speech.
We demonstrate that the proposed UTUT model can be effectively utilized not only for Speech-to-Speech Translation (S2ST) but also for multilingual Text-to-Speech Synthesis (T2S) and Text-to-Speech Translation (T2ST).
arXiv Detail & Related papers (2023-08-03T15:47:04Z) - token2vec: A Joint Self-Supervised Pre-training Framework Using Unpaired
Speech and Text [65.04385919645395]
token2vec is a novel joint pre-training framework for unpaired speech and text based on discrete representations of speech.
Experiments show that token2vec is significantly superior to various speech-only pre-training baselines, with up to 17.7% relative WER reduction.
arXiv Detail & Related papers (2022-10-30T06:38:19Z) - SpeechUT: Bridging Speech and Text with Hidden-Unit for Encoder-Decoder
Based Speech-Text Pre-training [106.34112664893622]
We propose a unified-modal speech-unit-text pre-training model, SpeechUT, to connect the representations of a speech encoder and a text decoder with a shared unit encoder.
Our proposed SpeechUT is fine-tuned and evaluated on automatic speech recognition (ASR) and speech translation (ST) tasks.
arXiv Detail & Related papers (2022-10-07T17:57:45Z) - SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data [100.46303484627045]
We propose a cross-modal Speech and Language Model (SpeechLM) to align speech and text pre-training with a pre-defined unified representation.
Specifically, we introduce two alternative discrete tokenizers to bridge the speech and text modalities.
We evaluate SpeechLM on various spoken language processing tasks including speech recognition, speech translation, and universal representation evaluation framework SUPERB.
arXiv Detail & Related papers (2022-09-30T09:12:10Z) - Unified Speech-Text Pre-training for Speech Translation and Recognition [113.31415771943162]
We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition.
The proposed method incorporates four self-supervised and supervised subtasks for cross modality learning.
It achieves between 1.7 and 2.3 BLEU improvement above the state of the art on the MuST-C speech translation dataset.
arXiv Detail & Related papers (2022-04-11T20:59:51Z)