token2vec: A Joint Self-Supervised Pre-training Framework Using Unpaired
Speech and Text
- URL: http://arxiv.org/abs/2210.16755v1
- Date: Sun, 30 Oct 2022 06:38:19 GMT
- Title: token2vec: A Joint Self-Supervised Pre-training Framework Using Unpaired
Speech and Text
- Authors: Xianghu Yue and Junyi Ao and Xiaoxue Gao and Haizhou Li
- Abstract summary: token2vec is a novel joint pre-training framework for unpaired speech and text based on discrete representations of speech.
Experiments show that token2vec is significantly superior to various speech-only pre-training baselines, with up to 17.7% relative WER reduction.
- Score: 65.04385919645395
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised pre-training has been successful in both text and speech
processing. Speech and text offer different but complementary information. The
question is whether we are able to perform speech-text joint pre-training on
unpaired speech and text. In this paper, we take the idea of self-supervised
pre-training one step further and propose token2vec, a novel joint pre-training
framework for unpaired speech and text based on discrete representations of
speech. First, because the two modalities differ in nature, with speech being
continuous while text is discrete, we discretize speech into a sequence of
discrete speech tokens to solve the modality mismatch problem. Second, to solve
the length mismatch problem, where the speech sequence is usually much longer
than the text sequence, we convert the words of the text into phoneme sequences
and randomly repeat each phoneme in those sequences. Finally, we feed the
discrete speech and text tokens into a modality-agnostic Transformer encoder and
pre-train it with token-level masked language modeling (tMLM). Experiments show
that token2vec is significantly superior to various speech-only pre-training
baselines, with up to a 17.7% relative WER reduction. The token2vec model is also
validated on a non-ASR task, i.e., spoken intent classification, and shows good
transferability.
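The sketch below illustrates the two input-preparation steps described in the abstract: stretching a phonemized text sequence by random repetition so its length resembles that of a discrete speech-token sequence, and applying token-level masking (tMLM) to a token sequence from either modality. All names, the masking rate, the repetition bound, and the toy lexicon are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch of token2vec-style input preparation (hypothetical names
# and hyperparameters; not the authors' released code).
import random

MAX_REPEAT = 3      # assumed upper bound on per-phoneme repetition
MASK_PROB = 0.15    # assumed BERT-style masking rate
MASK_ID = 0         # assumed id of the [MASK] token in the shared vocabulary


def phonemize(words, lexicon):
    """Map words to phoneme tokens using a pronunciation lexicon (assumed)."""
    phonemes = []
    for w in words:
        phonemes.extend(lexicon[w.lower()])
    return phonemes


def repeat_phonemes(phonemes, max_repeat=MAX_REPEAT):
    """Randomly repeat each phoneme so the text sequence length roughly
    matches the much longer discrete speech-token sequence."""
    stretched = []
    for p in phonemes:
        stretched.extend([p] * random.randint(1, max_repeat))
    return stretched


def apply_tmlm_mask(token_ids, mask_prob=MASK_PROB):
    """Token-level masked language modeling (tMLM): hide a random subset of
    tokens; the encoder is trained to predict the original ids at those positions."""
    inputs, targets = [], []
    for tok in token_ids:
        if random.random() < mask_prob:
            inputs.append(MASK_ID)
            targets.append(tok)    # predict the original token here
        else:
            inputs.append(tok)
            targets.append(-100)   # conventionally ignored by the loss
    return inputs, targets


# Text branch: words -> phonemes -> randomly repeated phonemes.
lexicon = {"speech": ["S", "P", "IY", "CH"], "text": ["T", "EH", "K", "S", "T"]}
print(repeat_phonemes(phonemize(["speech", "text"], lexicon)))

# Speech branch: discrete speech-token ids receive the same masking objective.
speech_tokens = [17, 17, 42, 42, 42, 8, 8, 31]   # toy discrete speech-token ids
print(apply_tmlm_mask(speech_tokens))
```

In this sketch both modalities end up as plain token sequences, which is why a single modality-agnostic encoder and one masking objective can serve both; the actual lexicon, shared vocabulary, and hyperparameters used in token2vec may differ.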
Related papers
- MMSpeech: Multi-modal Multi-task Encoder-Decoder Pre-training for Speech
Recognition [75.12948999653338]
We propose a novel multi-task encoder-decoder pre-training framework (MMSpeech) for Mandarin automatic speech recognition (ASR).
We employ a multi-task learning framework including five self-supervised and supervised tasks with speech and text data.
Experiments on AISHELL-1 show that our proposed method achieves state-of-the-art performance, with a more than 40% relative improvement compared with other pre-training methods.
arXiv Detail & Related papers (2022-11-29T13:16:09Z)
- Assessing Phrase Break of ESL speech with Pre-trained Language Models [6.635783609515407]
This work introduces an approach to assessing phrase break in ESL learners' speech with pre-trained language models (PLMs).
Unlike traditional methods, this approach converts speech to token sequences and then leverages the power of PLMs.
arXiv Detail & Related papers (2022-10-28T10:06:06Z)
- SpeechLM: Enhanced Speech Pre-Training with Unpaired Textual Data [100.46303484627045]
We propose a cross-modal Speech and Language Model (SpeechLM) to align speech and text pre-training with a pre-defined unified representation.
Specifically, we introduce two alternative discrete tokenizers to bridge the speech and text modalities.
We evaluate SpeechLM on various spoken language processing tasks including speech recognition, speech translation, and the universal representation evaluation framework SUPERB.
arXiv Detail & Related papers (2022-09-30T09:12:10Z)
- Unified Speech-Text Pre-training for Speech Translation and Recognition [113.31415771943162]
We describe a method to jointly pre-train speech and text in an encoder-decoder modeling framework for speech translation and recognition.
The proposed method incorporates four self-supervised and supervised subtasks for cross-modality learning.
It achieves a 1.7 to 2.3 BLEU improvement over the state of the art on the MuST-C speech translation dataset.
arXiv Detail & Related papers (2022-04-11T20:59:51Z)
- SLAM: A Unified Encoder for Speech and Language Modeling via Speech-Text Joint Pre-Training [33.02912456062474]
We build a single encoder with the BERT objective on unlabeled text together with the w2v-BERT objective on unlabeled speech.
We demonstrate that incorporating both speech and text data during pre-training can significantly improve downstream quality on CoVoST2 speech translation.
arXiv Detail & Related papers (2021-10-20T00:59:36Z)
- Zero-Shot Text-to-Speech for Text-Based Insertion in Audio Narration [62.75234183218897]
We propose a one-stage context-aware framework to generate natural and coherent target speech without any training data of the speaker.
We generate the mel-spectrogram of the edited speech with a transformer-based decoder.
It outperforms a recent zero-shot TTS engine by a large margin.
arXiv Detail & Related papers (2021-09-12T04:17:53Z)
- Direct speech-to-speech translation with discrete units [64.19830539866072]
We present a direct speech-to-speech translation (S2ST) model that translates speech from one language to speech in another language without relying on intermediate text generation.
We propose to predict the self-supervised discrete representations learned from an unlabeled speech corpus instead.
When target text transcripts are available, we design a multitask learning framework with joint speech and text training that enables the model to generate dual mode output (speech and text) simultaneously in the same inference pass.
arXiv Detail & Related papers (2021-07-12T17:40:43Z)
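Both token2vec and the discrete-unit work above rely on turning continuous speech into token sequences. A common recipe for this, sketched below under the assumption of k-means quantization over frame-level features from a self-supervised encoder, is to treat cluster ids as the discrete units; the encoders, feature dimensions, and cluster counts used by these papers are not stated here, and the numbers below are placeholders.

```python
# Minimal sketch of deriving discrete speech units by k-means clustering of
# self-supervised frame features. The feature dimensionality, cluster count,
# and random features below are placeholders, not any paper's exact recipe.
import numpy as np
from sklearn.cluster import KMeans


def learn_unit_codebook(frame_features, num_units=500):
    """Fit k-means on pooled frame features (num_frames x dim)."""
    return KMeans(n_clusters=num_units, n_init=10, random_state=0).fit(frame_features)


def speech_to_units(codebook, utterance_features):
    """Map an utterance's frame features (T x dim) to a sequence of unit ids."""
    return codebook.predict(utterance_features).tolist()


# Toy usage with random arrays standing in for encoder outputs.
rng = np.random.default_rng(0)
pooled = rng.standard_normal((2000, 768)).astype(np.float32)   # frames pooled over a corpus
codebook = learn_unit_codebook(pooled, num_units=50)
units = speech_to_units(codebook, rng.standard_normal((120, 768)).astype(np.float32))
print(units[:10])
```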