SpiRit-LM: Interleaved Spoken and Written Language Model
- URL: http://arxiv.org/abs/2402.05755v1
- Date: Thu, 8 Feb 2024 15:39:32 GMT
- Title: SpiRit-LM: Interleaved Spoken and Written Language Model
- Authors: Tu Anh Nguyen, Benjamin Muller, Bokai Yu, Marta R. Costa-jussa, Maha
Elbayad, Sravya Popuri, Paul-Ambroise Duquenne, Robin Algayres, Ruslan
Mavlyutov, Itai Gat, Gabriel Synnaeve, Juan Pino, Benoit Sagot, Emmanuel
Dupoux
- Abstract summary: SPIRIT-LM is a foundation multimodal language model that freely mixes text and speech.
The model is based on a pretrained text language model that we extend to the speech modality by continuously training it on text and speech units.
- Score: 45.44798658207754
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce SPIRIT-LM, a foundation multimodal language model that freely
mixes text and speech. Our model is based on a pretrained text language model
that we extend to the speech modality by continuously training it on text and
speech units. Speech and text sequences are concatenated as a single set of
tokens, and trained with a word-level interleaving method using a small
automatically-curated speech-text parallel corpus. SPIRIT-LM comes in two
versions: a BASE version that uses speech semantic units and an EXPRESSIVE
version that models expressivity using pitch and style units in addition to the
semantic units. For both versions, the text is encoded with subword BPE tokens.
The resulting model displays both the semantic abilities of text models and the
expressive abilities of speech models. Additionally, we demonstrate that
SPIRIT-LM is able to learn new tasks in a few-shot fashion across modalities
(i.e. ASR, TTS, Speech Classification).
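To make the word-level interleaving concrete, here is a minimal sketch of how aligned text and speech-unit spans might be mixed into a single token stream. The modality markers, [Hu...]-style unit tokens, and switching probability are illustrative assumptions, not SPIRIT-LM's documented inventory.
```python
import random

# Hypothetical modality markers and unit-token format; the actual
# SPIRIT-LM token inventory may differ.
TEXT_TOK, SPEECH_TOK = "[TEXT]", "[SPEECH]"

def interleave(words, unit_spans, p_switch=0.3, seed=0):
    """Mix word-aligned text and speech-unit sequences into one stream.

    words      -- text words, e.g. ["the", "cat", "sat"]
    unit_spans -- speech units aligned to each word, e.g. [[41, 7], [12], [99, 3]]
    Modality flips at word boundaries with probability p_switch, so the
    model sees the same content continued across modalities.
    """
    rng = random.Random(seed)
    tokens, in_speech = [TEXT_TOK], False
    for word, units in zip(words, unit_spans):
        if rng.random() < p_switch:                   # switch only at word boundaries
            in_speech = not in_speech
            tokens.append(SPEECH_TOK if in_speech else TEXT_TOK)
        if in_speech:
            tokens.extend(f"[Hu{u}]" for u in units)  # discrete speech-unit tokens
        else:
            tokens.append(word)                       # BPE tokenization happens later
    return tokens

print(interleave(["the", "cat", "sat"], [[41, 7], [12], [99, 3]]))
```
In the EXPRESSIVE version, pitch and style tokens would additionally accompany the semantic units.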
Related papers
- Leveraging Unit Language Guidance to Advance Speech Modeling in Textless Speech-to-Speech Translation [48.769137497536]
We propose the unit language to overcome the two modeling challenges.
The unit language can be considered a text-like representation format.
We implement multi-task learning to utilize the unit language in guiding the speech modeling process.
arXiv Detail & Related papers (2025-05-21T10:05:25Z)
- Toward Joint Language Modeling for Speech Units and Text [89.32163954508489]
We explore joint language modeling for speech units and text.
We introduce automatic metrics to evaluate how well the joint LM mixes speech and text.
Our results show that by mixing speech units and text with our proposed mixing techniques, the joint LM improves over a speech-only baseline on SLU tasks.
arXiv Detail & Related papers (2023-10-12T20:53:39Z)
- Textless Unit-to-Unit training for Many-to-Many Multilingual Speech-to-Speech Translation [65.13824257448564]
This paper proposes a textless training method for many-to-many multilingual speech-to-speech translation.
By treating the speech units as pseudo-text, we can focus on the linguistic content of the speech.
We demonstrate that the proposed UTUT model can be effectively utilized not only for Speech-to-Speech Translation (S2ST) but also for multilingual Text-to-Speech Synthesis (T2S) and Text-to-Speech Translation (T2ST).
arXiv Detail & Related papers (2023-08-03T15:47:04Z)
- AudioPaLM: A Large Language Model That Can Speak and Listen [79.44757696533709]
We introduce AudioPaLM, a large language model for speech understanding and generation.
AudioPaLM fuses text-based and speech-based language models.
It can process and generate text and speech with applications including speech recognition and speech-to-speech translation.
arXiv Detail & Related papers (2023-06-22T14:37:54Z)
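One way to picture the fusion described here is a single decoder-only model whose embedding table spans both text subwords and discrete audio tokens. The sketch below uses made-up sizes and a toy backbone; it is not AudioPaLM's actual architecture.
```python
import torch
import torch.nn as nn

TEXT_VOCAB, AUDIO_VOCAB, D_MODEL = 32_000, 1_024, 512  # illustrative sizes only

class MixedVocabDecoder(nn.Module):
    """Causal LM over a shared text+audio token space: ids in
    [0, TEXT_VOCAB) are text, the rest are audio tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TEXT_VOCAB + AUDIO_VOCAB, D_MODEL)
        layer = nn.TransformerEncoderLayer(D_MODEL, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(D_MODEL, TEXT_VOCAB + AUDIO_VOCAB)

    def forward(self, ids):
        mask = nn.Transformer.generate_square_subsequent_mask(ids.size(1))
        return self.lm_head(self.backbone(self.embed(ids), mask=mask))

logits = MixedVocabDecoder()(torch.randint(0, TEXT_VOCAB + AUDIO_VOCAB, (1, 16)))
```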
- WAVPROMPT: Towards Few-Shot Spoken Language Understanding with Frozen Language Models [57.557319372969495]
Large-scale auto-regressive language models pretrained on massive text have demonstrated their impressive ability to perform new natural language tasks.
Recent studies further show that such a few-shot learning ability can be extended to the text-image setting by training an encoder to encode the images into embeddings.
We propose a novel speech understanding framework, WavPrompt, where we finetune a wav2vec model to generate a sequence of audio embeddings understood by the language model.
arXiv Detail & Related papers (2022-03-29T19:08:55Z)
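A rough schematic of the WavPrompt setup, under assumed hidden sizes: speech-encoder features are projected into the frozen LM's embedding space and prepended to the text prompt as a soft prefix. Names and dimensions here are hypothetical.
```python
import torch
import torch.nn as nn

D_AUDIO, D_LM = 768, 1024  # assumed hidden sizes, not the paper's exact values

class AudioPrefix(nn.Module):
    """Project wav2vec-style features into a frozen LM's input space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(D_AUDIO, D_LM)

    def forward(self, audio_feats, text_embeds):
        # audio_feats: (batch, frames, D_AUDIO) from the finetuned speech encoder
        # text_embeds: (batch, tokens, D_LM) from the frozen LM's embedding table
        prefix = self.proj(audio_feats)
        # Prepend audio embeddings so the frozen LM reads them as a soft prompt;
        # only the speech encoder and this projection are trained.
        return torch.cat([prefix, text_embeds], dim=1)
```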
- Towards Language Modelling in the Speech Domain Using Sub-word Linguistic Units [56.52704348773307]
We propose a novel LSTM-based generative speech LM based on linguistic units including syllables and phonemes.
With a limited dataset, orders of magnitude smaller than that required by contemporary generative models, our model closely approximates babbling speech.
We show the effect of training with auxiliary text LMs, multitask learning objectives, and auxiliary articulatory features.
arXiv Detail & Related papers (2021-10-31T22:48:30Z)
- Text-Free Prosody-Aware Generative Spoken Language Modeling [46.19240899818964]
We present a prosody-aware generative spoken language model (pGSLM).
It is composed of a multi-stream transformer language model (MS-TLM) of speech, represented as discovered unit and prosodic feature streams, and an adapted HiFi-GAN model converting MS-TLM outputs to waveforms.
Experimental results show that the pGSLM can utilize prosody to improve both prosody and content modeling, and also generate natural, meaningful, and coherent speech given a spoken prompt.
arXiv Detail & Related papers (2021-09-07T18:03:21Z)
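The multi-stream input can be pictured as summing per-stream embeddings before the transformer, in the spirit of the MS-TLM; the stream choices and sizes below are assumptions for illustration, not pGSLM's exact design.
```python
import torch.nn as nn

N_UNITS, D = 100, 256  # illustrative unit inventory and model dimension

class MultiStreamInput(nn.Module):
    """Fuse discovered-unit and prosodic-feature streams into one input
    sequence, sketching the spirit of pGSLM's MS-TLM."""
    def __init__(self):
        super().__init__()
        self.unit_embed = nn.Embedding(N_UNITS, D)  # discrete speech units
        self.dur_proj = nn.Linear(1, D)             # per-unit duration
        self.f0_proj = nn.Linear(1, D)              # per-unit pitch (log F0)

    def forward(self, units, duration, f0):
        # units: (batch, seq) long; duration, f0: (batch, seq) float
        return (self.unit_embed(units)
                + self.dur_proj(duration.unsqueeze(-1))
                + self.f0_proj(f0.unsqueeze(-1)))
```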