Augmentation Invariant Discrete Representation for Generative Spoken Language Modeling
- URL: http://arxiv.org/abs/2209.15483v2
- Date: Mon, 29 May 2023 10:50:29 GMT
- Title: Augmentation Invariant Discrete Representation for Generative Spoken Language Modeling
- Authors: Itai Gat, Felix Kreuk, Tu Anh Nguyen, Ann Lee, Jade Copet, Gabriel Synnaeve, Emmanuel Dupoux, Yossi Adi
- Abstract summary: We propose an effective and efficient method to learn robust discrete speech representation for generative spoken language modeling.
The proposed approach is based on applying a set of signal transformations to the speech signal and optimizing the model using an iterative pseudo-labeling scheme.
We additionally evaluate our method on the speech-to-speech translation task, considering Spanish-English and French-English translations, and show the proposed approach outperforms the evaluated baselines.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative Spoken Language Modeling research focuses on optimizing speech
Language Models (LMs) using raw audio recordings without accessing any textual
supervision. Such speech LMs usually operate over discrete units obtained from
quantizing internal representations of self-supervised models. Although such
units show impressive modeling results, their robustness capabilities have not
been extensively investigated. This work focuses on improving the robustness of
discrete input representations for generative spoken language modeling. First,
we formally define how to measure the robustness of such representations to
various signal variations that do not alter the spoken information (e.g.,
time-stretch). Next, we empirically demonstrate how current state-of-the-art
representation models lack robustness to such variations. To overcome this, we
propose an effective and efficient method to learn robust discrete speech
representation for generative spoken language modeling. The proposed approach
is based on applying a set of signal transformations to the speech signal and
optimizing the model using an iterative pseudo-labeling scheme. Our method
significantly improves over the evaluated baselines when considering encoding
and modeling metrics. We additionally evaluate our method on the
speech-to-speech translation task, considering Spanish-English and
French-English translations, and show the proposed approach outperforms the
evaluated baselines.
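The robustness measure sketched in the abstract can be made concrete as a unit edit distance: quantize both the clean and the transformed signal, collapse repeated units, and compute a normalized Levenshtein distance between the two unit sequences. The following is a minimal pure-Python sketch; the function names and the deduplication step are illustrative assumptions, not the paper's exact formulation:

```python
def dedup(units):
    """Collapse consecutive repeats, e.g. [5, 5, 2, 2, 2, 7] -> [5, 2, 7]."""
    out = []
    for u in units:
        if not out or out[-1] != u:
            out.append(u)
    return out

def unit_edit_distance(ref, hyp):
    """Normalized Levenshtein distance between two discrete unit sequences.

    0.0 means the quantizer emitted identical units for the clean and the
    transformed signal; higher values indicate a less robust representation.
    """
    m, n = len(ref), len(hyp)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[m][n] / max(m, 1)

# Units of a clean utterance vs. the same utterance after a time-stretch:
clean = dedup([5, 5, 2, 2, 7, 7, 7])        # -> [5, 2, 7]
stretched = dedup([5, 5, 5, 2, 2, 2, 9, 7, 7])  # -> [5, 2, 9, 7]
print(unit_edit_distance(clean, stretched))  # one insertion over 3 units: 1/3
```

A perfectly augmentation-invariant quantizer would score 0.0 under any spoken-information-preserving transformation; the abstract's claim is that current state-of-the-art unit quantizers fall well short of this.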
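The iterative pseudo-labeling scheme can likewise be illustrated with a deliberately tiny sketch: pseudo-labels are computed on the clean signal, and the quantizer is then refit so that augmented views of the same frames map to the same units. Everything here (1-D frames, an additive-offset "augmentation", a two-entry codebook) is a toy stand-in for the paper's actual setup of speech features, signal transformations, and k-means units:

```python
def quantize(frames, codebook):
    """Assign each frame its nearest codebook entry (its discrete unit)."""
    return [min(range(len(codebook)), key=lambda k: abs(f - codebook[k]))
            for f in frames]

def refit(frames, labels, n_units):
    """Re-estimate each centroid as the mean of the frames assigned to it."""
    sums, counts = [0.0] * n_units, [0] * n_units
    for f, lab in zip(frames, labels):
        sums[lab] += f
        counts[lab] += 1
    return [sums[k] / counts[k] if counts[k] else 0.0 for k in range(n_units)]

def pseudo_label_round(clean, augment, codebook):
    """One round: pseudo-label the clean frames, then refit the codebook so
    augmented frames are pulled toward the same labels as their clean views."""
    labels = quantize(clean, codebook)
    augmented = [augment(f) for f in clean]
    return refit(clean + augmented, labels + labels, len(codebook))

clean = [0.0, 0.1, 0.9, 1.0]
augment = lambda f: f + 0.45   # crude stand-in for e.g. a pitch shift
codebook = [0.0, 1.0]

# Before: the augmentation flips one unit assignment.
print(quantize(clean, codebook))                          # [0, 0, 1, 1]
print(quantize([augment(f) for f in clean], codebook))    # [0, 1, 1, 1]

codebook = pseudo_label_round(clean, augment, codebook)

# After one round, clean and augmented views share the same units.
print(quantize(clean, codebook))                          # [0, 0, 1, 1]
print(quantize([augment(f) for f in clean], codebook))    # [0, 0, 1, 1]
```

In the paper's actual method the rounds iterate, the transformations form a set of signal-level augmentations, and the quantizer is a learned model rather than a 1-D codebook; the sketch only shows why refitting on pseudo-labels shared between views pushes the units toward augmentation invariance.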
Related papers
- dMel: Speech Tokenization made Simple
We show that discretizing mel-filterbank channels into discrete intensity bins produces a simple representation (dMel).
Our results demonstrate the effectiveness of dMel in achieving high performance on both tasks within a unified framework.
arXiv Detail & Related papers (2024-07-22T17:51:53Z)
- SpeechGPT-Gen: Scaling Chain-of-Information Speech Generation
Chain-of-Information Generation (CoIG) is a method for decoupling semantic and perceptual information in large-scale speech generation.
SpeechGPT-Gen is efficient in semantic and perceptual information modeling.
It markedly excels in zero-shot text-to-speech, zero-shot voice conversion, and speech-to-speech dialogue.
arXiv Detail & Related papers (2024-01-24T15:25:01Z)
- Generative Spoken Language Model based on continuous word-sized audio tokens
We introduce a Generative Spoken Language Model based on word-size continuous-valued audio embeddings.
The resulting model is the first generative language model based on word-size continuous embeddings.
arXiv Detail & Related papers (2023-10-08T16:46:14Z)
- Feature Normalization for Fine-tuning Self-Supervised Models in Speech Enhancement
Large, pre-trained representation models trained using self-supervised learning have gained popularity in various fields of machine learning.
In this paper, we investigate the feasibility of using pre-trained speech representation models for a downstream speech enhancement task.
Our proposed method enables significant improvements in speech quality compared to baselines when combined with various types of pre-trained speech models.
arXiv Detail & Related papers (2023-06-14T10:03:33Z)
- Bidirectional Representations for Low Resource Spoken Language Understanding
We propose a representation model to encode speech in bidirectional rich encodings.
The approach uses a masked language modelling objective to learn the representations.
We show that the performance of the resulting encodings is better than comparable models on multiple datasets.
arXiv Detail & Related papers (2022-11-24T17:05:16Z)
- Modeling Intensification for Sign Language Generation: A Computational Approach
End-to-end sign language generation models do not accurately represent the prosody in sign language.
We aim to improve the prosody in generated sign languages by modeling intensification in a data-driven manner.
We find that our efforts in intensification modeling yield better results when evaluated with automatic metrics.
arXiv Detail & Related papers (2022-03-18T01:13:21Z)
- Towards Language Modelling in the Speech Domain Using Sub-word Linguistic Units
We propose a novel LSTM-based generative speech LM based on linguistic units including syllables and phonemes.
With a limited dataset, orders of magnitude smaller than that required by contemporary generative models, our model closely approximates babbling speech.
We show the effect of training with auxiliary text LMs, multitask learning objectives, and auxiliary articulatory features.
arXiv Detail & Related papers (2021-10-31T22:48:30Z)
- SLM: Learning a Discourse Language Representation with Sentence Unshuffling
We introduce Sentence-level Language Modeling, a new pre-training objective for learning a discourse language representation.
We show that this feature of our model improves the performance of the original BERT by large margins.
arXiv Detail & Related papers (2020-10-30T13:33:41Z)
- Grounded Compositional Outputs for Adaptive Language Modeling
A language model's vocabulary, typically selected before training and fixed permanently thereafter, affects its size.
We propose a fully compositional output embedding layer for language models.
To our knowledge, the result is the first word-level language model with a size that does not depend on the training vocabulary.
arXiv Detail & Related papers (2020-09-24T07:21:14Z)
- Learning Spoken Language Representations with Neural Lattice Language Modeling
We propose a framework that trains neural lattice language models to provide contextualized representations for spoken language understanding tasks.
The proposed two-stage pre-training approach reduces the demands of speech data and has better efficiency.
arXiv Detail & Related papers (2020-07-06T10:38:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.