Masked Audio Generation using a Single Non-Autoregressive Transformer
- URL: http://arxiv.org/abs/2401.04577v2
- Date: Tue, 5 Mar 2024 09:12:35 GMT
- Title: Masked Audio Generation using a Single Non-Autoregressive Transformer
- Authors: Alon Ziv, Itai Gat, Gael Le Lan, Tal Remez, Felix Kreuk, Alexandre
Défossez, Jade Copet, Gabriel Synnaeve, Yossi Adi
- Abstract summary: MAGNeT is a masked generative sequence modeling method that operates directly over several streams of audio tokens.
We demonstrate the efficiency of MAGNeT for the task of text-to-music and text-to-audio generation.
We shed light on the importance of each component of MAGNeT and point to the trade-offs between autoregressive and non-autoregressive modeling.
- Score: 90.11646612273965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce MAGNeT, a masked generative sequence modeling method that
operates directly over several streams of audio tokens. Unlike prior work, MAGNeT
comprises a single-stage, non-autoregressive transformer. During training, we predict
spans of masked tokens obtained from a masking scheduler, while during inference we
gradually construct the output sequence over several decoding steps. To further
enhance the quality of the generated audio, we introduce a novel rescoring method in
which we leverage an external pre-trained model to rescore and rank predictions from
MAGNeT, which are then used in later decoding steps. Lastly, we explore a hybrid
version of MAGNeT that fuses autoregressive and non-autoregressive models: the first
few seconds are generated autoregressively while the rest of the sequence is decoded
in parallel. We demonstrate the efficiency of MAGNeT for text-to-music and
text-to-audio generation and conduct an extensive empirical evaluation, considering
both objective metrics and human studies. The proposed approach is comparable to the
evaluated baselines while being significantly faster (7x faster than the
autoregressive baseline). Through ablation studies and analysis, we shed light on the
importance of each of the components comprising MAGNeT and point to the trade-offs
between autoregressive and non-autoregressive modeling in terms of latency,
throughput, and generation quality. Samples are available on our demo page
https://pages.cs.huji.ac.il/adiyoss-lab/MAGNeT.
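
The decoding procedure described in the abstract can be made concrete with a short sketch. The following is a minimal, illustrative Python/PyTorch loop for MaskGIT-style iterative parallel decoding with an external rescorer; it is not the released MAGNeT implementation. The names `model`, `rescorer`, `cond`, `MASK_ID`, the confidence-blending weight `w`, and the cosine schedule are assumptions, and the sketch operates on a single token stream with per-token (rather than span) re-masking.

```python
import math
import torch

MASK_ID = 1024  # hypothetical id of the special mask token

def cosine_schedule(step: int, total_steps: int) -> float:
    """Fraction of positions that remain masked after decoding step `step`."""
    return math.cos(0.5 * math.pi * (step + 1) / total_steps)

@torch.no_grad()
def iterative_decode(model, rescorer, cond, seq_len, total_steps=20, w=0.5):
    """MaskGIT-style parallel decoding with external rescoring (sketch).

    model    -- non-autoregressive transformer: (tokens, cond) -> logits [1, T, V]
    rescorer -- external pre-trained model returning per-token scores [1, T]
    cond     -- conditioning tensor (e.g. a text embedding)
    """
    tokens = torch.full((1, seq_len), MASK_ID, dtype=torch.long)
    for step in range(total_steps):
        probs = model(tokens, cond).softmax(dim=-1)                      # [1, T, V]
        sampled = torch.distributions.Categorical(probs=probs).sample()  # [1, T]
        conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)       # [1, T]

        # Blend the generator's own confidence with the external model's score.
        conf = w * conf + (1.0 - w) * rescorer(sampled, cond)

        # Positions fixed in earlier steps keep their tokens and are never re-masked.
        fixed = tokens != MASK_ID
        sampled = torch.where(fixed, tokens, sampled)
        conf = conf.masked_fill(fixed, float("inf"))

        # Re-mask the least confident positions according to the schedule.
        n_masked = int(cosine_schedule(step, total_steps) * seq_len)
        if n_masked == 0:
            return sampled
        remask = conf.topk(n_masked, largest=False).indices              # [1, n_masked]
        tokens = sampled.clone()
        tokens[0, remask[0]] = MASK_ID
    return tokens
```

MAGNeT itself masks contiguous spans of tokens during training and decodes the multiple residual-codebook streams codebook by codebook; both details are omitted here for brevity, and the blending of model and rescorer scores above is only one plausible formulation.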
Related papers
- Autoregressive Speech Synthesis without Vector Quantization [135.4776759536272]
We present MELLE, a novel language modeling approach for text-to-speech synthesis (TTS) based on continuous-valued tokens.
MELLE autoregressively generates continuous mel-spectrogram frames directly from the text condition.
arXiv Detail & Related papers (2024-07-11T14:36:53Z)
- Autoregressive Diffusion Transformer for Text-to-Speech Synthesis [39.32761051774537]
We propose encoding audio as vector sequences in the continuous space $\mathbb{R}^d$ and autoregressively generating these sequences.
High-bitrate continuous speech representation enables almost flawless reconstruction, allowing our model to achieve nearly perfect speech editing.
arXiv Detail & Related papers (2024-06-08T18:57:13Z)
- Parallel Decoding via Hidden Transfer for Lossless Large Language Model Acceleration [54.897493351694195]
We propose a novel parallel decoding approach, namely hidden transfer, which decodes multiple successive tokens simultaneously in a single forward pass.
In terms of acceleration metrics, we outperform all the single-model acceleration techniques, including Medusa and Self-Speculative decoding.
arXiv Detail & Related papers (2024-04-18T09:17:06Z)
- Sequential Transfer Learning to Decode Heard and Imagined Timbre from fMRI Data [0.0]
We present a sequential transfer learning framework for transformers on functional Magnetic Resonance Imaging (fMRI) data.
In the first phase, we pre-train our stacked-encoder transformer architecture on Next Thought Prediction.
In the second phase, we fine-tune the models and train additional fresh models on the supervised task of predicting whether or not two sequences of fMRI data were recorded while listening to the same musical timbre.
arXiv Detail & Related papers (2023-05-22T16:58:26Z)
- Latent Autoregressive Source Separation [5.871054749661012]
This paper introduces vector-quantized Latent Autoregressive Source Separation (i.e., de-mixing an input signal into its constituent sources) without requiring additional gradient-based optimization or modifications of existing models.
Our separation method relies on the Bayesian formulation in which the autoregressive models are the priors, and a discrete (non-parametric) likelihood function is constructed by performing frequency counts over latent sums of addend tokens.
arXiv Detail & Related papers (2023-01-09T17:32:00Z)
- Masked Autoencoding for Scalable and Generalizable Decision Making [93.84855114717062]
MaskDP is a simple and scalable self-supervised pretraining method for reinforcement learning and behavioral cloning.
We find that a MaskDP model gains the capability of zero-shot transfer to new BC tasks, such as single and multiple goal reaching.
arXiv Detail & Related papers (2022-11-23T07:04:41Z)
- A Self-Paced Mixed Distillation Method for Non-Autoregressive Generation [135.84684279852098]
Non-autoregressive (NAR) models significantly underperform autoregressive (AR) models on various language generation tasks.
Among the NAR models, BANG is the first large-scale model pre-trained on an unlabeled English raw-text corpus.
We propose a novel self-paced mixed distillation method to further improve the generation quality of BANG.
arXiv Detail & Related papers (2022-05-23T09:54:53Z)
- MAE-AST: Masked Autoencoding Audio Spectrogram Transformer [11.814012909512307]
We propose a simple yet powerful improvement over the recent Self-Supervised Audio Spectrogram Transformer (SSAST) model for speech and audio classification.
We leverage the insight that the SSAST uses a very high masking ratio (75%) during pretraining, meaning that the vast majority of self-attention compute is performed on mask tokens.
We find that MAE-like pretraining can provide a 3x speedup and a 2x memory-usage reduction over the vanilla SSAST (a minimal sketch of this idea follows this entry).
arXiv Detail & Related papers (2022-03-30T22:06:13Z)
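
To make the compute saving behind the MAE-AST entry concrete, here is a minimal, self-contained PyTorch sketch of the MAE-style trick: the encoder attends only over the randomly kept (unmasked) spectrogram patches, so at a 75% masking ratio it processes roughly a quarter of the tokens. The class name, dimensions, and patching are assumptions for illustration and do not reflect the MAE-AST codebase.

```python
import torch
import torch.nn as nn

class MAEStyleEncoder(nn.Module):
    """Toy MAE-style encoder: self-attention runs only over visible patches."""

    def __init__(self, dim: int = 256, depth: int = 4, heads: int = 4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)

    def forward(self, patches: torch.Tensor, mask_ratio: float = 0.75):
        B, N, D = patches.shape
        n_keep = int(N * (1.0 - mask_ratio))
        # Independent random permutation per example; keep the first n_keep indices.
        keep_idx = torch.rand(B, N, device=patches.device).argsort(dim=1)[:, :n_keep]
        visible = patches.gather(1, keep_idx.unsqueeze(-1).expand(-1, -1, D))
        # Only the visible ~25% of patches go through self-attention.
        return self.encoder(visible), keep_idx

# Usage: a batch of 512 spectrogram-patch embeddings of dimension 256.
x = torch.randn(2, 512, 256)
latent, keep_idx = MAEStyleEncoder()(x)
print(latent.shape)  # torch.Size([2, 128, 256])
```

A decoder would then append learned mask tokens at the masked positions and reconstruct the original patches; that part is omitted here for brevity.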
- Cross-Thought for Sentence Encoder Pre-training [89.32270059777025]
Cross-Thought is a novel approach to pre-training sequence encoders.
We train a Transformer-based sequence encoder over a large set of short sequences.
Experiments on question answering and textual entailment tasks demonstrate that our pre-trained encoder can outperform state-of-the-art encoders.
arXiv Detail & Related papers (2020-10-07T21:02:41Z)