Diffsound: Discrete Diffusion Model for Text-to-sound Generation
- URL: http://arxiv.org/abs/2207.09983v2
- Date: Fri, 28 Apr 2023 07:45:43 GMT
- Title: Diffsound: Discrete Diffusion Model for Text-to-sound Generation
- Authors: Dongchao Yang, Jianwei Yu, Helin Wang, Wen Wang, Chao Weng, Yuexian
Zou, and Dong Yu
- Abstract summary: We propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder.
The framework first uses the decoder to transfer the text features extracted from the text encoder to a mel-spectrogram with the help of VQ-VAE, and then the vocoder is used to transform the generated mel-spectrogram into a waveform.
- Score: 78.4128796899781
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generating sound effects that humans want is an important topic. However,
there are few studies in this area. In this study, we
investigate generating sound conditioned on a text prompt and propose a novel
text-to-sound generation framework that consists of a text encoder, a Vector
Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder. The
framework first uses the decoder to transfer the text features extracted from
the text encoder to a mel-spectrogram with the help of VQ-VAE, and then the
vocoder is used to transform the generated mel-spectrogram into a waveform. We
found that the decoder significantly influences the generation performance.
Thus, we focus on designing a good decoder in this study. We begin with the
traditional autoregressive (AR) decoder, which has proved to be a state-of-the-art
method in previous sound generation work. However, the AR decoder predicts the
mel-spectrogram tokens one by one in order, which introduces unidirectional bias
and error accumulation. Moreover, with the AR decoder, the sound generation time
increases linearly with the sound duration.
To overcome the shortcomings introduced by AR decoders, we propose a
non-autoregressive decoder based on the discrete diffusion model, named
Diffsound. Specifically, Diffsound predicts all of the mel-spectrogram tokens in
one step and then refines the predicted tokens over the following steps, so that
the best prediction is obtained after several steps. Our experiments show that the
proposed Diffsound not only produces better text-to-sound generation results than
the AR decoder (MOS: 3.56 vs. 2.786) but also generates sound five times faster.
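To make the predict-then-refine idea concrete, the following is a minimal, self-contained sketch of such a non-autoregressive decoding loop. It is illustrative only: the TokenDenoiser module, the codebook size, the token count, and the confidence-based re-masking schedule are assumptions made for this sketch, not the authors' implementation. In the full system the text features come from the text encoder, the predicted token ids are decoded to a mel-spectrogram by the VQ-VAE decoder, and the vocoder turns that mel-spectrogram into a waveform.

```python
# Minimal sketch of a Diffsound-style non-autoregressive decoding loop.
# All names, shapes, and the re-masking schedule below are illustrative
# assumptions, not the authors' code.
import torch
import torch.nn as nn

VOCAB = 256        # assumed size of the VQ-VAE codebook
SEQ_LEN = 53       # assumed number of mel-spectrogram tokens per clip
MASK_ID = VOCAB    # extra "undecided" token id used during refinement


class TokenDenoiser(nn.Module):
    """Stand-in for the non-AR decoder: maps (partially masked tokens, text
    features) to a distribution over codebook entries for every position."""

    def __init__(self, d_model=128):
        super().__init__()
        self.tok_emb = nn.Embedding(VOCAB + 1, d_model)   # +1 for the mask token
        self.pos_emb = nn.Embedding(SEQ_LEN, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.text_proj = nn.Linear(d_model, d_model)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, tokens, text_feat):
        pos = torch.arange(tokens.size(1), device=tokens.device)
        h = self.tok_emb(tokens) + self.pos_emb(pos) + self.text_proj(text_feat)
        return self.head(self.backbone(h))                # (B, SEQ_LEN, VOCAB) logits


@torch.no_grad()
def predict_then_refine(denoiser, text_feat, num_steps=10):
    """Predict all token positions in parallel, then repeatedly re-mask the
    least confident positions and predict again (the confidence-based schedule
    is an assumption; the abstract only states predict-then-refine)."""
    B = text_feat.size(0)
    tokens = torch.full((B, SEQ_LEN), MASK_ID, dtype=torch.long)
    for step in range(num_steps):
        logits = denoiser(tokens, text_feat)
        conf, pred = logits.softmax(-1).max(-1)
        k = max(1, int((step + 1) / num_steps * SEQ_LEN))  # keep more tokens each step
        keep = conf.topk(k, dim=-1).indices
        tokens = torch.full_like(tokens, MASK_ID)
        tokens.scatter_(1, keep, pred.gather(1, keep))
    return pred                                            # final mel-spectrogram token ids


if __name__ == "__main__":
    # In the full pipeline, text_feat would come from the text encoder, the
    # token ids would be decoded to a mel-spectrogram by the VQ-VAE decoder,
    # and a vocoder would turn that mel-spectrogram into a waveform.
    denoiser = TokenDenoiser()
    text_feat = torch.randn(2, 1, 128)                     # dummy text features (B, 1, d_model)
    mel_tokens = predict_then_refine(denoiser, text_feat)
    print(mel_tokens.shape)                                # torch.Size([2, 53])
```

Because every position is predicted in parallel at each step, the number of decoder calls is fixed by the step count rather than growing with the sound duration, which is where the speed advantage over the AR decoder comes from.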
Related papers
- Hold Me Tight: Stable Encoder-Decoder Design for Speech Enhancement [1.4037575966075835]
1-D filters on raw audio are hard to train and often suffer from instabilities.
We address these problems with hybrid solutions, combining theory-driven and data-driven approaches.
arXiv Detail & Related papers (2024-08-30T15:49:31Z)
- Faster Diffusion: Rethinking the Role of the Encoder for Diffusion Model Inference [95.42299246592756]
We study the UNet encoder and empirically analyze the encoder features.
We find that encoder features change minimally, whereas the decoder features exhibit substantial variations across different time-steps.
We validate our approach on other tasks: text-to-video, personalized generation and reference-guided generation.
arXiv Detail & Related papers (2023-12-15T08:46:43Z) - Text-Driven Foley Sound Generation With Latent Diffusion Model [33.4636070590045]
Foley sound generation aims to synthesise the background sound for multimedia content.
We propose a diffusion model based system for Foley sound generation with text conditions.
arXiv Detail & Related papers (2023-06-17T14:16:24Z) - Decoder-Only or Encoder-Decoder? Interpreting Language Model as a
Regularized Encoder-Decoder [75.03283861464365]
The seq2seq task aims at generating the target sequence based on the given input source sequence.
Traditionally, most seq2seq tasks are solved with an encoder that encodes the source sequence and a decoder that generates the target text.
Recently, a number of new approaches have emerged that apply decoder-only language models directly to the seq2seq task.
arXiv Detail & Related papers (2023-04-08T15:44:29Z) - Masked Autoencoders that Listen [79.99280830830854]
This paper studies a simple extension of image-based Masked Autoencoders (MAE) to self-supervised representation learning from audio spectrograms.
Following the Transformer encoder-decoder design in MAE, our Audio-MAE first encodes audio spectrogram patches with a high masking ratio, feeding only the non-masked tokens through encoder layers.
The decoder then re-orders and decodes the encoded context padded with mask tokens, in order to reconstruct the input spectrogram (a minimal sketch of this masking-and-reconstruction flow appears after this list).
arXiv Detail & Related papers (2022-07-13T17:59:55Z)
- Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired Speech Data [145.95460945321253]
We introduce two pre-training tasks for the encoder-decoder network using acoustic units, i.e., pseudo codes.
The proposed Speech2C reduces the word error rate (WER) by a relative 19.2% over the method without decoder pre-training.
arXiv Detail & Related papers (2022-03-31T15:33:56Z)
- A Streamwise GAN Vocoder for Wideband Speech Coding at Very Low Bit Rate [8.312162364318235]
We present a GAN vocoder which is able to generate wideband speech waveforms from parameters coded at 1.6 kbit/s.
The proposed model is a modified version of the StyleMelGAN vocoder that can run in a frame-by-frame manner.
arXiv Detail & Related papers (2021-08-09T14:03:07Z)
- On Sparsifying Encoder Outputs in Sequence-to-Sequence Models [90.58793284654692]
We take Transformer as the testbed and introduce a layer of gates in-between the encoder and the decoder.
The gates are regularized using the expected value of the sparsity-inducing L0 penalty.
We investigate the effects of this sparsification on two machine translation and two summarization tasks.
arXiv Detail & Related papers (2020-04-24T16:57:52Z)
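The masking-and-reconstruction flow summarized in the Masked Autoencoders that Listen entry above can also be sketched briefly. The sketch is illustrative only: the patch size, embedding width, masking ratio, and tiny Transformer stacks are assumptions, not the Audio-MAE architecture; it only shows the mechanism of encoding the visible spectrogram patches, padding the latent sequence with mask tokens, restoring the original patch order, and reconstructing the patches.

```python
# Illustrative sketch of MAE-style masking on spectrogram patches; sizes and
# modules are assumptions, not the Audio-MAE architecture.
import torch
import torch.nn as nn

PATCHES, PATCH_DIM, D = 64, 256, 128   # assumed patch count, flattened patch size, width
MASK_RATIO = 0.8                       # high masking ratio, as in the summary above


class TinyMAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.patch_embed = nn.Linear(PATCH_DIM, D)
        enc_layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        dec_layer = nn.TransformerEncoderLayer(D, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=2)
        self.decoder = nn.TransformerEncoder(dec_layer, num_layers=1)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, D))
        self.head = nn.Linear(D, PATCH_DIM)                # reconstruct each patch

    def forward(self, patches):
        B, N, _ = patches.shape
        x = self.patch_embed(patches)
        # Encode only the visible (non-masked) patches.
        n_keep = int(N * (1 - MASK_RATIO))
        perm = torch.rand(B, N).argsort(dim=1)             # random per-sample shuffle
        keep = perm[:, :n_keep]
        visible = torch.gather(x, 1, keep.unsqueeze(-1).expand(-1, -1, D))
        latent = self.encoder(visible)
        # Pad with mask tokens, restore the original patch order, then decode.
        full = torch.cat([latent, self.mask_token.expand(B, N - n_keep, D)], dim=1)
        restore = perm.argsort(dim=1)
        full = torch.gather(full, 1, restore.unsqueeze(-1).expand(-1, -1, D))
        return self.head(self.decoder(full))               # reconstructed patches


if __name__ == "__main__":
    model = TinyMAE()
    mel_patches = torch.randn(2, PATCHES, PATCH_DIM)        # dummy spectrogram patches
    print(model(mel_patches).shape)                         # torch.Size([2, 64, 256])
```

Training would add a reconstruction loss, typically computed on the masked patches only; that part is omitted here.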