SoundStorm: Efficient Parallel Audio Generation
- URL: http://arxiv.org/abs/2305.09636v1
- Date: Tue, 16 May 2023 17:41:25 GMT
- Title: SoundStorm: Efficient Parallel Audio Generation
- Authors: Zalán Borsos, Matt Sharifi, Damien Vincent, Eugene Kharitonov, Neil Zeghidour, Marco Tagliasacchi
- Abstract summary: We present SoundStorm, a model for efficient, non-autoregressive audio generation.
SoundStorm receives as input the semantic tokens of AudioLM and relies on bidirectional attention and confidence-based parallel decoding.
We demonstrate the ability of our model to scale audio generation to longer sequences by synthesizing high-quality, natural dialogue segments.
- Score: 27.121920017380273
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present SoundStorm, a model for efficient, non-autoregressive audio
generation. SoundStorm receives as input the semantic tokens of AudioLM, and
relies on bidirectional attention and confidence-based parallel decoding to
generate the tokens of a neural audio codec. Compared to the autoregressive
generation approach of AudioLM, our model produces audio of the same quality
and with higher consistency in voice and acoustic conditions, while being two
orders of magnitude faster. SoundStorm generates 30 seconds of audio in 0.5
seconds on a TPU-v4. We demonstrate the ability of our model to scale audio
generation to longer sequences by synthesizing high-quality, natural dialogue
segments, given a transcript annotated with speaker turns and a short prompt
with the speakers' voices.
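The confidence-based parallel decoding mentioned in the abstract follows the MaskGIT recipe: every masked position is predicted in parallel each round, the most confident predictions are kept, and the rest are re-masked for the next round. Below is a minimal NumPy sketch of that loop; `predict_logits` is a hypothetical stand-in for the bidirectional-attention model, and conditioning on semantic tokens is omitted for brevity:

```python
import numpy as np

MASK = -1  # sentinel id for not-yet-decoded codec-token positions

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def parallel_decode(predict_logits, seq_len, vocab_size, n_rounds=8, seed=0):
    """Confidence-based parallel decoding (MaskGIT-style) for one codec level.

    predict_logits: callable taking an int array of shape (seq_len,) with MASK
        at unknown positions and returning logits of shape (seq_len, vocab_size).
    """
    rng = np.random.default_rng(seed)
    tokens = np.full(seq_len, MASK, dtype=np.int64)
    for r in range(n_rounds):
        probs = softmax(predict_logits(tokens))            # (seq_len, vocab)
        sampled = np.array([rng.choice(vocab_size, p=p) for p in probs])
        conf = probs[np.arange(seq_len), sampled]          # confidence per sample
        conf[tokens != MASK] = np.inf                      # fixed tokens stay fixed
        # Cosine schedule: finalize progressively more tokens each round.
        n_fixed = seq_len - int(np.floor(seq_len * np.cos(np.pi * (r + 1) / (2 * n_rounds))))
        n_fixed = min(seq_len, max(n_fixed, int((tokens != MASK).sum()) + 1))
        fixed = np.argsort(-conf)[:n_fixed]                # most confident positions
        new_tokens = np.full(seq_len, MASK, dtype=np.int64)
        new_tokens[fixed] = np.where(tokens[fixed] != MASK, tokens[fixed], sampled[fixed])
        tokens = new_tokens
    return tokens

# Toy check with a random "model" (real conditioning omitted):
toy = lambda toks: np.random.default_rng(42).normal(size=(toks.shape[0], 64))
codes = parallel_decode(toy, seq_len=16, vocab_size=64)
```

Per the abstract, SoundStorm applies this kind of decoding to the tokens of a neural audio codec; roughly speaking, the paper proceeds level by level over the codec's residual vector quantizer, with finer levels conditioned on the levels already decoded.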
Related papers
- Low Frame-rate Speech Codec: a Codec Designed for Fast High-quality Speech LLM Training and Inference [10.909997817643905]
We present the Low Frame-rate Speech Codec (LFSC): a neural audio codec that leverages finite scalar quantization and adversarial training with large speech language models to achieve high-quality audio compression at 1.89 kbps and 21.5 frames per second (a minimal FSQ sketch follows this entry).
We demonstrate that our novel codec can make the inference of LLM-based text-to-speech models around three times faster while improving intelligibility and producing quality comparable to previous models.
arXiv Detail & Related papers (2024-09-18T16:39:10Z)
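Finite scalar quantization, named in the LFSC summary above, replaces a learned VQ codebook with per-dimension rounding onto a small fixed grid; the implicit codebook size is the product of the per-dimension level counts. A minimal sketch under assumed level counts (LFSC's actual configuration may differ):

```python
import numpy as np

def fsq_quantize(z, levels=(5, 5, 5, 5)):
    """Finite scalar quantization: bound each latent dimension with tanh and
    round it to one of `levels[d]` uniformly spaced values.

    The level counts here are illustrative (odd counts keep the rounding
    symmetric; even counts need a half-step offset).
    """
    z = np.asarray(z, dtype=np.float64)
    half = (np.array(levels, dtype=np.float64) - 1.0) / 2.0
    bounded = np.tanh(z) * half          # dim d now lies in (-half[d], half[d])
    quantized = np.round(bounded)        # snap to the integer grid
    # At training time a straight-through estimator keeps this differentiable:
    #   quantized = bounded + stop_gradient(round(bounded) - bounded)
    return quantized / half              # normalized codes in [-1, 1]

def fsq_index(q, levels=(5, 5, 5, 5)):
    """Pack one quantized vector into a single mixed-radix integer code."""
    half = (np.array(levels) - 1) / 2
    digits = np.round(q * half + half).astype(int)   # each in 0..levels[d]-1
    index = 0
    for d, base in zip(digits, levels):
        index = index * base + d
    return int(index)                    # codebook size = prod(levels) = 625

q = fsq_quantize(np.random.default_rng(0).normal(size=4))
print(q, fsq_index(q))
```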
- Frieren: Efficient Video-to-Audio Generation Network with Rectified Flow Matching [51.70360630470263]
Video-to-audio (V2A) generation aims to synthesize content-matching audio from silent video.
We propose Frieren, a V2A model based on rectified flow matching (a minimal sketch follows this entry).
Experiments indicate that Frieren achieves state-of-the-art performance in both generation quality and temporal alignment.
arXiv Detail & Related papers (2024-06-01T06:40:22Z)
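Rectified flow matching, which Frieren builds on, trains a network to predict the constant velocity of a straight-line path between noise and data, then samples by integrating the learned ODE. A hedged sketch of the objective and an Euler sampler; `model` is a placeholder, not Frieren's actual API, and video conditioning is omitted:

```python
import numpy as np

def rectified_flow_loss(model, x1, rng):
    """One evaluation of the rectified-flow-matching objective.

    model: callable (x_t, t) -> predicted velocity, same shape as x_t.
    x1: a batch of data samples (e.g. latent audio features), shape (B, D).
    """
    x0 = rng.normal(size=x1.shape)             # noise endpoint of the path
    t = rng.uniform(size=(x1.shape[0], 1))     # per-sample time in [0, 1]
    x_t = (1.0 - t) * x0 + t * x1              # straight-line interpolation
    v_target = x1 - x0                         # constant velocity along the line
    v_pred = model(x_t, t)
    return np.mean((v_pred - v_target) ** 2)   # MSE flow-matching loss

def sample(model, shape, n_steps=25, seed=0):
    """Euler integration of the learned ODE from noise (t=0) to data (t=1)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=shape)
    dt = 1.0 / n_steps
    for i in range(n_steps):
        t = np.full((shape[0], 1), i * dt)
        x = x + dt * model(x, t)
    return x
```

The straightened paths are what allow few-step sampling, which is where the entry's efficiency claim comes from.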
- C3LLM: Conditional Multimodal Content Generation Using Large Language Models [66.11184017840688]
We introduce C3LLM, a novel framework combining the three tasks of video-to-audio, audio-to-text, and text-to-audio generation.
C3LLM adapts the Large Language Model (LLM) structure as a bridge for aligning different modalities.
Our method combines the previously separate tasks of audio understanding, video-to-audio generation, and text-to-audio generation into one unified model.
arXiv Detail & Related papers (2024-05-25T09:10:12Z)
- Efficient Parallel Audio Generation using Group Masked Language Modeling [13.82115484420239]
We present a fast and high-quality language model for parallel audio generation, built on Group-Masked Language Modeling (G-MLM) and Group Iterative Parallel Decoding (G-IPD).
arXiv Detail & Related papers (2024-01-02T08:42:48Z)
- Audiobox: Unified Audio Generation with Natural Language Prompts [37.39834044113061]
This paper presents Audiobox, a unified model based on flow-matching that is capable of generating various audio modalities.
We design description-based and example-based prompting to enhance controllability and unify speech and sound generation paradigms.
Audiobox sets new benchmarks on speech and sound generation and unlocks new methods for generating audio with novel vocal and acoustic styles.
arXiv Detail & Related papers (2023-12-25T22:24:49Z)
- Make-An-Audio: Text-To-Audio Generation with Prompt-Enhanced Diffusion Models [65.18102159618631]
Multimodal generative modeling has created milestones in text-to-image and text-to-video generation.
Its application to audio still lags behind for two main reasons: the lack of large-scale datasets with high-quality text-audio pairs, and the complexity of modeling long continuous audio data.
We propose Make-An-Audio with a prompt-enhanced diffusion model that addresses these gaps.
arXiv Detail & Related papers (2023-01-30T04:44:34Z)
- LA-VocE: Low-SNR Audio-visual Speech Enhancement using Neural Vocoders [53.30016986953206]
We propose LA-VocE, a new two-stage approach that predicts mel-spectrograms from noisy audio-visual speech via a transformer-based architecture.
We train and evaluate our framework on thousands of speakers and 11+ different languages, and study our model's ability to adapt to different levels of background noise and speech interference.
arXiv Detail & Related papers (2022-11-20T15:27:55Z)
- AudioGen: Textually Guided Audio Generation [116.57006301417306]
We tackle the problem of generating audio samples conditioned on descriptive text captions.
In this work, we propose AudioGen, an auto-regressive model that generates audio samples conditioned on text inputs.
arXiv Detail & Related papers (2022-09-30T10:17:05Z)
- AudioLM: a Language Modeling Approach to Audio Generation [59.19364975706805]
We introduce AudioLM, a framework for high-quality audio generation with long-term consistency.
We show how existing audio tokenizers provide different trade-offs between reconstruction quality and long-term structure.
We demonstrate how our approach extends beyond speech by generating coherent piano music continuations.
arXiv Detail & Related papers (2022-09-07T13:40:08Z)