A Streamwise GAN Vocoder for Wideband Speech Coding at Very Low Bit Rate
- URL: http://arxiv.org/abs/2108.04051v1
- Date: Mon, 9 Aug 2021 14:03:07 GMT
- Title: A Streamwise GAN Vocoder for Wideband Speech Coding at Very Low Bit Rate
- Authors: Ahmed Mustafa, Jan Büthe, Srikanth Korse, Kishan Gupta, Guillaume
Fuchs, Nicola Pia
- Abstract summary: We present a GAN vocoder which is able to generate wideband speech waveforms from parameters coded at 1.6 kbit/s.
The proposed model is a modified version of the StyleMelGAN vocoder that can run in a frame-by-frame manner.
- Score: 8.312162364318235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, GAN vocoders have seen rapid progress in speech synthesis, starting
to outperform autoregressive models in perceptual quality with much higher
generation speed. However, autoregressive vocoders are still the common choice
for neural generation of speech signals coded at very low bit rates. In this
paper, we present a GAN vocoder which is able to generate wideband speech
waveforms from parameters coded at 1.6 kbit/s. The proposed model is a modified
version of the StyleMelGAN vocoder that can run in a frame-by-frame manner,
making it suitable for streaming applications. The experimental results show
that the proposed model significantly outperforms prior autoregressive vocoders
like LPCNet for very low bit rate speech coding, with computational complexity
of about 5 GMACs, providing a new state of the art in this domain. Moreover,
this streamwise adversarial vocoder delivers quality competitive to advanced
speech codecs such as EVS at 5.9 kbit/s on clean speech, which motivates
further usage of feed-forward fully-convolutional models for low bit rate
speech coding.
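The frame-by-frame operation that makes the vocoder streamable can be illustrated with a toy causal convolution that caches its left context, so that output produced one frame at a time matches offline processing of the whole utterance. This is a minimal sketch of the streaming principle, not the actual StyleMelGAN architecture; the class name, kernel, and frame sizes are illustrative.

```python
import numpy as np

class StreamingCausalConv1d:
    """Causal 1-D convolution with an internal left-context cache,
    so output can be generated frame by frame for streaming use."""

    def __init__(self, kernel):
        self.kernel = np.asarray(kernel, dtype=float)
        # cache holds the last (len(kernel) - 1) input samples
        self.cache = np.zeros(len(self.kernel) - 1)

    def process_frame(self, frame):
        # prepend cached context, convolve, then refresh the cache
        x = np.concatenate([self.cache, frame])
        y = np.convolve(x, self.kernel, mode="valid")
        self.cache = x[-(len(self.kernel) - 1):]
        return y

def offline_causal_conv(signal, kernel):
    """Reference: causal convolution over the whole signal at once."""
    padded = np.concatenate([np.zeros(len(kernel) - 1), signal])
    return np.convolve(padded, kernel, mode="valid")

rng = np.random.default_rng(0)
signal = rng.standard_normal(160 * 4)      # four 10 ms frames at 16 kHz
kernel = rng.standard_normal(9)

conv = StreamingCausalConv1d(kernel)
streamed = np.concatenate([conv.process_frame(f)
                           for f in signal.reshape(4, 160)])
assert np.allclose(streamed, offline_causal_conv(signal, kernel))
```

Because the cache always holds exactly the receptive-field context, the streaming path introduces no extra algorithmic latency beyond the frame size, which is the property that makes fully-convolutional vocoders attractive for real-time coding.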
Related papers
- PromptCodec: High-Fidelity Neural Speech Codec using Disentangled Representation Learning based Adaptive Feature-aware Prompt Encoders [6.375882733058943]
We propose PromptCodec, a novel end-to-end neural speech codec using feature-aware prompt encoders.
Our proposed PromptCodec consistently outperforms state-of-the-art neural speech codec models under all tested conditions.
arXiv Detail & Related papers (2024-04-03T13:00:08Z) - Framewise WaveGAN: High Speed Adversarial Vocoder in Time Domain with
Very Low Computational Complexity [23.49462995118466]
Framewise WaveGAN vocoder achieves higher quality than auto-regressive maximum-likelihood vocoders such as LPCNet at a very low complexity of 1.2 GFLOPS.
This makes GAN vocoders more practical on edge and low-power devices.
arXiv Detail & Related papers (2022-12-08T19:38:34Z) - Diffsound: Discrete Diffusion Model for Text-to-sound Generation [78.4128796899781]
We propose a novel text-to-sound generation framework that consists of a text encoder, a Vector Quantized Variational Autoencoder (VQ-VAE), a decoder, and a vocoder.
The framework first uses the decoder to transfer the text features extracted from the text encoder to a mel-spectrogram with the help of VQ-VAE, and then the vocoder is used to transform the generated mel-spectrogram into a waveform.
arXiv Detail & Related papers (2022-07-20T15:41:47Z) - Ultra-Low-Bitrate Speech Coding with Pretrained Transformers [28.400364949575103]
Speech coding facilitates the transmission of speech over low-bandwidth networks with minimal distortion.
We use pretrained Transformers, capable of exploiting long-range dependencies in the input signal due to their inductive bias.
arXiv Detail & Related papers (2022-07-05T18:52:11Z) - Pre-Training Transformer Decoder for End-to-End ASR Model with Unpaired
Speech Data [145.95460945321253]
We introduce two pre-training tasks for the encoder-decoder network using acoustic units, i.e., pseudo codes.
The proposed Speech2C can relatively reduce the word error rate (WER) by 19.2% over the method without decoder pre-training.
arXiv Detail & Related papers (2022-03-31T15:33:56Z) - NeuralDPS: Neural Deterministic Plus Stochastic Model with Multiband
Excitation for Noise-Controllable Waveform Generation [67.96138567288197]
We propose a novel neural vocoder named NeuralDPS which can retain high speech quality and acquire high synthesis efficiency and noise controllability.
It generates waveforms at least 280 times faster than the WaveNet vocoder.
It is also 28% faster than WaveGAN's synthesis efficiency on a single core.
arXiv Detail & Related papers (2022-03-05T08:15:29Z) - Adversarial Neural Networks for Error Correcting Codes [76.70040964453638]
We introduce a general framework to boost the performance and applicability of machine learning (ML) models.
We propose to combine ML decoders with a competing discriminator network that tries to distinguish between codewords and noisy words.
Our framework is game-theoretic, motivated by generative adversarial networks (GANs)
arXiv Detail & Related papers (2021-12-21T19:14:44Z) - SoundStream: An End-to-End Neural Audio Codec [78.94923131038682]
We present SoundStream, a novel neural audio codec that can efficiently compress speech, music and general audio.
SoundStream relies on a fully convolutional encoder/decoder network and a residual vector quantizer, which are trained jointly end-to-end.
We are able to perform joint compression and enhancement either at the encoder or at the decoder side with no additional latency.
arXiv Detail & Related papers (2021-07-07T15:45:42Z) - Scalable and Efficient Neural Speech Coding [24.959825692325445]
This work presents a scalable and efficient neural waveform codec (NWC) for speech compression.
The proposed CNN autoencoder also defines quantization and coding as a trainable module.
Compared to other autoregressive decoder-based neural speech codecs, our decoder has a significantly smaller architecture.
arXiv Detail & Related papers (2021-03-27T00:10:16Z) - Low Bit-Rate Wideband Speech Coding: A Deep Generative Model based
Approach [4.02517560480215]
Traditional low bit-rate speech coding approaches only handle narrowband speech at 8 kHz.
This paper presents a new approach based on vector quantization (VQ) of mel-frequency cepstral coefficients (MFCCs).
It provides better speech quality than the state-of-the-art classic MELPe codec at a lower bit-rate.
arXiv Detail & Related papers (2021-02-04T14:37:16Z) - Streaming automatic speech recognition with the transformer model [59.58318952000571]
We propose a transformer based end-to-end ASR system for streaming ASR.
We apply time-restricted self-attention for the encoder and triggered attention for the encoder-decoder attention mechanism.
Our proposed streaming transformer architecture achieves 2.8% and 7.2% WER for the "clean" and "other" test data of LibriSpeech.
arXiv Detail & Related papers (2020-01-08T18:58:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.