Speaking in Wavelet Domain: A Simple and Efficient Approach to Speed up
Speech Diffusion Model
- URL: http://arxiv.org/abs/2402.10642v1
- Date: Fri, 16 Feb 2024 12:43:01 GMT
- Title: Speaking in Wavelet Domain: A Simple and Efficient Approach to Speed up
Speech Diffusion Model
- Authors: Xiangyu Zhang, Daijiao Liu, Hexin Liu, Qiquan Zhang, Hanyu Meng,
Leibny Paola Garcia, Eng Siong Chng, Lina Yao
- Abstract summary: Denoising Diffusion Probabilistic Models (DDPMs) have attained leading performances across a diverse range of generative tasks.
We propose an inquiry: is it possible to enhance the training/inference speed and performance of DDPMs by modifying the speech signal itself?
In this paper, we double the training and inference speed of Speech DDPMs by simply redirecting the generative target to the wavelet domain.
- Score: 32.09697176638031
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, Denoising Diffusion Probabilistic Models (DDPMs) have attained
leading performances across a diverse range of generative tasks. However, in
the field of speech synthesis, although DDPMs exhibit impressive performance,
their long training duration and substantial inference costs hinder practical
deployment. Existing approaches primarily focus on enhancing inference speed,
while approaches to accelerate training, a key factor in the costs associated
with adding or customizing voices, often necessitate complex modifications to
the model, compromising their universal applicability. To address the
aforementioned challenges, we propose an inquiry: is it possible to enhance the
training/inference speed and performance of DDPMs by modifying the speech
signal itself? In this paper, we double the training and inference speed of
Speech DDPMs by simply redirecting the generative target to the wavelet domain.
This method not only achieves comparable or superior performance to the
original model in speech synthesis tasks but also demonstrates its versatility.
By investigating and utilizing different wavelet bases, our approach proves
effective not just in speech synthesis, but also in speech enhancement.
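The mechanism behind the claimed 2x speedup is that a single-level discrete wavelet transform (DWT) splits a length-N waveform into approximation and detail coefficient sequences of length N/2 each, so the diffusion model operates on half-length targets; the original waveform is recovered losslessly by the inverse transform. The sketch below illustrates this with the Haar wavelet in plain Python; it is an illustration of the DWT length-halving property only, not the authors' implementation, and the paper itself investigates several different wavelet bases.

```python
import math

def haar_dwt(signal):
    """Single-level Haar discrete wavelet transform.

    Splits a length-N signal (N even) into approximation (low-pass)
    and detail (high-pass) coefficients, each of length N/2.
    """
    assert len(signal) % 2 == 0, "signal length must be even"
    s = 1.0 / math.sqrt(2.0)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    """Inverse Haar transform: perfectly reconstructs the original signal."""
    s = 1.0 / math.sqrt(2.0)
    signal = []
    for a, d in zip(approx, detail):
        signal.append(s * (a + d))  # even-indexed sample
        signal.append(s * (a - d))  # odd-indexed sample
    return signal

# A toy "waveform" of 8 samples: a generative model targeting the wavelet
# domain would produce the two half-length coefficient sequences instead
# of the full-length waveform.
x = [0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0, -1.0]
cA, cD = haar_dwt(x)
assert len(cA) == len(cD) == len(x) // 2                 # sequence length halved
recon = haar_idwt(cA, cD)
assert all(abs(a - b) < 1e-9 for a, b in zip(x, recon))  # lossless round trip
```

Because the round trip is lossless, generating coefficients instead of samples trades no signal fidelity for the shorter sequence length.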
Related papers
- Pre-training Feature Guided Diffusion Model for Speech Enhancement [37.88469730135598]
Speech enhancement significantly improves the clarity and intelligibility of speech in noisy environments.
We introduce a novel pretraining feature-guided diffusion model tailored for efficient speech enhancement.
arXiv Detail & Related papers (2024-06-11T18:22:59Z)
- uSee: Unified Speech Enhancement and Editing with Conditional Diffusion Models [57.71199494492223]
We propose a Unified Speech Enhancement and Editing (uSee) model with conditional diffusion models to handle various tasks at the same time in a generative manner.
Our experiments show that our proposed uSee model can achieve superior performance in both speech denoising and dereverberation compared to other related generative speech enhancement models.
arXiv Detail & Related papers (2023-10-02T04:36:39Z)
- High-Fidelity Speech Synthesis with Minimal Supervision: All Using Diffusion Models [56.00939852727501]
Minimally-supervised speech synthesis decouples TTS by combining two types of discrete speech representations.
Non-autoregressive framework enhances controllability, and duration diffusion model enables diversified prosodic expression.
arXiv Detail & Related papers (2023-09-27T09:27:03Z)
- Diffusion Conditional Expectation Model for Efficient and Robust Target Speech Extraction [73.43534824551236]
We propose an efficient generative approach named Diffusion Conditional Expectation Model (DCEM) for Target Speech Extraction (TSE).
It can handle multi- and single-speaker scenarios in both noisy and clean conditions.
Our method outperforms conventional methods in terms of both intrusive and non-intrusive metrics.
arXiv Detail & Related papers (2023-09-25T04:58:38Z)
- Unsupervised speech enhancement with diffusion-based generative models [0.0]
We introduce an alternative approach that operates in an unsupervised manner, leveraging the generative power of diffusion models.
We develop a posterior sampling methodology for speech enhancement by combining the learnt clean speech prior with a noise model for speech signal inference.
We show promising results compared to a recent variational auto-encoder (VAE)-based unsupervised approach and a state-of-the-art diffusion-based supervised method.
arXiv Detail & Related papers (2023-09-19T09:11:31Z)
- Robust Automatic Speech Recognition via WavAugment Guided Phoneme Adversarial Training [20.33516009339207]
We propose a novel WavAugment Guided Phoneme Adversarial Training (wapat).
wapat uses adversarial examples in phoneme space as augmentation to make the model invariant to minor fluctuations in phoneme representation.
In addition, wapat utilizes the phoneme representation of augmented samples to guide the generation of adversaries, which helps to find more stable and diverse gradient-directions.
arXiv Detail & Related papers (2023-07-24T03:07:40Z)
- UnDiff: Unsupervised Voice Restoration with Unconditional Diffusion Model [1.0874597293913013]
UnDiff is a diffusion probabilistic model capable of solving various speech inverse tasks.
It can be adapted to different tasks including degradation inversion, neural vocoding, and source separation.
arXiv Detail & Related papers (2023-06-01T14:22:55Z)
- A Study on Speech Enhancement Based on Diffusion Probabilistic Model [63.38586161802788]
We propose a diffusion probabilistic model-based speech enhancement model (DiffuSE) that aims to recover clean speech signals from noisy signals.
The experimental results show that DiffuSE yields performance that is comparable to related audio generative models on the standardized Voice Bank corpus task.
arXiv Detail & Related papers (2021-07-25T19:23:18Z)
- Time-domain Speech Enhancement with Generative Adversarial Learning [53.74228907273269]
This paper proposes a new framework called Time-domain Speech Enhancement Generative Adversarial Network (TSEGAN).
TSEGAN is an extension of the generative adversarial network (GAN) in time-domain with metric evaluation to mitigate the scaling problem.
In addition, we provide a new method based on objective function mapping for the theoretical analysis of the performance of Metric GAN.
arXiv Detail & Related papers (2021-03-30T08:09:49Z)
- Real Time Speech Enhancement in the Waveform Domain [99.02180506016721]
We present a causal speech enhancement model working on the raw waveform that runs in real-time on a laptop CPU.
The proposed model is based on an encoder-decoder architecture with skip-connections.
It is capable of removing various kinds of background noise including stationary and non-stationary noises.
arXiv Detail & Related papers (2020-06-23T09:19:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.