Continuous Modeling of the Denoising Process for Speech Enhancement
Based on Deep Learning
- URL: http://arxiv.org/abs/2309.09270v2
- Date: Sun, 7 Jan 2024 15:52:31 GMT
- Title: Continuous Modeling of the Denoising Process for Speech Enhancement
Based on Deep Learning
- Authors: Zilu Guo, Jun Du, Chin-Hui Lee
- Abstract summary: We use a state variable to indicate the denoising process.
A UNet-like neural network learns to estimate every state variable sampled from the continuous denoising process.
Experimental results indicate that preserving a small amount of noise in the clean target benefits speech enhancement.
- Score: 61.787485727134424
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we explore a continuous modeling approach for
deep-learning-based speech enhancement, focusing on the denoising process. We
use a state variable to indicate the denoising process. The starting state is
noisy speech and the ending state is clean speech. The noise component of the
state variable decreases as the state index advances, until it reaches zero.
During training, a UNet-like neural network learns to estimate every state
variable sampled from the continuous denoising process. In testing, we feed
the network a controlling factor, embedded as a scalar ranging from zero to
one, which lets us control the level of noise reduction. This
approach enables controllable speech enhancement and is adaptable to various
application scenarios. Experimental results indicate that preserving a small
amount of noise in the clean target benefits speech enhancement, as evidenced
by improvements in both objective speech measures and automatic speech
recognition performance.
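To make the training scheme concrete, here is a minimal sketch of one plausible reading of the abstract. The linear interpolation between noisy and clean speech, the toy encoder/decoder standing in for the UNet-like network, and all layer sizes are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch (assumptions, not the authors' code): intermediate states are
# modeled as linear interpolations between noisy and clean speech, and the
# state index s conditions a toy encoder/decoder via a learned embedding.
import torch
import torch.nn as nn

def sample_state(noisy, clean, s):
    """State variable x_s: s=0 gives noisy speech, s=1 gives clean speech;
    the residual noise component (noisy - clean) shrinks linearly to zero."""
    s = s.view(-1, 1, 1)  # broadcast over (batch, channel, time)
    return clean + (1.0 - s) * (noisy - clean)

class ConditionedDenoiser(nn.Module):
    """Toy 1-D conv encoder/decoder standing in for the UNet-like network;
    the scalar state index s in [0, 1] is embedded into the bottleneck."""
    def __init__(self, hidden=32):
        super().__init__()
        self.enc = nn.Conv1d(1, hidden, 9, stride=2, padding=4)
        self.cond = nn.Sequential(nn.Linear(1, hidden), nn.SiLU())
        self.dec = nn.ConvTranspose1d(hidden, 1, 9, stride=2,
                                      padding=4, output_padding=1)

    def forward(self, noisy, s):
        h = torch.relu(self.enc(noisy))
        h = h + self.cond(s.view(-1, 1)).unsqueeze(-1)  # broadcast over time
        return self.dec(h)

# One training step: map noisy speech to a state sampled at a random point
# of the continuous denoising trajectory.
model = ConditionedDenoiser()
noisy, clean = torch.randn(4, 1, 16000), torch.randn(4, 1, 16000)
s = torch.rand(4)  # random state index per example
loss = nn.functional.mse_loss(model(noisy, s), sample_state(noisy, clean, s))
loss.backward()
```
At test time, fixing s below one leaves a proportional amount of residual noise in the output, which is the controllable behaviour the abstract describes.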
Related papers
- Large Language Models are Efficient Learners of Noise-Robust Speech
Recognition [65.95847272465124]
Recent advances in large language models (LLMs) have promoted generative error correction (GER) for automatic speech recognition (ASR).
In this work, we extend the benchmark to noisy conditions and investigate if we can teach LLMs to perform denoising for GER.
Experiments on various latest LLMs demonstrate our approach achieves a new breakthrough with up to 53.9% correction improvement in terms of word error rate.
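As an illustration of the GER setup (the prompt wording, the `ger_prompt` helper, and the example hypotheses below are hypothetical, not taken from the paper), an LLM can be asked to map an ASR N-best list to a corrected transcript:
```python
# Hypothetical sketch of generative error correction (GER): the LLM receives
# the ASR N-best list for a noisy utterance and infers the true transcription.
def ger_prompt(nbest):
    hyps = "\n".join(f"{i + 1}. {h}" for i, h in enumerate(nbest))
    return ("Below are N-best hypotheses from a speech recognizer for one "
            "noisy utterance. Infer the most likely true transcription.\n"
            f"{hyps}\nTranscription:")

print(ger_prompt(["it's an ice day", "it's a nice stay", "it's a nice day"]))
```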
arXiv Detail & Related papers (2024-01-19T01:29:27Z)
- Diffusion-based speech enhancement with a weighted generative-supervised learning loss [0.0]
Diffusion-based generative models have recently gained attention in speech enhancement (SE).
We propose augmenting the original diffusion training objective with a mean squared error (MSE) loss, measuring the discrepancy between estimated enhanced speech and ground-truth clean speech.
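A one-function sketch of such a weighted objective, assuming the standard noise-prediction form of the diffusion loss and an illustrative weight `lam`:
```python
# Weighted generative-supervised loss (sketch): diffusion noise-matching term
# plus an MSE between the enhanced estimate and clean speech. The weight `lam`
# and the exact form of the diffusion term are assumptions.
import torch.nn.functional as F

def weighted_se_loss(pred_noise, true_noise, enhanced, clean, lam=0.5):
    diffusion_loss = F.mse_loss(pred_noise, true_noise)  # generative term
    supervised_loss = F.mse_loss(enhanced, clean)        # supervised term
    return diffusion_loss + lam * supervised_loss
```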
arXiv Detail & Related papers (2023-09-19T09:13:35Z)
- DiffSED: Sound Event Detection with Denoising Diffusion [70.18051526555512]
We reformulate the SED problem by taking a generative learning perspective.
Specifically, we aim to generate sound temporal boundaries from noisy proposals in a denoising diffusion process.
During training, our model learns to reverse the noising process by converting noisy latent queries to the groundtruth versions.
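A rough sketch of that training step, with the noise schedule and the tiny boundary denoiser assumed for illustration (audio conditioning is omitted for brevity):
```python
# DiffusionDet-style training sketch (assumed details): corrupt ground-truth
# event boundaries with scheduled Gaussian noise, then learn to map the noisy
# boundary queries back to the ground truth.
import torch
import torch.nn as nn

boundaries = torch.rand(8, 2).sort(dim=-1).values    # (start, end) in [0, 1]
t = torch.randint(0, 1000, (8,))                     # diffusion step per sample
alpha_bar = torch.cos(t / 1000 * torch.pi / 2) ** 2  # cosine noise schedule
noisy = (alpha_bar.sqrt()[:, None] * boundaries
         + (1 - alpha_bar).sqrt()[:, None] * torch.randn(8, 2))

denoiser = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 2))
loss = nn.functional.mse_loss(denoiser(noisy), boundaries)  # reverse the noising
loss.backward()
```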
arXiv Detail & Related papers (2023-08-14T17:29:41Z)
- Improving the Intent Classification accuracy in Noisy Environment [9.447108578893639]
In this paper, we investigate how environmental noise affects intent classification with end-to-end neural models, and how noise-reduction techniques can mitigate it.
For this task, the use of speech enhancement greatly improves classification accuracy in noisy conditions.
arXiv Detail & Related papers (2023-03-12T06:11:44Z)
- Robust Time Series Denoising with Learnable Wavelet Packet Transform [1.370633147306388]
In many applications, signal denoising is often the first pre-processing step before any subsequent analysis or learning task.
We propose to apply a deep learning denoising model inspired by signal processing: a learnable version of the wavelet packet transform.
We demonstrate how the proposed algorithm relates to the universality of signal processing methods and the learning capabilities of deep learning approaches.
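As a sketch of the idea (Haar initialization, a single decomposition level, and the learnable soft threshold are illustrative choices, not the paper's exact design):
```python
# One-level learnable wavelet decomposition: a strided Conv1d initialized with
# the Haar low-/high-pass pair, trained end-to-end; denoising soft-thresholds
# the detail coefficients with a learnable shrinkage parameter.
import torch
import torch.nn as nn

class LearnableWaveletDenoise(nn.Module):
    def __init__(self):
        super().__init__()
        haar = torch.tensor([[[0.7071, 0.7071]],     # low-pass filter
                             [[0.7071, -0.7071]]])   # high-pass filter
        self.analysis = nn.Conv1d(1, 2, 2, stride=2, bias=False)
        self.analysis.weight = nn.Parameter(haar.clone())
        self.synthesis = nn.ConvTranspose1d(2, 1, 2, stride=2, bias=False)
        self.synthesis.weight = nn.Parameter(haar.clone())
        self.threshold = nn.Parameter(torch.tensor(0.1))  # learnable shrinkage

    def forward(self, x):                    # x: (batch, 1, even length)
        coeffs = self.analysis(x)
        approx, detail = coeffs[:, :1], coeffs[:, 1:]
        detail = torch.sign(detail) * torch.relu(detail.abs() - self.threshold)
        return self.synthesis(torch.cat([approx, detail], dim=1))

denoised = LearnableWaveletDenoise()(torch.randn(1, 1, 64))
```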
arXiv Detail & Related papers (2022-06-13T13:05:58Z)
- Improving Noise Robustness of Contrastive Speech Representation Learning with Speech Reconstruction [109.44933866397123]
Noise robustness is essential for deploying automatic speech recognition systems in real-world environments.
We employ a noise-robust representation learned by a refined self-supervised framework for noisy speech recognition.
We achieve comparable performance to the best supervised approach reported with only 16% of labeled data.
arXiv Detail & Related papers (2021-10-28T20:39:02Z)
- Distribution Conditional Denoising: A Flexible Discriminative Image Denoiser [0.0]
A flexible discriminative image denoiser is introduced in which multi-task learning methods are applied to a denoising FCN based on U-Net.
This conditional training method is shown to generalise a fixed-noise-level U-Net denoiser to a variety of noise levels.
arXiv Detail & Related papers (2020-11-24T21:27:18Z)
- CITISEN: A Deep Learning-Based Speech Signal-Processing Mobile Application [63.2243126704342]
This study presents a deep learning-based speech signal-processing mobile application known as CITISEN.
CITISEN provides three functions: speech enhancement (SE), model adaptation (MA), and background noise conversion (BNC).
Compared with the noisy speech signals, the enhanced speech signals achieved improvements of about 6% and 33%.
arXiv Detail & Related papers (2020-08-21T02:04:12Z)
- Simultaneous Denoising and Dereverberation Using Deep Embedding Features [64.58693911070228]
We propose a joint training method for simultaneous speech denoising and dereverberation using deep embedding features.
At the denoising stage, the DC network is leveraged to extract noise-free deep embedding features.
At the dereverberation stage, instead of using the unsupervised K-means clustering algorithm, another neural network is utilized to estimate the anechoic speech.
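A compact sketch of such a two-stage pipeline (the MLPs, dimensions, and magnitude-spectrogram interface below are placeholders for illustration):
```python
# Assumed two-stage pipeline: a deep-clustering-style network produces
# per-time-frequency-bin embeddings for denoising, and a second network
# estimates the anechoic speech from them (replacing K-means clustering).
import torch
import torch.nn as nn

B, T, F_, D = 2, 100, 257, 20  # batch, frames, frequency bins, embedding dim
dc_net = nn.Sequential(nn.Linear(F_, 256), nn.ReLU(), nn.Linear(256, F_ * D))
dereverb_net = nn.Sequential(nn.Linear(F_ * D, 256), nn.ReLU(),
                             nn.Linear(256, F_))

noisy_mag = torch.rand(B, T, F_)                        # |STFT| of noisy speech
emb = dc_net(noisy_mag).view(B, T, F_, D)               # denoising embeddings
emb = nn.functional.normalize(emb, dim=-1)              # unit-norm, as in DC
anechoic_mag = dereverb_net(emb.reshape(B, T, F_ * D))  # dereverberation stage
```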
arXiv Detail & Related papers (2020-04-06T06:34:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.