Integrating Uncertainty into Neural Network-based Speech Enhancement
- URL: http://arxiv.org/abs/2305.08744v1
- Date: Mon, 15 May 2023 15:55:12 GMT
- Title: Integrating Uncertainty into Neural Network-based Speech Enhancement
- Authors: Huajian Fang, Dennis Becker, Stefan Wermter and Timo Gerkmann
- Abstract summary: Supervised masking approaches in the time-frequency domain aim to employ deep neural networks to estimate a multiplicative mask to extract clean speech.
This leads to a single estimate for each input without any guarantees or measures of reliability.
We study the benefits of modeling uncertainty in clean speech estimation.
- Score: 27.868722093985006
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Supervised masking approaches in the time-frequency domain aim to employ deep
neural networks to estimate a multiplicative mask to extract clean speech. This
leads to a single estimate for each input without any guarantees or measures of
reliability. In this paper, we study the benefits of modeling uncertainty in
clean speech estimation. Prediction uncertainty is typically categorized into
aleatoric uncertainty and epistemic uncertainty. The former refers to inherent
randomness in data, while the latter describes uncertainty in the model
parameters. In this work, we propose a framework to jointly model aleatoric and
epistemic uncertainties in neural network-based speech enhancement. The
proposed approach captures aleatoric uncertainty by estimating the statistical
moments of the speech posterior distribution and explicitly incorporates the
uncertainty estimate to further improve clean speech estimation. For epistemic
uncertainty, we investigate two Bayesian deep learning approaches: Monte Carlo
dropout and Deep ensembles to quantify the uncertainty of the neural network
parameters. Our analyses show that the proposed framework promotes capturing
practical and reliable uncertainty, while combining different sources of
uncertainties yields more reliable predictive uncertainty estimates.
Furthermore, we demonstrate the benefits of modeling uncertainty on speech
enhancement performance by evaluating the framework on different datasets,
exhibiting notable improvement over comparable models that fail to account for
uncertainty.
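The epistemic side of the framework can be illustrated with Monte Carlo dropout: dropout is kept active at test time, and the spread of the mask estimates over repeated stochastic forward passes serves as a parameter-uncertainty measure. The following is a minimal sketch with a toy, untrained one-hidden-layer masking network (the weights, sizes, and function names are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a masking network: one hidden layer mapping a
# noisy spectral frame (16 bins) to a multiplicative mask in [0, 1].
# Random weights here; a real model would be trained on noisy/clean pairs.
W1 = rng.normal(scale=0.5, size=(16, 8))
W2 = rng.normal(scale=0.5, size=(8, 16))

def forward(x, drop_p=0.5):
    """One stochastic forward pass with dropout kept active."""
    h = np.maximum(x @ W1, 0.0)                        # ReLU hidden layer
    keep = rng.binomial(1, 1.0 - drop_p, size=h.shape)
    h = h * keep / (1.0 - drop_p)                      # inverted dropout
    return 1.0 / (1.0 + np.exp(-(h @ W2)))             # sigmoid -> mask in [0, 1]

def mc_dropout(x, T=200):
    """Mean mask and per-bin variance over T stochastic passes."""
    masks = np.stack([forward(x) for _ in range(T)])
    return masks.mean(axis=0), masks.var(axis=0)       # estimate, epistemic spread

noisy_frame = rng.normal(size=16)
mask, epistemic_var = mc_dropout(noisy_frame)
```

Deep ensembles, the other approach the paper investigates, replace the T stochastic passes with M independently trained networks; the variance across their outputs plays the same role as `epistemic_var` here.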
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
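One common way to decompose the uncertainty of such an ensemble (not necessarily the paper's exact formulation) is the mutual-information split: total predictive entropy minus the average per-member entropy gives the disagreement across clarifications. A sketch with the LLM mocked by a fixed table of predictive distributions:

```python
import numpy as np

# Hypothetical sketch: the LLM is mocked by one predictive distribution
# per clarified version of an ambiguous binary question.
predictions = np.array([
    [0.9, 0.1],   # clarification 1 -> confident "yes"
    [0.2, 0.8],   # clarification 2 -> confident "no"
    [0.5, 0.5],   # clarification 3 -> genuinely unsure
])

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

ensemble_mean = predictions.mean(axis=0)    # ensembled prediction
total = entropy(ensemble_mean)              # total predictive uncertainty
aleatoric = entropy(predictions).mean()     # avg. within-clarification uncertainty
epistemic = total - aleatoric               # disagreement across clarifications
```

By Jensen's inequality the disagreement term is never negative; it is large exactly when the clarifications pull the model toward different answers, i.e. when the input itself was ambiguous.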
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Uncertainty Estimation in Deep Speech Enhancement Using Complex Gaussian Mixture Models [19.442685015494316]
Single-channel deep speech enhancement approaches often estimate a single multiplicative mask to extract clean speech without a measure of its accuracy.
We propose to quantify the uncertainty associated with clean speech estimates in neural network-based speech enhancement.
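For a single zero-mean Gaussian speech model (a simplification of the paper's complex Gaussian mixtures, which would combine several such components), the clean-speech posterior has a closed form: the mean is the Wiener estimate and the variance is the uncertainty attached to it. A minimal real-valued sketch:

```python
import numpy as np

def gaussian_posterior(y, var_s, var_n):
    """Posterior mean (Wiener estimate) and variance of clean speech s
    for y = s + n with zero-mean Gaussians s ~ N(0, var_s), n ~ N(0, var_n)."""
    wiener_gain = var_s / (var_s + var_n)
    post_mean = wiener_gain * y                   # point estimate of s
    post_var = var_s * var_n / (var_s + var_n)    # uncertainty; independent of y
    return post_mean, post_var

mean, var = gaussian_posterior(y=2.0, var_s=1.0, var_n=1.0)
# mean = 1.0, var = 0.5
```

Note that the posterior variance shrinks toward zero when either the speech or the noise variance is small, i.e. the estimate is trusted most when one source clearly dominates.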
arXiv Detail & Related papers (2022-12-09T13:03:09Z)
- Looking at the posterior: accuracy and uncertainty of neural-network predictions [0.0]
We show that prediction accuracy depends on both epistemic and aleatoric uncertainty.
We introduce a novel acquisition function that outperforms common uncertainty-based methods.
arXiv Detail & Related papers (2022-11-26T16:13:32Z)
- Uncertainty Quantification for Traffic Forecasting: A Unified Approach [21.556559649467328]
Uncertainty is an essential consideration for time series forecasting tasks.
In this work, we focus on quantifying the uncertainty of traffic forecasting.
We develop Deep Spatio-Temporal Uncertainty Quantification (DeepSTUQ), which can estimate both aleatoric and epistemic uncertainty.
arXiv Detail & Related papers (2022-08-11T15:21:53Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show the principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
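The core idea can be sketched as a kernel-weighted estimate of the conditional label distribution, with predictive entropy as the uncertainty score (the function name, kernel choice, and toy data below are illustrative, not the paper's implementation):

```python
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson-style estimate of p(y | x): labels of nearby
    training points, weighted by a Gaussian kernel on distance."""
    d2 = ((X_train - x) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    probs = np.array([w[y_train == c].sum() for c in range(n_classes)])
    return probs / probs.sum()

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

p = nw_label_distribution(np.array([-2.0, 0.0]), X, y, n_classes=2)
uncertainty = -(p * np.log(p + 1e-12)).sum()   # predictive entropy
```

Because the estimate is nonparametric, it needs no change to the underlying deterministic classifier; the uncertainty comes entirely from the local label statistics around the query point.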
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully/semi/weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
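The subtraction at the heart of this scheme is simple bookkeeping; a hypothetical sketch with hard-coded stand-in values (the error-predictor outputs and noise floor below are assumptions for illustration, not trained quantities):

```python
import numpy as np

# DEUP-style bookkeeping: an auxiliary error predictor g is trained to
# predict the main model's out-of-sample loss per input; an aleatoric
# estimate (e.g. from a noise model) is then subtracted.
predicted_generalization_error = np.array([0.30, 0.12, 0.45])  # g(x), assumed
aleatoric_estimate             = np.array([0.10, 0.10, 0.10])  # assumed noise floor

# Epistemic uncertainty = predicted total error minus irreducible noise,
# clipped at zero since both estimates are themselves noisy.
epistemic = np.clip(predicted_generalization_error - aleatoric_estimate, 0.0, None)
# -> [0.20, 0.02, 0.35]
```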
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.