Neural Network-augmented Kalman Filtering for Robust Online Speech
Dereverberation in Noisy Reverberant Environments
- URL: http://arxiv.org/abs/2204.02741v1
- Date: Wed, 6 Apr 2022 11:38:04 GMT
- Title: Neural Network-augmented Kalman Filtering for Robust Online Speech
Dereverberation in Noisy Reverberant Environments
- Authors: Jean-Marie Lemercier, Joachim Thiemann, Raphael Koning, Timo Gerkmann
- Abstract summary: A neural network-augmented algorithm for noise-robust online dereverberation is proposed.
The presented framework allows for robust dereverberation on a single-channel noisy reverberant dataset.
- Score: 13.49645012479288
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, a neural network-augmented algorithm for noise-robust online dereverberation with a Kalman filtering variant of the weighted prediction error (WPE) method is proposed. The stochastic variations of the filter are predicted by a deep neural network (DNN) trained end-to-end on the filter residual error and signal characteristics. The presented framework enables robust dereverberation on a single-channel noisy reverberant dataset similar to WHAMR!. When the filter variations are predicted from the residual error alone, the Kalman filtering WPE introduces distortions into the enhanced signal if the target speech power spectral density is not perfectly known and the observation is noisy. The proposed approach avoids these distortions by correcting the estimate of the filter variations in a data-driven way, increasing the robustness of the method in noisy scenarios. Furthermore, it yields strong dereverberation and denoising performance compared to a DNN-supported recursive least squares variant of WPE, especially for highly noisy inputs.
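As a rough illustration of the Kalman filtering WPE recursion discussed in the abstract, the following NumPy sketch runs one update for a single frequency bin. It assumes a random-walk model for the filter coefficients; the function name, argument layout, and the way the filter-variation level `q_t` and the target PSD `phi_d` are supplied are illustrative assumptions, and the end-to-end DNN that predicts the filter variations in the paper is not reproduced here.

```python
import numpy as np

def kalman_wpe_step(y_t, y_bar, g, P, phi_d, q_t):
    """One per-frequency-bin Kalman-WPE update (illustrative sketch).

    y_t   : complex scalar, current noisy reverberant STFT bin
    y_bar : (L,) complex vector of delayed past STFT frames for this bin
    g     : (L,) complex WPE filter state
    P     : (L, L) filter error covariance
    phi_d : float, estimate of the target-speech PSD for this bin/frame
    q_t   : float, filter-variation (process-noise) level; predicted by a
            DNN in the paper, passed in as a plain number here
    """
    L = g.shape[0]

    # Prediction: random-walk filter model g_t = g_{t-1} + w_t, cov(w_t) = q_t * I
    P_pred = P + q_t * np.eye(L)

    # Innovation for the observation model y_t = y_bar^H g_t + d_t
    e = y_t - np.vdot(y_bar, g)

    # Kalman gain for the scalar observation
    S = np.real(np.vdot(y_bar, P_pred @ y_bar)) + phi_d
    k = (P_pred @ y_bar) / S

    # Correction of the filter state and its covariance
    g_new = g + k * e
    P_new = P_pred - np.outer(k, y_bar.conj()) @ P_pred

    # Enhanced (dereverberated) bin: observation minus estimated late reverberation
    d_hat = y_t - np.vdot(y_bar, g_new)
    return d_hat, g_new, P_new
```

In the full algorithm, one such update would run per frequency bin and time frame, with `q_t` taken from the DNN output rather than a fixed heuristic.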
Related papers
- Run-Time Adaptation of Neural Beamforming for Robust Speech Dereverberation and Denoising [15.152748065111194]
This paper describes speech enhancement for real-time automatic speech recognition in real environments.
It estimates the masks of clean dry speech from a noisy echoic mixture spectrogram with a deep neural network (DNN) and then computes an enhancement filter used for beamforming.
The performance of such a supervised approach, however, is drastically degraded under mismatched conditions.
arXiv Detail & Related papers (2024-10-30T08:32:47Z)
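The entry above describes the common pipeline of DNN mask estimation followed by an enhancement filter for beamforming. The NumPy sketch below shows only that generic mask-driven MVDR step; the function name and array shapes are assumptions, and the paper's run-time adaptation mechanism is not reproduced.

```python
import numpy as np

def mask_based_mvdr(Y, speech_mask, eps=1e-6):
    """Generic mask-driven MVDR beamforming sketch (not the paper's exact method).

    Y           : (F, T, M) complex multichannel STFT of the noisy echoic mixture
    speech_mask : (F, T) DNN-estimated mask of the clean dry speech, values in [0, 1]
    Returns     : (F, T) beamformed single-channel STFT
    """
    F, T, M = Y.shape
    noise_mask = 1.0 - speech_mask
    out = np.zeros((F, T), dtype=complex)
    for f in range(F):
        Yf = Y[f]  # (T, M)
        # Mask-weighted spatial covariance matrices of speech and noise
        Phi_s = np.einsum('t,tm,tn->mn', speech_mask[f], Yf, Yf.conj())
        Phi_n = np.einsum('t,tm,tn->mn', noise_mask[f], Yf, Yf.conj())
        Phi_s /= speech_mask[f].sum() + eps
        Phi_n /= noise_mask[f].sum() + eps
        Phi_n += eps * np.eye(M)            # regularize before inversion

        # Steering vector: principal eigenvector of the speech covariance
        _, vecs = np.linalg.eigh(Phi_s)
        h = vecs[:, -1]

        # MVDR enhancement filter
        num = np.linalg.solve(Phi_n, h)
        w = num / (h.conj() @ num + eps)
        out[f] = Yf @ w.conj()              # apply w^H y(t) to each frame
    return out
```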
- Low-rank extended Kalman filtering for online learning of neural networks from streaming data [71.97861600347959]
We propose an efficient online approximate Bayesian inference algorithm for estimating the parameters of a nonlinear function from a potentially non-stationary data stream.
The method is based on the extended Kalman filter (EKF), but uses a novel low-rank plus diagonal decomposition of the posterior matrix.
In contrast to methods based on variational inference, our method is fully deterministic, and does not require step-size tuning.
arXiv Detail & Related papers (2023-05-31T03:48:49Z)
- A DNN based Normalized Time-frequency Weighted Criterion for Robust Wideband DoA Estimation [24.175086158375464]
We propose a normalized time-frequency weighted criterion which minimizes the distance between the candidate steering vectors and the filtered snapshots in the T-F domain.
Our method requires no eigendecomposition and uses a simple normalization to prevent the optimization objective from being misled by noisy snapshots.
Experiments show that the proposed method outperforms popular DNN based DoA estimation methods including widely used subspace methods in noisy and reverberant environments.
arXiv Detail & Related papers (2023-02-20T18:26:52Z)
- A Data-Driven Gaussian Process Filter for Electrocardiogram Denoising [5.359295206355495]
The proposed GP filter is evaluated and compared with a state-of-the-art wavelet-based filter on the PhysioNet QT Database.
It is shown that the proposed GP filter outperforms the benchmark filter for all the tested noise levels.
It also outperforms the state-of-the-art filter in terms of QT-interval estimation error bias and variance.
arXiv Detail & Related papers (2023-01-06T17:09:20Z)
- CFNet: Conditional Filter Learning with Dynamic Noise Estimation for Real Image Denoising [37.29552796977652]
This paper considers real noise approximated by heteroscedastic Gaussian/Poisson Gaussian distributions with in-camera signal processing pipelines.
We propose a novel conditional filter in which the optimal kernels for different feature positions can be adaptively inferred by local features from the image and the noise map.
We also bring the idea of alternately performing noise estimation and non-blind denoising into the CNN structure, which continuously updates the noise prior to guide the iterative feature denoising.
arXiv Detail & Related papers (2022-11-26T14:28:54Z)
- Boundary between noise and information applied to filtering neural network weight matrices [0.0]
We introduce an algorithm for noise filtering, which both removes small singular values and reduces the magnitude of large singular values.
For networks trained in the presence of label noise, we indeed find that the generalization performance improves significantly due to noise filtering.
arXiv Detail & Related papers (2022-06-08T14:42:36Z)
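The weight-matrix filtering entry above is concrete enough for a small sketch: it both removes small singular values and reduces the magnitude of the large ones. The cutoff and shrinkage factor below (`rank_keep`, `shrink`) are placeholder choices, not the boundary criterion derived in that paper.

```python
import numpy as np

def filter_weight_matrix(W, rank_keep, shrink=0.9):
    """Illustrative singular-value filtering of a weight matrix.

    Discards singular values beyond rank_keep and damps the retained ones;
    the specific threshold and shrinkage rule are placeholders.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_filt = np.zeros_like(s)
    s_filt[:rank_keep] = shrink * s[:rank_keep]   # keep and shrink the largest values
    return (U * s_filt) @ Vt                      # reconstruct the filtered matrix

# Example: filter a random stand-in for a trained weight matrix
W = np.random.randn(256, 128)
W_clean = filter_weight_matrix(W, rank_keep=32)
```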
- Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as guided diffusion model for purification (GDMP)
In comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations introduced by adversarial attacks to a low level.
arXiv Detail & Related papers (2022-05-30T10:11:15Z)
- Filter-enhanced MLP is All You Need for Sequential Recommendation [89.0974365344997]
In online platforms, logged user behavior data inevitably contains noise.
We borrow the idea of filtering algorithms from signal processing that attenuates the noise in the frequency domain.
We propose FMLP-Rec, an all-MLP model with learnable filters for the sequential recommendation task.
arXiv Detail & Related papers (2022-02-28T05:49:35Z)
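The FMLP-Rec entry above attenuates noise by filtering the embedded behavior sequence in the frequency domain with learnable filters. Below is a minimal PyTorch sketch of such a layer; the class name, initialization, and sizes are assumptions, and the original model additionally uses residual connections, LayerNorm, dropout, and feed-forward blocks.

```python
import torch
import torch.nn as nn

class LearnableFrequencyFilter(nn.Module):
    """Sketch of a learnable frequency-domain filter layer in the spirit of FMLP-Rec."""

    def __init__(self, seq_len: int, hidden: int):
        super().__init__()
        # One complex weight per (frequency, feature) acts as a global filter
        self.weight = nn.Parameter(
            torch.randn(seq_len // 2 + 1, hidden, dtype=torch.cfloat) * 0.02
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, hidden) item-embedding sequence
        spec = torch.fft.rfft(x, dim=1)      # to the frequency domain
        spec = spec * self.weight            # attenuate noisy frequency components
        return torch.fft.irfft(spec, n=x.size(1), dim=1)

# Example usage with hypothetical sizes
layer = LearnableFrequencyFilter(seq_len=50, hidden=64)
out = layer(torch.randn(8, 50, 64))
```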
- Batch Normalization Tells You Which Filter is Important [49.903610684578716]
We propose a simple yet effective filter pruning method by evaluating the importance of each filter based on the BN parameters of pre-trained CNNs.
The experimental results on CIFAR-10 and ImageNet demonstrate that the proposed method can achieve outstanding performance.
arXiv Detail & Related papers (2021-12-02T12:04:59Z)
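The BN-based pruning entry above scores each convolution filter from the Batch Normalization parameters of a pre-trained CNN. The sketch below ranks filters by the magnitude of the BN scale as one plausible reading of that idea; the paper's exact importance measure may differ.

```python
import torch
import torch.nn as nn

def bn_filter_importance(model: nn.Module):
    """Collect per-channel |gamma| from every BatchNorm2d layer as a proxy score."""
    scores = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            # Small BN scales suggest filters that contribute little and may be pruned
            scores[name] = module.weight.detach().abs()
    return scores

# Example: score a small network and mark below-median filters as prune candidates
net = nn.Sequential(nn.Conv2d(3, 16, 3), nn.BatchNorm2d(16), nn.ReLU())
importance = bn_filter_importance(net)
prune_mask = {name: score < score.median() for name, score in importance.items()}
```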
- Sampling-free Variational Inference for Neural Networks with Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z)
- Deep Networks for Direction-of-Arrival Estimation in Low SNR [89.45026632977456]
We introduce a Convolutional Neural Network (CNN) that is trained from multi-channel data of the true array manifold matrix.
We train a CNN in the low-SNR regime to predict DoAs across all SNRs.
Our robust solution can be applied in several fields, ranging from wireless array sensors to acoustic microphones or sonars.
arXiv Detail & Related papers (2020-11-17T12:52:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.