Bayesian x-vector: Bayesian Neural Network based x-vector System for
Speaker Verification
- URL: http://arxiv.org/abs/2004.04014v1
- Date: Wed, 8 Apr 2020 14:35:12 GMT
- Title: Bayesian x-vector: Bayesian Neural Network based x-vector System for
Speaker Verification
- Authors: Xu Li, Jinghua Zhong, Jianwei Yu, Shoukang Hu, Xixin Wu, Xunying Liu,
Helen Meng
- Abstract summary: We incorporate Bayesian neural networks (BNNs) into the deep neural network (DNN) x-vector speaker verification system.
With the weight uncertainty modeling provided by BNNs, we expect the system to generalize better on the evaluation data.
Results show that the system benefits from BNNs, with relative EER decreases of 2.66% and 2.32% for short- and long-utterance in-domain evaluations, respectively.
- Score: 71.45033077934723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Speaker verification systems usually suffer from mismatch between
training and evaluation data, such as speaker population mismatch and channel
and environment variations. Addressing this issue requires the system to
generalize well to unseen data. In this work, we incorporate Bayesian neural
networks (BNNs) into the deep neural network (DNN) x-vector speaker
verification system to improve its generalization ability. With the weight
uncertainty modeling provided by BNNs, we expect the system to generalize
better on the evaluation data and make more accurate verification decisions.
Our experimental results indicate that the DNN x-vector system benefits from
BNNs, especially when the mismatch is severe, as in evaluations on
out-of-domain data. Specifically, results show that BNNs yield relative EER
decreases of 2.66% and 2.32% for short- and long-utterance in-domain
evaluations, respectively. Additionally, fusing the DNN x-vector and Bayesian
x-vector systems achieves further improvement. Moreover, out-of-domain
evaluations, e.g. models trained on VoxCeleb1 and evaluated on the NIST SRE10
core test, suggest that BNNs bring a larger relative EER decrease of around
4.69%.
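To make the weight uncertainty modeling above concrete, here is a minimal sketch of a Bayesian linear layer trained by variational inference with a fully factorized Gaussian posterior, one standard way to realize BNN layers of this kind. The class name, initialization constants, and prior scale are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of a Bayesian linear layer with Gaussian weight uncertainty,
# assuming a Bayes-by-Backprop style setup (not the paper's exact configuration).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesianLinear(nn.Module):
    """Linear layer whose weights are drawn from a learned Gaussian posterior."""
    def __init__(self, in_features, out_features, prior_std=1.0):
        super().__init__()
        self.prior_std = prior_std
        self.weight_mu = nn.Parameter(torch.zeros(out_features, in_features))
        # rho is mapped through softplus so the posterior std stays positive
        self.weight_rho = nn.Parameter(torch.full((out_features, in_features), -5.0))
        self.bias = nn.Parameter(torch.zeros(out_features))

    def forward(self, x):
        std = F.softplus(self.weight_rho)
        eps = torch.randn_like(std)           # reparameterization: w = mu + std * eps
        weight = self.weight_mu + std * eps
        return F.linear(x, weight, self.bias)

    def kl_divergence(self):
        # KL( N(mu, std^2) || N(0, prior_std^2) ), summed over all weights
        std = F.softplus(self.weight_rho)
        return (torch.log(self.prior_std / std)
                + (std ** 2 + self.weight_mu ** 2) / (2 * self.prior_std ** 2)
                - 0.5).sum()
```

A training loop would minimize the usual speaker classification loss plus the summed kl_divergence() of all Bayesian layers, downweighted by the number of minibatches; at test time several weight samples can be averaged.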
Related papers
- Harnessing Neuron Stability to Improve DNN Verification [42.65507402735545]
We present VeriStable, a novel extension of the recently proposed DPLL-based constraint-solving approach to DNN verification.
We evaluate the effectiveness of VeriStable across a range of challenging benchmarks including fully-connected feedforward networks (FNNs), convolutional neural networks (CNNs) and residual networks (ResNets).
Preliminary results show that VeriStable is competitive and outperforms state-of-the-art verification tools, including $\alpha$-$\beta$-CROWN and MN-BaB, the first and second performers of the VNN-COMP, respectively.
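The stability notion VeriStable exploits can be illustrated with plain interval bound propagation: a ReLU neuron is stable when its pre-activation provably cannot change sign over the input region, so the verifier never needs to branch on it. The numbers below are made up for illustration; the paper's actual procedure is a DPLL-style search, not this sketch.

```python
# Hedged sketch: detect stable ReLU neurons via interval bound propagation.
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the input box [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

W = np.array([[1.0, -2.0], [0.5, 0.5]])
b = np.array([3.0, -4.0])
lo, hi = np.array([-1.0, -1.0]), np.array([1.0, 1.0])
pre_lo, pre_hi = interval_affine(lo, hi, W, b)

for i, (l, h) in enumerate(zip(pre_lo, pre_hi)):
    # stable-active if l >= 0, stable-inactive if h <= 0, otherwise must branch
    status = "active" if l >= 0 else "inactive" if h <= 0 else "unstable"
    print(f"neuron {i}: pre-activation in [{l:.1f}, {h:.1f}] -> {status}")
```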
arXiv Detail & Related papers (2024-01-19T23:48:04Z)
- Sparsifying Bayesian neural networks with latent binary variables and normalizing flows [10.865434331546126]
We consider two extensions of the latent binary Bayesian neural network (LBBNN) method.
Firstly, by using the local reparametrization trick (LRT) to sample the hidden units directly, we get a more computationally efficient algorithm.
More importantly, by using normalizing flows on the variational posterior distribution of the LBBNN parameters, the network learns a more flexible variational posterior distribution than the mean field Gaussian.
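The local reparametrization trick mentioned above can be sketched in a few lines: rather than sampling a full weight matrix, sample each pre-activation from its induced Gaussian, which is cheaper and reduces gradient variance. Shapes and function names here are assumptions for illustration.

```python
# Minimal LRT sketch: sample activations, not weights.
import torch

def lrt_linear(x, weight_mu, weight_std):
    """x: (batch, in); weight_mu, weight_std: (in, out). Returns sampled activations."""
    act_mu = x @ weight_mu                   # mean of the pre-activations
    act_var = (x ** 2) @ (weight_std ** 2)   # variance induced by weight uncertainty
    eps = torch.randn_like(act_mu)           # one noise draw per activation
    return act_mu + act_var.sqrt() * eps
```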
arXiv Detail & Related papers (2023-05-05T09:40:28Z)
- Searching Similarity Measure for Binarized Neural Networks [14.847148292246374]
Binarized Neural Networks (BNNs) are a promising model to be deployed in resource-limited devices.
However, BNNs suffer from non-trivial accuracy degradation, limiting their applicability in various domains.
We propose an automatic searching method, based on genetic algorithm, for BNN-tailored similarity measure.
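For context on what is being searched over, the de facto similarity measure in binarized networks is the XNOR-popcount inner product sketched below; the genetic algorithm in this paper searches for BNN-tailored alternatives to it. The bit encoding here is an illustrative assumption.

```python
# Baseline similarity for {-1, +1} vectors packed into integer bit masks:
# <a, w> = (#matching bits) - (#mismatching bits) = 2 * popcount(XNOR) - n.
def xnor_popcount_similarity(a_bits, w_bits, n_bits):
    xnor = ~(a_bits ^ w_bits) & ((1 << n_bits) - 1)  # 1 where the bits agree
    matches = bin(xnor).count("1")
    return 2 * matches - n_bits

# (+1,-1,+1,+1) vs (+1,+1,+1,-1), encoded with +1 -> 1 and -1 -> 0
print(xnor_popcount_similarity(0b1011, 0b1110, 4))   # 0: two matches, two mismatches
```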
arXiv Detail & Related papers (2022-06-05T06:53:53Z)
- Robustness of Bayesian Neural Networks to White-Box Adversarial Attacks [55.531896312724555]
Bayesian Neural Networks (BNNs) are robust and adept at handling adversarial attacks by incorporating randomness.
We create our BNN model, called BNN-DenseNet, by fusing Bayesian inference (i.e., variational Bayes) into the DenseNet architecture.
An adversarially-trained BNN outperforms its non-Bayesian, adversarially-trained counterpart in most experiments.
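As a reference point for the white-box setting in this entry, here is a one-step FGSM attack, one of the standard attacks used in such robustness studies; the model, loss, and epsilon are generic placeholders rather than the paper's exact protocol.

```python
# Hedged FGSM sketch: perturb the input along the sign of its loss gradient.
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss_fn(model(x_adv), y).backward()      # white-box: gradients are available
    with torch.no_grad():
        x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()    # keep pixels in a valid range
```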
arXiv Detail & Related papers (2021-11-16T16:14:44Z)
- Shift-Robust GNNs: Overcoming the Limitations of Localized Graph Training data [52.771780951404565]
Shift-Robust GNN (SR-GNN) is designed to account for distributional differences between biased training data and the graph's true inference distribution.
We show that SR-GNN outperforms other GNN baselines in accuracy, eliminating at least 40% of the negative effects introduced by biased training data.
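A sketch of the kind of regularizer behind this idea: penalize the discrepancy between hidden representations of the biased labeled nodes and of an unbiased sample from the graph. Central moment discrepancy (CMD) is used here as one plausible choice; the weighting and number of moments are assumptions.

```python
# Hedged sketch of a distribution-shift regularizer over node embeddings.
import torch

def cmd(h_src, h_tgt, n_moments=3):
    """Central moment discrepancy between two embedding sets of shape (n, dim)."""
    d = (h_src.mean(0) - h_tgt.mean(0)).norm()
    cs, ct = h_src - h_src.mean(0), h_tgt - h_tgt.mean(0)
    for k in range(2, n_moments + 1):
        d = d + ((cs ** k).mean(0) - (ct ** k).mean(0)).norm()
    return d

# total_loss = task_loss + lambda_shift * cmd(h_labeled, h_unbiased_sample)
```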
arXiv Detail & Related papers (2021-08-02T18:00:38Z)
- S2-BNN: Bridging the Gap Between Self-Supervised Real and 1-bit Neural Networks via Guided Distribution Calibration [74.5509794733707]
We present a novel guided learning paradigm that distills binary networks from real-valued networks on the final prediction distribution.
Our proposed method can boost the simple contrastive learning baseline by an absolute gain of 5.515% on BNNs.
Our method achieves substantial improvement over the simple contrastive learning baseline, and is even comparable to many mainstream supervised BNN methods.
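Distilling on the final prediction distribution, as this entry describes, typically means a KL loss between the softened outputs of the real-valued teacher and the binary student; the temperature and function names below are illustrative assumptions.

```python
# Hedged sketch of prediction-distribution distillation for a binary student.
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, tau=1.0):
    log_p_student = F.log_softmax(student_logits / tau, dim=-1)
    p_teacher = F.softmax(teacher_logits / tau, dim=-1)
    # KL(teacher || student), rescaled by tau^2 as in standard distillation
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * tau ** 2
```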
arXiv Detail & Related papers (2021-02-17T18:59:28Z)
- pseudo-Bayesian Neural Networks for detecting Out of Distribution Inputs [12.429095025814345]
We propose pseudo-BNNs: instead of learning distributions over weights, we use point estimates and perturb the weights at inference time.
Overall, this combination results in a principled technique to detect OOD samples at the time of inference.
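The inference-time recipe this entry describes can be sketched directly: keep the point-estimate weights, add small noise for several forward passes, and flag inputs whose predictions disagree. The noise scale and score definition are assumptions, not the paper's exact settings.

```python
# Hedged sketch of pseudo-BNN OOD scoring via inference-time weight perturbation.
import copy
import torch

@torch.no_grad()
def ood_score(model, x, n_samples=10, noise_std=0.01):
    probs = []
    for _ in range(n_samples):
        noisy = copy.deepcopy(model)
        for p in noisy.parameters():
            p.add_(noise_std * torch.randn_like(p))   # perturb point estimates
        probs.append(torch.softmax(noisy(x), dim=-1))
    probs = torch.stack(probs)              # (samples, batch, classes)
    return probs.var(dim=0).sum(dim=-1)     # high disagreement -> likely OOD
```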
arXiv Detail & Related papers (2021-02-02T06:23:04Z)
- An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence [65.24701908364383]
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data.
But far away from the training data, Bayesian ReLU networks (BNNs) can still underestimate uncertainty and thus be overconfident.
We show that our extension can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
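The underlying failure mode is easy to reproduce for a point-estimate ReLU net: because the logits are piecewise linear, scaling an input far from the data typically drives the softmax confidence toward 1. This toy demo (random, untrained network; illustrative only) shows the effect the paper's extension targets.

```python
# Toy demo of ReLU asymptotic overconfidence on a random untrained network.
import torch
import torch.nn as nn

torch.manual_seed(0)
net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
x = torch.randn(1, 2)
for alpha in (1, 10, 100, 1000):
    conf = torch.softmax(net(alpha * x), dim=-1).max().item()
    print(f"input scale {alpha:>4}: max softmax confidence = {conf:.4f}")
```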
arXiv Detail & Related papers (2020-10-06T13:32:18Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
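One such geometric prior is easy to verify numerically: a convolution with circular padding commutes exactly with circular translations of its input. This tiny check (sizes and shift amount are arbitrary choices) illustrates the symmetry knowledge the survey discusses.

```python
# Numerical check that a circular convolution is translation-equivariant.
import torch
import torch.nn as nn

conv = nn.Conv2d(1, 4, kernel_size=3, padding=1, padding_mode="circular", bias=False)
x = torch.randn(1, 1, 16, 16)
shift = lambda t: torch.roll(t, shifts=2, dims=-1)   # circular horizontal shift
print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5))  # True
```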
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- Prior choice affects ability of Bayesian neural networks to identify unknowns [0.0]
We show that the choice of priors has a substantial impact on the ability of the model to confidently assign data to the correct class.
We also show that testing alternative options can improve the performance of BNNs.
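A small, self-contained illustration of why the prior matters: in a variational BNN, the KL term that regularizes each weight posterior N(mu, s^2) against a N(0, p^2) prior changes sharply, and non-monotonically, with the prior scale p. The numbers are illustrative, not from the paper.

```python
# KL( N(mu, s^2) || N(0, p^2) ) for a single weight, under different prior scales.
import math

def kl_gauss(mu, s, p):
    return math.log(p / s) + (s ** 2 + mu ** 2) / (2 * p ** 2) - 0.5

for p in (0.1, 1.0, 10.0):
    print(f"prior std {p:>4}: KL = {kl_gauss(mu=0.5, s=0.2, p=p):.3f}")
```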
arXiv Detail & Related papers (2020-05-11T10:32:47Z)