SpeechNAS: Towards Better Trade-off between Latency and Accuracy for
Large-Scale Speaker Verification
- URL: http://arxiv.org/abs/2109.08839v1
- Date: Sat, 18 Sep 2021 05:31:27 GMT
- Title: SpeechNAS: Towards Better Trade-off between Latency and Accuracy for
Large-Scale Speaker Verification
- Authors: Wentao Zhu, Tianlong Kong, Shun Lu, Jixiang Li, Dawei Zhang, Feng
Deng, Xiaorui Wang, Sen Yang, Ji Liu
- Abstract summary: In this work, we try to identify the optimal architectures from a TDNN-based search space employing neural architecture search (NAS).
Our derived best neural network achieves an equal error rate (EER) of 1.02% on the standard test set of VoxCeleb1, which surpasses previous TDNN based state-of-the-art approaches by a large margin.
- Score: 26.028985033942735
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, x-vector has been a successful and popular approach for speaker
verification, which employs a time delay neural network (TDNN) and statistics
pooling to extract speaker characterizing embedding from variable-length
utterances. Improvement upon the x-vector has been an active research area, and
numerous neural networks have been elaborately designed based on the x-vector,
e.g., extended TDNN (E-TDNN), factorized TDNN (F-TDNN), and densely connected
TDNN (D-TDNN). In this work, we try to identify the optimal architectures from
a TDNN-based search space employing neural architecture search (NAS), an approach
we name SpeechNAS. Leveraging recent advances in speaker recognition, such as
high-order statistics pooling, a multi-branch mechanism, D-TDNN, and the additive
angular margin softmax (AAM) loss with a minimum hyper-spherical energy (MHE),
SpeechNAS automatically discovers five network architectures, from SpeechNAS-1
to SpeechNAS-5, of various numbers of parameters and GFLOPs on the large-scale
text-independent speaker recognition dataset VoxCeleb1. Our derived best neural
network achieves an equal error rate (EER) of 1.02% on the standard test set of
VoxCeleb1, which surpasses previous TDNN based state-of-the-art approaches by a
large margin. Code and trained weights are available at
https://github.com/wentaozhu/speechnas.git
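To make the x-vector recipe described in the abstract concrete, below is a minimal, illustrative PyTorch-style sketch of a TDNN backbone followed by statistics pooling, mapping a variable-length utterance to a fixed-size speaker embedding. This is not the SpeechNAS code: all layer sizes, context widths, and class names are assumptions, and it uses plain mean/std statistics pooling rather than the high-order pooling mentioned above.

```python
# Illustrative sketch (not the SpeechNAS implementation): a small TDNN
# backbone with statistics pooling, as in x-vector style systems.
# All layer sizes and frame contexts are placeholder choices.
import torch
import torch.nn as nn


class TDNNLayer(nn.Module):
    """A time delay layer: a dilated 1-D convolution over a frame context."""

    def __init__(self, in_dim, out_dim, context=5, dilation=1):
        super().__init__()
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=context,
                              dilation=dilation)
        self.act = nn.ReLU()
        self.norm = nn.BatchNorm1d(out_dim)

    def forward(self, x):            # x: (batch, feat_dim, num_frames)
        return self.norm(self.act(self.conv(x)))


class StatsPooling(nn.Module):
    """Pool variable-length frame features into per-utterance mean and std."""

    def forward(self, x):            # x: (batch, channels, num_frames)
        mean = x.mean(dim=2)
        std = x.std(dim=2)
        return torch.cat([mean, std], dim=1)   # (batch, 2 * channels)


class ToyXVector(nn.Module):
    """Frame-level TDNN stack -> statistics pooling -> utterance embedding."""

    def __init__(self, feat_dim=30, emb_dim=512):
        super().__init__()
        self.frame_layers = nn.Sequential(
            TDNNLayer(feat_dim, 512, context=5, dilation=1),
            TDNNLayer(512, 512, context=3, dilation=2),
            TDNNLayer(512, 512, context=3, dilation=3),
            TDNNLayer(512, 1500, context=1, dilation=1),
        )
        self.pool = StatsPooling()
        self.embedding = nn.Linear(2 * 1500, emb_dim)

    def forward(self, feats):        # feats: (batch, feat_dim, num_frames)
        frame_repr = self.frame_layers(feats)
        utt_repr = self.pool(frame_repr)
        return self.embedding(utt_repr)   # (batch, emb_dim) speaker embedding


# Usage: utterances of different lengths yield embeddings of the same size.
model = ToyXVector()
emb_short = model(torch.randn(1, 30, 300))   # 300 frames of 30-dim features
emb_long = model(torch.randn(1, 30, 800))    # 800 frames of 30-dim features
print(emb_short.shape, emb_long.shape)       # both torch.Size([1, 512])
```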
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment [54.77943671991863]
MatchNAS is a novel scheme for porting Deep Neural Networks to mobile devices.
We optimise a large network family using both labelled and unlabelled data.
We then automatically search for tailored networks for different hardware platforms.
arXiv Detail & Related papers (2024-02-21T04:43:12Z)
- On the Computational Complexity and Formal Hierarchy of Second Order Recurrent Neural Networks [59.85314067235965]
We extend the theoretical foundation for the second-order recurrent network (2nd-order RNN).
We prove there exists a class of 2nd-order RNNs that is Turing-complete with bounded time.
We also demonstrate that 2nd-order RNNs, without memory, outperform modern-day models such as vanilla RNNs and gated recurrent units in recognizing regular grammars.
arXiv Detail & Related papers (2023-09-26T06:06:47Z)
- INK: Injecting kNN Knowledge in Nearest Neighbor Machine Translation [57.952478914459164]
kNN-MT has provided an effective paradigm to smooth the prediction based on neighbor representations during inference.
We propose an effective training framework INK to directly smooth the representation space via adjusting representations of kNN neighbors with a small number of new parameters.
Experiments on four benchmark datasets show that our method achieves average gains of 1.99 COMET and 1.0 BLEU, outperforming the state-of-the-art kNN-MT system with 0.02x memory space and 1.9x inference speedup.
arXiv Detail & Related papers (2023-06-10T08:39:16Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
State-of-the-art (SOTA) training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Beyond Classification: Directly Training Spiking Neural Networks for Semantic Segmentation [5.800785186389827]
Spiking Neural Networks (SNNs) have emerged as the low-power alternative to Artificial Neural Networks (ANNs).
In this paper, we explore the SNN applications beyond classification and present semantic segmentation networks configured with spiking neurons.
arXiv Detail & Related papers (2021-10-14T21:53:03Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
- Depthwise Separable Convolutions Versus Recurrent Neural Networks for Monaural Singing Voice Separation [17.358040670413505]
We focus on singing voice separation, employing an RNN architecture, and we replace the RNNs with depthwise separable (DWS) convolutions (DWS-CNNs).
We conduct an ablation study and examine the effect of the number of channels and layers of DWS-CNNs on the source separation performance.
Our results show that replacing RNNs with DWS-CNNs yields improvements of 1.20, 0.06, and 0.37 dB, respectively, while using only 20.57% of the parameters of the RNN architecture (a generic sketch of a depthwise separable convolution block follows after this list).
arXiv Detail & Related papers (2020-07-06T12:32:34Z)
- Crossed-Time Delay Neural Network for Speaker Recognition [5.216353911330589]
We introduce a novel structure Crossed-Time Delay Neural Network (CTDNN) to enhance the performance of current TDNN.
The proposed CTDNN gives significant improvements over original TDNN on both speaker verification and identification tasks.
arXiv Detail & Related papers (2020-05-31T06:57:34Z)
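For the depthwise separable convolutions referenced in the singing voice separation entry above, the following is a minimal, generic PyTorch-style sketch. It only illustrates the standard depthwise-plus-pointwise factorization; the channel counts and kernel size are illustrative assumptions, not values from that paper.

```python
# Generic depthwise separable 1-D convolution block (illustrative only).
# A depthwise conv filters each channel independently; a pointwise (1x1)
# conv then mixes channels, which cuts parameters versus a full convolution.
import torch
import torch.nn as nn


class DepthwiseSeparableConv1d(nn.Module):
    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv1d(in_channels, in_channels, kernel_size,
                                   padding=kernel_size // 2, groups=in_channels)
        self.pointwise = nn.Conv1d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):                  # x: (batch, channels, time)
        return self.pointwise(self.depthwise(x))


def count_params(module):
    return sum(p.numel() for p in module.parameters())


# Parameter comparison against a full Conv1d of the same shape.
full = nn.Conv1d(256, 256, kernel_size=3, padding=1)
dws = DepthwiseSeparableConv1d(256, 256, kernel_size=3)
print(count_params(full), count_params(dws))  # the separable block is far smaller
```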