On the Prediction Network Architecture in RNN-T for ASR
- URL: http://arxiv.org/abs/2206.14618v1
- Date: Wed, 29 Jun 2022 13:11:46 GMT
- Title: On the Prediction Network Architecture in RNN-T for ASR
- Authors: Dario Albesano and Jesús Andrés-Ferrer and Nicola Ferri and Puming Zhan
- Abstract summary: We compare 4 types of prediction networks based on a common state-of-the-art Conformer encoder.
Inspired by our scoreboard, we propose a new simple prediction network architecture, N-Concat.
- Score: 1.7262456746016954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: RNN-T models have gained popularity in the literature and in commercial
systems because of their competitiveness and capability of operating in online
streaming mode. In this work, we conduct an extensive study comparing several
prediction network architectures for both monotonic and original RNN-T models.
We compare 4 types of prediction networks based on a common state-of-the-art
Conformer encoder and report results obtained on Librispeech and an internal
medical conversation data set. Our study covers both offline batch-mode and
online streaming scenarios. In contrast to some previous works, our results
show that the Transformer does not always outperform the LSTM when used as the
prediction network alongside a Conformer encoder. Inspired by our scoreboard, we
propose a new, simple prediction network architecture, N-Concat, which
outperforms the others in our online streaming benchmark. The Transformer and
N-gram reduced architectures perform very similarly, yet with some important
differences in how they exploit previous context. Overall, we obtained up to
4.1% relative WER improvement over our LSTM baseline while reducing the
prediction network parameters by nearly an order of magnitude (8.4 times).
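The abstract does not spell out the N-Concat design, but a natural reading is a stateless prediction network that concatenates the embeddings of the last N non-blank labels and projects the result. Below is a minimal PyTorch sketch under that assumption; all names and dimensions are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn

class NConcatPredictionNet(nn.Module):
    """Hypothetical N-Concat-style prediction network for RNN-T.

    Assumption (not confirmed by the abstract): embed the last N non-blank
    labels, concatenate the embeddings, and project to the joint-network
    dimension. Stateless, unlike an LSTM prediction network.
    """

    def __init__(self, vocab_size, embed_dim=256, n_context=2, out_dim=640):
        super().__init__()
        self.n_context = n_context
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.proj = nn.Linear(n_context * embed_dim, out_dim)

    def forward(self, labels):
        # labels: (B, U) previously emitted non-blank label ids.
        emb = self.embedding(labels)                               # (B, U, E)
        # Left-pad so each position sees its N-1 predecessors.
        pad = emb.new_zeros(emb.size(0), self.n_context - 1, emb.size(2))
        emb = torch.cat([pad, emb], dim=1)                         # (B, U+N-1, E)
        # Each output position u sees labels u-N+1 .. u, concatenated.
        windows = [emb[:, i:i + labels.size(1)] for i in range(self.n_context)]
        return self.proj(torch.cat(windows, dim=-1))               # (B, U, out_dim)
```

Being stateless, such a network carries no recurrent state through beam search, which is one reason compact prediction networks are attractive in online streaming.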
Related papers
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- Does Transformer Interpretability Transfer to RNNs? [0.6437284704257459]
Recent advances in recurrent neural network architectures have enabled RNNs to match or exceed the performance of equal-size transformers.
We show that it is possible to improve some of these techniques by taking advantage of RNNs' compressed state.
arXiv Detail & Related papers (2024-04-09T02:59:17Z)
- Set-based Neural Network Encoding Without Weight Tying [91.37161634310819]
We propose a neural network weight encoding method for network property prediction.
Our approach is capable of encoding neural networks in a model zoo of mixed architecture.
We introduce two new tasks for neural network property prediction: cross-dataset and cross-architecture.
arXiv Detail & Related papers (2023-05-26T04:34:28Z)
- Predictive Coding Based Multiscale Network with Encoder-Decoder LSTM for Video Prediction [1.2537993038844142]
We present a multi-scale predictive coding model for future video frames prediction.
Our model employs a multi-scale approach (coarse to fine) in which higher-level neurons generate coarser, lower-resolution predictions.
We propose several improvements to the training strategy to mitigate the accumulation of prediction errors in long-term prediction.
arXiv Detail & Related papers (2022-12-22T12:15:37Z)
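As a rough illustration of the coarse-to-fine scheme described above, the sketch below has a higher level predict at reduced resolution and a finer level refine the upsampled result. This is a toy under stated assumptions; real predictive-coding models typically also feed prediction errors upward between levels, which is omitted here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CoarseToFinePredictor(nn.Module):
    """Toy coarse-to-fine next-frame predictor; layers are placeholders."""

    def __init__(self, channels=3):
        super().__init__()
        self.coarse = nn.Conv2d(channels, channels, 3, padding=1)
        self.fine = nn.Conv2d(2 * channels, channels, 3, padding=1)

    def forward(self, frame):
        # frame: (B, C, H, W) current frame; predict the next frame.
        low = F.avg_pool2d(frame, 4)                  # coarse view (H/4, W/4)
        coarse_pred = self.coarse(low)                # low-resolution prediction
        up = F.interpolate(coarse_pred, scale_factor=4,
                           mode="bilinear", align_corners=False)
        # The fine level refines the upsampled coarse prediction.
        return self.fine(torch.cat([frame, up], dim=1))
```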
- VQ-T: RNN Transducers using Vector-Quantized Prediction Network States [52.48566999668521]
We propose to use vector-quantized long short-term memory units in the prediction network of RNN transducers.
Training the discrete representation jointly with the ASR network allows hypotheses to be actively merged for lattice generation.
Our experiments on the Switchboard corpus show that the proposed VQ RNN transducers improve ASR performance over transducers with regular prediction networks.
arXiv Detail & Related papers (2022-08-03T02:45:52Z)
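The VQ-T summary above is concrete enough for a sketch: quantize the prediction network's LSTM outputs against a learned codebook so that hypotheses landing on the same code become mergeable. The codebook size, dimensions, and straight-through estimator below are assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class VQPredictionNet(nn.Module):
    """Sketch of a vector-quantized LSTM prediction network (VQ-T spirit)."""

    def __init__(self, vocab_size, hidden=512, n_codes=1024):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.codebook = nn.Embedding(n_codes, hidden)

    def forward(self, labels):
        h, _ = self.lstm(self.embedding(labels))      # (B, U, H) continuous states
        # Distance from every state to every codebook vector.
        dists = torch.cdist(h, self.codebook.weight.unsqueeze(0))  # (B, U, n_codes)
        q = self.codebook(dists.argmin(dim=-1))       # (B, U, H) quantized states
        # Straight-through estimator: forward uses q, gradients flow through h.
        return h + (q - h).detach()
```

Since two hypotheses whose states quantize to the same code are indistinguishable downstream, a decoder can merge them, which is what enables the lattice generation mentioned above.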
- Tied & Reduced RNN-T Decoder [0.0]
We study ways to make the RNN-T decoder (prediction network + joint network) smaller and faster without degrading recognition performance.
Our prediction network performs a simple weighted averaging of the input embeddings and shares its embedding matrix weights with the joint network's output layer.
This simple design, when used in conjunction with additional Edit-based Minimum Bayes Risk (EMBR) training, reduces the RNN-T decoder from 23M parameters to just 2M without affecting word error rate (WER).
arXiv Detail & Related papers (2021-09-15T18:19:16Z)
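A minimal sketch of the tied & reduced design summarized above: the prediction network is a learned weighted average of recent label embeddings, and the embedding matrix is reused as the joint network's output projection. The exact weighting scheme and joint combination below are simplifications, not the paper's implementation.

```python
import torch
import torch.nn as nn

class TiedReducedDecoder(nn.Module):
    """Sketch of a tied & reduced RNN-T decoder; dimensions are illustrative."""

    def __init__(self, vocab_size, dim=320, n_context=2):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, dim)
        # One learned weight per context position (assumed form of the averaging).
        self.pos_weights = nn.Parameter(torch.full((n_context,), 1.0 / n_context))

    def predict(self, last_labels):
        # last_labels: (B, N) the N most recent non-blank labels.
        emb = self.embedding(last_labels)                  # (B, N, D)
        w = torch.softmax(self.pos_weights, dim=0)         # normalized position weights
        return (w[None, :, None] * emb).sum(dim=1)         # (B, D) weighted average

    def joint(self, enc_frame, pred_out):
        # enc_frame, pred_out: (B, D). Tie logits to the embedding matrix.
        h = torch.tanh(enc_frame + pred_out)
        return h @ self.embedding.weight.t()               # (B, vocab) logits
```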
- A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP [121.35904748477421]
Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision.
Transformer- and multi-layer perceptron (MLP)-based models, such as the Vision Transformer and MLP-Mixer, have started to lead new trends.
In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons.
arXiv Detail & Related papers (2021-08-30T06:09:02Z)
- Compact representations of convolutional neural networks via weight pruning and quantization [63.417651529192014]
We propose a novel storage format for convolutional neural networks (CNNs) based on source coding and leveraging both weight pruning and quantization.
We achieve a reduction of space occupancy up to 0.6% on fully connected layers and 5.44% on the whole network, while performing at least as competitively as the baseline.
arXiv Detail & Related papers (2021-08-28T20:39:54Z)
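The storage format above combines weight pruning and quantization before source coding. Below is a toy Python sketch of that pipeline, with magnitude pruning and uniform scalar quantization standing in for whatever the paper actually uses; all parameters are illustrative.

```python
import numpy as np

def prune_and_quantize(w, sparsity=0.9, n_levels=16):
    """Toy pruning + quantization: returns (indices, codes, codebook),
    which an entropy coder could then compress further."""
    flat = w.ravel()
    # 1) Magnitude pruning: drop the smallest-|w| fraction of weights.
    threshold = np.quantile(np.abs(flat), sparsity)
    keep = np.abs(flat) > threshold
    survivors = flat[keep]
    # 2) Uniform scalar quantization of the surviving weights.
    codebook = np.linspace(survivors.min(), survivors.max(), n_levels)
    codes = np.abs(survivors[:, None] - codebook[None, :]).argmin(axis=1)
    # Sparse storage: positions of kept weights + small integer codes.
    return np.flatnonzero(keep), codes.astype(np.uint8), codebook

# Reconstruction (lossy): w_hat = zeros_like(w.ravel()); w_hat[indices] = codebook[codes]
```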
- Effect of Architectures and Training Methods on the Performance of Learned Video Frame Prediction [10.404162481860634]
Experimental results show that the residual FCNN architecture performs best in terms of peak signal-to-noise ratio (PSNR), at the expense of higher training and test (inference) computational complexity.
The CRNN can be trained stably and very efficiently using the stateful truncated backpropagation through time procedure.
arXiv Detail & Related papers (2020-08-13T20:45:28Z)
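The summary above names stateful truncated backpropagation through time as the key to stable, efficient CRNN training. A minimal PyTorch sketch of that procedure (model and data are placeholders): the hidden state persists across chunks, but detaching it cuts the gradient history at chunk boundaries.

```python
import torch
import torch.nn as nn

# Placeholder model: an LSTM predicting the next input vector.
rnn = nn.LSTM(input_size=8, hidden_size=16, batch_first=True)
head = nn.Linear(16, 8)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()))

seq = torch.randn(4, 100, 8)             # (B, T, F) dummy sequence
state = None
for chunk in seq.split(20, dim=1):       # truncation length k = 20
    out, state = rnn(chunk, state)
    # One-step-ahead prediction loss within the chunk.
    loss = nn.functional.mse_loss(head(out[:, :-1]), chunk[:, 1:])
    opt.zero_grad()
    loss.backward()
    opt.step()
    # Stateful: keep the state values, drop their gradient history.
    state = tuple(s.detach() for s in state)
```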
- The Heterogeneity Hypothesis: Finding Layer-Wise Differentiated Network Architectures [179.66117325866585]
We investigate a design space that is usually overlooked, i.e. adjusting the channel configurations of predefined networks.
We find that this adjustment can be achieved by shrinking widened baseline networks and leads to superior performance.
Experiments are conducted on various networks and datasets for image classification, visual tracking and image restoration.
arXiv Detail & Related papers (2020-06-29T17:59:26Z)
- Stacked Bidirectional and Unidirectional LSTM Recurrent Neural Network for Forecasting Network-wide Traffic State with Missing Values [23.504633202965376]
We focus on RNN-based models and reformulate how RNNs and their variants are incorporated into traffic prediction models.
A stacked bidirectional and unidirectional LSTM network architecture (SBU-LSTM) is proposed to assist the design of neural network structures for traffic state forecasting.
We also propose a data imputation mechanism in the LSTM structure (LSTM-I) by designing an imputation unit to infer missing values and assist traffic prediction.
arXiv Detail & Related papers (2020-05-24T00:17:15Z)
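The LSTM-I design above is described only at a high level; a plausible minimal sketch is to substitute the cell's own one-step-ahead prediction wherever an observation is missing. Everything below (shapes, the readout, the masking rule) is an assumption for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

class LSTMImputation(nn.Module):
    """Sketch of an LSTM-I style imputation unit for traffic data."""

    def __init__(self, n_sensors, hidden=64):
        super().__init__()
        self.cell = nn.LSTMCell(n_sensors, hidden)
        self.readout = nn.Linear(hidden, n_sensors)

    def forward(self, x, mask):
        # x: (B, T, S) readings; mask: (B, T, S), 1 = observed, 0 = missing.
        B, T, S = x.shape
        h = x.new_zeros(B, self.cell.hidden_size)
        c = x.new_zeros(B, self.cell.hidden_size)
        x_hat = x.new_zeros(B, S)             # estimate for the first step
        preds = []
        for t in range(T):
            # Keep observed values; fill the gaps with the model's estimate.
            x_t = mask[:, t] * x[:, t] + (1 - mask[:, t]) * x_hat
            h, c = self.cell(x_t, (h, c))
            x_hat = self.readout(h)           # one-step-ahead prediction
            preds.append(x_hat)
        return torch.stack(preds, dim=1)      # (B, T, S) predictions
```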
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.