Multi-mode Transformer Transducer with Stochastic Future Context
- URL: http://arxiv.org/abs/2106.09760v1
- Date: Thu, 17 Jun 2021 18:42:11 GMT
- Title: Multi-mode Transformer Transducer with Stochastic Future Context
- Authors: Kwangyoun Kim, Felix Wu, Prashant Sridhar, Kyu J. Han, Shinji Watanabe
- Abstract summary: Multi-mode speech recognition models can process longer future context to achieve higher accuracy, and when the latency budget is not flexible, they can still achieve reliable accuracy.
We show that a Multi-mode ASR model rivals, if not surpasses, a set of competitive streaming baselines trained with different latency budgets.
- Score: 53.005638503544866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automatic speech recognition (ASR) models make fewer errors when more
surrounding speech information is presented as context. Unfortunately,
acquiring a larger future context leads to higher latency. There exists an
inevitable trade-off between speed and accuracy. Naively, to fit different
latency requirements, people have to store multiple models and pick the best
one under the constraints. Instead, a more desirable approach is to have a
single model that can dynamically adjust its latency based on different
constraints, which we refer to as Multi-mode ASR. A Multi-mode ASR model can
fulfill various latency requirements during inference -- when a larger latency
becomes acceptable, the model can process longer future context to achieve
higher accuracy and when a latency budget is not flexible, the model can be
less dependent on future context but still achieve reliable accuracy. In
pursuit of Multi-mode ASR, we propose Stochastic Future Context, a simple
training procedure that samples one streaming configuration in each iteration.
Through extensive experiments on AISHELL-1 and LibriSpeech datasets, we show
that a Multi-mode ASR model rivals, if not surpasses, a set of competitive
streaming baselines trained with different latency budgets.
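The training procedure described in the abstract amounts to drawing one streaming configuration (a look-ahead size) per iteration and constraining the encoder's self-attention accordingly. Below is a minimal PyTorch sketch of that idea; the candidate set of right-context sizes and all names are illustrative assumptions, not the authors' implementation.
```python
import random
import torch

# Candidate streaming configurations: how many future (look-ahead) frames the
# encoder may attend to. 0 = fully streaming. Values are illustrative only.
RIGHT_CONTEXT_CHOICES = [0, 2, 4, 8]


def future_context_mask(num_frames: int, right_context: int) -> torch.Tensor:
    """Boolean mask whose entry (i, j) is True iff frame i may attend to frame j,
    i.e. j <= i + right_context."""
    idx = torch.arange(num_frames)
    return idx.unsqueeze(1) + right_context >= idx.unsqueeze(0)


# Per-iteration sampling: each training step draws one configuration, so a
# single model is exposed to every latency budget during training.
for step in range(3):
    right_context = random.choice(RIGHT_CONTEXT_CHOICES)
    mask = future_context_mask(num_frames=6, right_context=right_context)
    # In a real Transformer Transducer encoder, disallowed positions in this
    # mask would be set to -inf before the attention softmax.
    print(f"step {step}: right_context={right_context}\n{mask.int()}")
```
At inference time the same model can simply be given the mask matching whatever latency budget applies, which is what makes one set of weights serve multiple modes.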
Related papers
- Robust Predictions with Ambiguous Time Delays: A Bootstrap Strategy [5.71557730775514]
Time Series Model Bootstrap (TSMB) is a versatile framework designed to handle potentially varying or even nondeterministic time delays in time series modeling.
TSMB significantly bolsters the performance of models that are trained and make predictions using this framework, making it highly suitable for a wide range of dynamic and interconnected data environments.
arXiv Detail & Related papers (2024-08-23T02:38:20Z)
- Online Resource Allocation for Edge Intelligence with Colocated Model Retraining and Inference [5.6679198251041765]
We introduce an online approximation algorithm, named ORRIC, designed to optimize resource allocation for adaptively balancing accuracy of training model and inference.
The competitive ratio of ORRIC outperforms that of the traditional Inference-Only paradigm, especially when data persists for a sufficiently lengthy time.
arXiv Detail & Related papers (2024-05-25T03:05:19Z)
- TSLANet: Rethinking Transformers for Time Series Representation Learning [19.795353886621715]
Time series data is characterized by its intrinsic long and short-range dependencies.
We introduce a novel Time Series Lightweight Network (TSLANet) as a universal convolutional model for diverse time series tasks.
Our experiments demonstrate that TSLANet outperforms state-of-the-art models in various tasks spanning classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2024-04-12T13:41:29Z)
- TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models [52.454274602380124]
Diffusion models heavily depend on the time-step $t$ to achieve satisfactory multi-round denoising.
We propose a Temporal Feature Maintenance Quantization (TFMQ) framework building upon a Temporal Information Block.
Powered by the pioneering block design, we devise temporal information aware reconstruction (TIAR) and finite set calibration (FSC) to align the full-precision temporal features.
arXiv Detail & Related papers (2023-11-27T12:59:52Z)
- MultiModN- Multimodal, Multi-Task, Interpretable Modular Networks [31.59812777504438]
We present MultiModN, a network that fuses latent representations in a sequence of any number, combination, or type of modality.
We show that MultiModN's sequential MM fusion does not compromise performance compared with a baseline of parallel fusion.
arXiv Detail & Related papers (2023-09-25T13:16:57Z)
- Parameter-efficient Tuning of Large-scale Multimodal Foundation Model [68.24510810095802]
We propose a graceful prompt framework for cross-modal transfer (Aurora) to overcome these challenges.
Considering the redundancy in existing architectures, we first utilize the mode approximation to generate 0.1M trainable parameters to implement the multimodal prompt tuning.
A thorough evaluation on six cross-modal benchmarks shows that it not only outperforms the state-of-the-art but even outperforms the full fine-tuning approach.
arXiv Detail & Related papers (2023-05-15T06:40:56Z)
- Multi-scale Attention Flow for Probabilistic Time Series Forecasting [68.20798558048678]
We propose a novel non-autoregressive deep learning model, called Multi-scale Attention Normalizing Flow (MANF).
Our model avoids the influence of cumulative error and does not increase the time complexity.
Our model achieves state-of-the-art performance on many popular multivariate datasets.
arXiv Detail & Related papers (2022-05-16T07:53:42Z)
- TMS: A Temporal Multi-scale Backbone Design for Speaker Embedding [60.292702363839716]
Current SOTA backbone networks for speaker embedding are designed to aggregate multi-scale features from an utterance with multi-branch network architectures for speaker representation.
We propose an effective temporal multi-scale (TMS) model where multi-scale branches could be efficiently designed in a speaker embedding network almost without increasing computational costs.
arXiv Detail & Related papers (2022-03-17T05:49:35Z)
- Streaming end-to-end multi-talker speech recognition [34.76106500736099]
We propose the Streaming Unmixing and Recognition Transducer (SURT) for end-to-end multi-talker speech recognition.
Our model employs the Recurrent Neural Network Transducer (RNN-T) as the backbone that can meet various latency constraints.
Based on experiments on the publicly available LibriSpeechMix dataset, we show that HEAT can achieve better accuracy compared with PIT.
arXiv Detail & Related papers (2020-11-26T06:28:04Z)
- A Streaming On-Device End-to-End Model Surpassing Server-Side Conventional Model Quality and Latency [88.08721721440429]
We develop a first-pass Recurrent Neural Network Transducer (RNN-T) model and a second-pass Listen, Attend, Spell (LAS) rescorer.
We find that RNN-T+LAS offers a better WER and latency tradeoff compared to a conventional model.
arXiv Detail & Related papers (2020-03-28T05:00:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.