Exploring Transformers for Large-Scale Speech Recognition
- URL: http://arxiv.org/abs/2005.09684v2
- Date: Tue, 11 Aug 2020 18:51:37 GMT
- Title: Exploring Transformers for Large-Scale Speech Recognition
- Authors: Liang Lu, Changliang Liu, Jinyu Li and Yifan Gong
- Abstract summary: We show that Transformers can achieve around 6% relative word error rate (WER) reduction compared to the BLSTM baseline in the offline condition.
In the streaming condition, Transformer-XL is comparable to LC-BLSTM with an 800-millisecond latency constraint.
- Score: 34.645597506707055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While recurrent neural networks still largely define state-of-the-art speech
recognition systems, the Transformer network has been proven to be a
competitive alternative, especially in the offline condition. Most studies with
Transformers have been constrained to a relatively small-scale setting, and some
form of data augmentation is usually applied to combat the data sparsity issue.
In this paper, we aim at understanding the behaviors of
Transformers in the large-scale speech recognition setting, where we have used
around 65,000 hours of training data. We investigated various aspects of
scaling up Transformers, including model initialization, warmup training, as
well as different Layer Normalization strategies. In the streaming condition,
we compared the widely used attention mask based future context lookahead
approach to the Transformer-XL network. From our experiments, we show that
Transformers can achieve around 6% relative word error rate (WER) reduction
compared to the BLSTM baseline in the offline condition, while in the streaming
condition, Transformer-XL is comparable to LC-BLSTM with an 800-millisecond
latency constraint.
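The attention-mask lookahead approach compared above can be made concrete with a short sketch. This is not the paper's implementation; the frame shift and window sizes below are illustrative assumptions. The idea is that every frame attends to the full past but only a bounded window of future frames, so the size of the lookahead window directly sets the latency:

```python
import numpy as np

def lookahead_mask(num_frames: int, right_context: int) -> np.ndarray:
    """Boolean self-attention mask: frame i may attend to frame j
    whenever j - i <= right_context, i.e. full left context plus a
    bounded future lookahead. True marks an allowed attention edge."""
    idx = np.arange(num_frames)
    return (idx[None, :] - idx[:, None]) <= right_context

# Illustrative numbers only: with, say, a 40 ms effective frame shift,
# a right context of 20 frames would correspond to the 800 ms latency
# budget used in the comparison against LC-BLSTM.
mask = lookahead_mask(num_frames=6, right_context=2)
print(mask.astype(int))
```

In practice such a mask is applied by setting the disallowed positions to negative infinity in the attention logits before the softmax.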
Related papers
- Error Correction Code Transformer [92.10654749898927]
We propose to extend, for the first time, the Transformer architecture to the soft decoding of linear codes at arbitrary block lengths.
We encode each channel output dimension into a high-dimensional representation so that the bit information can be processed separately.
The proposed approach demonstrates the extreme power and flexibility of Transformers and outperforms existing state-of-the-art neural decoders by large margins at a fraction of their time complexity.
arXiv Detail & Related papers (2022-03-27T15:25:58Z) - Sparse is Enough in Scaling Transformers [12.561317511514469]
Large Transformer models yield impressive results on many tasks, but are expensive to train, or even fine-tune, and so slow at decoding that their use and study becomes out of reach.
We propose Scaling Transformers, a family of next generation Transformer models that use sparse layers to scale efficiently and perform unbatched decoding much faster than the standard Transformer.
arXiv Detail & Related papers (2021-11-24T19:53:46Z) - Scalable Transformers for Neural Machine Translation [86.4530299266897]
Transformer has been widely adopted in Neural Machine Translation (NMT) because of its large capacity and parallel training of sequence generation.
We propose a novel scalable Transformer, which naturally contains sub-Transformers of different scales with shared parameters.
A three-stage training scheme is proposed to tackle the difficulty of training the scalable Transformers.
arXiv Detail & Related papers (2021-06-04T04:04:10Z) - Spatiotemporal Transformer for Video-based Person Re-identification [102.58619642363958]
We show that, despite the strong learning ability, the vanilla Transformer suffers from an increased risk of over-fitting.
We propose a novel pipeline where the model is pre-trained on a set of synthesized video data and then transferred to the downstream domains.
The derived algorithm achieves significant accuracy gain on three popular video-based person re-identification benchmarks.
arXiv Detail & Related papers (2021-03-30T16:19:27Z) - Wake Word Detection with Streaming Transformers [72.66551640048405]
Our experiments on the Mobvoi wake word dataset show that the proposed Transformer model outperforms the baseline convolutional network by 25% on average in false rejection rate at the same false alarm rate.
arXiv Detail & Related papers (2021-02-08T19:14:32Z) - Parameter Efficient Multimodal Transformers for Video Representation Learning [108.8517364784009]
This work focuses on reducing the parameters of multimodal Transformers in the context of audio-visual video representation learning.
We show that our approach reduces the number of parameters by up to 80%, allowing us to train our model end-to-end from scratch.
To demonstrate our approach, we pretrain our model on 30-second clips from Kinetics-700 and transfer it to audio-visual classification tasks.
arXiv Detail & Related papers (2020-12-08T00:16:13Z) - Developing Real-time Streaming Transformer Transducer for Speech Recognition on Large-scale Dataset [37.619200507404145]
We present Transformer Transducer (T-T) models for first-pass decoding with low latency and fast speed on a large-scale dataset.
We combine the idea of Transformer-XL and chunk-wise streaming processing to design a streamable Transformer Transducer model (a minimal sketch of this chunk-wise pattern appears after this list).
We demonstrate that T-T outperforms the hybrid model, RNN Transducer (RNN-T), and streamable Transformer attention-based encoder-decoder model in the streaming scenario.
arXiv Detail & Related papers (2020-10-22T03:01:21Z) - Applying the Transformer to Character-level Transduction [68.91664610425114]
The transformer has been shown to outperform recurrent neural network-based sequence-to-sequence models in various word-level NLP tasks.
We show that with a large enough batch size, the transformer does indeed outperform recurrent models for character-level tasks.
arXiv Detail & Related papers (2020-05-20T17:25:43Z) - Transformer Networks for Trajectory Forecasting [11.802437934289062]
We propose the novel use of Transformer Networks for trajectory forecasting.
This is a fundamental switch from the sequential, step-by-step processing of LSTMs to the attention-only memory mechanisms of Transformers.
arXiv Detail & Related papers (2020-03-18T09:17:49Z)
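The Transformer-XL-style streaming referenced both in the main abstract and in the Transformer Transducer entry above replaces the lookahead mask with chunk-wise processing over a cached memory of past activations. The following is a minimal sketch of that pattern under stated assumptions, with a hypothetical stand-in attention function rather than any paper's actual model:

```python
import torch

def stream_chunks(layer, frames, chunk_size=16, mem_len=64):
    """Chunk-wise streaming (Transformer-XL style): process frames (T, D)
    one chunk at a time; each chunk attends to a bounded cache of past
    activations instead of the full history, keeping latency constant."""
    memory = frames.new_zeros(0, frames.size(-1))    # empty cache, shape (0, D)
    outputs = []
    for start in range(0, frames.size(0), chunk_size):
        chunk = frames[start:start + chunk_size]     # current chunk, (<=chunk_size, D)
        context = torch.cat([memory, chunk], dim=0)  # cached past + current chunk
        outputs.append(layer(chunk, context))        # attend over cache + chunk only
        memory = context[-mem_len:].detach()         # keep only the newest mem_len frames
    return torch.cat(outputs, dim=0)

# Toy usage with a simple attention stand-in (illustrative, not a real model):
D = 8
proj = torch.nn.Linear(D, D)
def layer(q, ctx):
    att = torch.softmax(proj(q) @ ctx.T / D ** 0.5, dim=-1)
    return att @ ctx

print(stream_chunks(layer, torch.randn(100, D)).shape)  # torch.Size([100, 8])
```

A real Transformer-XL layer would add relative positional embeddings and a per-layer cache; the point here is only the bounded-memory recurrence, which makes latency independent of utterance length.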