Abstract: State-of-the-art neural language models (LMs) represented by Transformers are
highly complex. Their use of fixed, deterministic parameter estimates fails to
account for model uncertainty and leads to over-fitting and poor generalization
when given limited training data. To address these issues, this paper
proposes a full Bayesian learning framework for Transformer LM estimation.
Efficient variational inference-based approaches are used to estimate the
latent parameter posterior distributions associated with different parts of the
Transformer model architecture, including the multi-head self-attention,
feed-forward and embedding layers. Statistically significant word error rate
(WER) reductions of up to 0.5\% absolute (3.18\% relative) and consistent perplexity
gains were obtained over the baseline Transformer LMs on state-of-the-art
Switchboard corpus-trained LF-MMI factored TDNN systems with i-Vector speaker
adaptation. Performance improvements were also obtained on a cross-domain LM
adaptation task that requires porting a Transformer LM trained on the Switchboard
and Fisher data to a low-resource DementiaBank elderly speech corpus.
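As an illustrative sketch only (using notation not taken from this abstract), variational inference approaches of the kind referred to above typically maximize an evidence lower bound, where $q_\lambda(\theta)$ denotes an approximate posterior over the Transformer LM parameters $\theta$, $p(\theta)$ a prior, and $\mathcal{D}$ the training data:
% Standard evidence lower bound (ELBO); the symbols q_\lambda(\theta), p(\theta)
% and \mathcal{D} are assumed for illustration and are not the paper's own notation.
\[
\mathcal{L}(\lambda)
  = \mathbb{E}_{q_\lambda(\theta)}\!\left[\log p(\mathcal{D}\mid\theta)\right]
  - \mathrm{KL}\!\left(q_\lambda(\theta)\,\|\,p(\theta)\right)
  \;\le\; \log p(\mathcal{D}).
\]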