PowerNorm: Rethinking Batch Normalization in Transformers
- URL: http://arxiv.org/abs/2003.07845v2
- Date: Sun, 28 Jun 2020 07:12:51 GMT
- Title: PowerNorm: Rethinking Batch Normalization in Transformers
- Authors: Sheng Shen, Zhewei Yao, Amir Gholami, Michael W. Mahoney, Kurt Keutzer
- Abstract summary: The standard normalization method for neural network (NN) models used in Natural Language Processing (NLP) is layer normalization (LN).
LN is preferred due to the empirical observation that a (naive/vanilla) use of batch normalization (BN) leads to significant performance degradation for NLP tasks.
We propose Power Normalization (PN), a novel normalization scheme that resolves this issue.
- Score: 96.14956636022957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The standard normalization method for neural network (NN) models used in
Natural Language Processing (NLP) is layer normalization (LN). This is
different than batch normalization (BN), which is widely-adopted in Computer
Vision. The preferred use of LN in NLP is principally due to the empirical
observation that a (naive/vanilla) use of BN leads to significant performance
degradation for NLP tasks; however, a thorough understanding of the underlying
reasons for this is not always evident. In this paper, we perform a systematic
study of NLP transformer models to understand why BN has a poor performance, as
compared to LN. We find that the statistics of NLP data across the batch
dimension exhibit large fluctuations throughout training. This results in
instability, if BN is naively implemented. To address this, we propose Power
Normalization (PN), a novel normalization scheme that resolves this issue by
(i) relaxing zero-mean normalization in BN, (ii) incorporating a running
quadratic mean instead of per batch statistics to stabilize fluctuations, and
(iii) using an approximate backpropagation for incorporating the running
statistics in the forward pass. We show theoretically, under mild assumptions,
that PN leads to a smaller Lipschitz constant for the loss, compared with BN.
Furthermore, we prove that the approximate backpropagation scheme leads to
bounded gradients. We extensively test PN for transformers on a range of NLP
tasks, and we show that it significantly outperforms both LN and BN. In
particular, PN outperforms LN by 0.4/0.6 BLEU on IWSLT14/WMT14 and 5.6/3.0 PPL
on PTB/WikiText-103. We make our code publicly available at
\url{https://github.com/sIncerass/powernorm}.
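The three ingredients listed in the abstract can be illustrated with a short layer sketch. The PyTorch code below is a minimal, illustrative sketch only (the class name `PowerNormSketch` and the momentum/epsilon defaults are assumptions), not the authors' implementation, which lives in the linked repository: it skips mean subtraction, divides by a running quadratic mean, and crudely stands in for the paper's approximate backpropagation by detaching the running statistic from the autograd graph.

```python
# Minimal, illustrative PowerNorm-style layer (not the authors' code).
import torch
import torch.nn as nn

class PowerNormSketch(nn.Module):
    def __init__(self, num_features, eps=1e-5, momentum=0.05):  # defaults are assumptions
        super().__init__()
        self.eps = eps
        self.momentum = momentum
        self.weight = nn.Parameter(torch.ones(num_features))
        self.bias = nn.Parameter(torch.zeros(num_features))
        # Running quadratic mean (one value per feature), instead of per-batch statistics.
        self.register_buffer("running_quad_mean", torch.ones(num_features))

    def forward(self, x):
        # x: (..., num_features); statistics are taken over all non-feature dims.
        if self.training:
            quad_mean = x.detach().pow(2).mean(dim=tuple(range(x.dim() - 1)))
            # Exponential moving average of the quadratic mean.
            with torch.no_grad():
                self.running_quad_mean.mul_(1 - self.momentum).add_(
                    quad_mean, alpha=self.momentum
                )
        # The running statistic is used in the forward pass; detaching it here is a
        # crude stand-in for the paper's approximate backpropagation scheme.
        denom = torch.sqrt(self.running_quad_mean.detach() + self.eps)
        # Note: no mean subtraction -- zero-mean normalization is relaxed.
        return self.weight * (x / denom) + self.bias
```

In this sketch, activations of shape (batch, seq, features) are divided by the same running per-feature statistic during both training and inference, so no on-the-fly batch statistics are needed at inference time.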
Related papers
- Overcoming Recency Bias of Normalization Statistics in Continual Learning: Balance and Adaptation [67.77048565738728]
Continual learning involves learning a sequence of tasks and balancing their knowledge appropriately.
We propose Adaptive Balance of BN (AdaB$2$N), which appropriately incorporates a Bayesian-based strategy to adapt task-wise contributions.
Our approach achieves significant performance gains across a wide range of benchmarks.
arXiv Detail & Related papers (2023-10-13T04:50:40Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Understanding the Failure of Batch Normalization for Transformers in NLP [16.476194435004732]
Batch Normalization (BN) is a technique to accelerate the training of deep neural networks.
However, BN fails to defend its position in Natural Language Processing (NLP), which is dominated by Layer Normalization (LN).
Regularized BN (RBN) improves the performance of BN consistently and outperforms or is on par with LN on 17 out of 20 settings.
arXiv Detail & Related papers (2022-10-11T05:18:47Z)
- Unified Normalization for Accelerating and Stabilizing Transformers [35.07454490355906]
Layer Normalization (LN) normalizes activations within each token to boost robustness.
LN requires on-the-fly statistics calculation in inference as well as division and square root operations.
We propose Unified Normalization (UN), which can speed up inference by being fused with other linear operations (see the generic folding sketch after this list).
arXiv Detail & Related papers (2022-08-02T08:41:31Z)
- Batch Normalization Preconditioning for Neural Network Training [7.709342743709842]
Batch normalization (BN) is a popular and ubiquitous method in deep learning.
BN is not suitable for use with very small mini-batch sizes or online learning.
We propose a new method called Batch Normalization Preconditioning (BNP)
arXiv Detail & Related papers (2021-08-02T18:17:26Z)
- MimicNorm: Weight Mean and Last BN Layer Mimic the Dynamic of Batch Normalization [60.36100335878855]
We propose a novel normalization method, named MimicNorm, to improve the convergence and efficiency in network training.
We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and drives the network into the chaotic regime, as a BN layer does.
MimicNorm achieves similar accuracy for various network structures, including ResNets and lightweight networks like ShuffleNet, while reducing memory consumption by about 20%.
arXiv Detail & Related papers (2020-10-19T07:42:41Z)
- Double Forward Propagation for Memorized Batch Normalization [68.34268180871416]
Batch Normalization (BN) has been a standard component in designing deep neural networks (DNNs).
We propose a memorized batch normalization (MBN) which considers multiple recent batches to obtain more accurate and robust statistics.
Compared to related methods, the proposed MBN exhibits consistent behaviors in both training and inference.
arXiv Detail & Related papers (2020-10-10T08:48:41Z)
- Towards Stabilizing Batch Statistics in Backward Propagation of Batch Normalization [126.6252371899064]
Moving Average Batch Normalization (MABN) is a novel normalization method.
We show that MABN can completely restore the performance of vanilla BN in small batch cases.
Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.
arXiv Detail & Related papers (2020-01-19T14:41:22Z)
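Among the related methods above, the speed-up claimed by Unified Normalization comes from replacing per-token, on-the-fly statistics with fixed ones, which makes the normalization foldable into an adjacent linear layer at inference. The sketch below shows only that generic folding trick (the function name, shapes, and defaults are assumptions made here); it is not the actual algorithm of UN or of any paper listed above, just an illustration of why fixed statistics remove the inference-time division and square root.

```python
# Generic inference-time folding of a normalization with *fixed* statistics
# into the linear layer that follows it (the well-known BN-folding trick).
import torch
import torch.nn as nn

@torch.no_grad()
def fold_norm_into_linear(linear: nn.Linear, mean, var, gamma, beta, eps=1e-5):
    """Return an nn.Linear equivalent to linear(norm(x)), where
    norm(x) = gamma * (x - mean) / sqrt(var + eps) + beta with fixed statistics."""
    scale = gamma / torch.sqrt(var + eps)   # (in_features,)
    shift = beta - mean * scale             # (in_features,)
    fused = nn.Linear(linear.in_features, linear.out_features, bias=True)
    # y = W (scale * x + shift) + b = (W * scale) x + (W @ shift + b)
    fused.weight.copy_(linear.weight * scale)  # scale each input column of W
    fused.bias.copy_(linear.weight @ shift +
                     (linear.bias if linear.bias is not None else 0.0))
    return fused

# Quick check that the fused layer matches norm -> linear on random input.
if __name__ == "__main__":
    d_in, d_out = 8, 4
    x = torch.randn(2, d_in)
    mean, var = torch.randn(d_in), torch.rand(d_in) + 0.5
    gamma, beta = torch.randn(d_in), torch.randn(d_in)
    lin = nn.Linear(d_in, d_out)
    ref = lin(gamma * (x - mean) / torch.sqrt(var + 1e-5) + beta)
    out = fold_norm_into_linear(lin, mean, var, gamma, beta)(x)
    assert torch.allclose(ref, out, atol=1e-5)
```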