Dynamically Adjusting Transformer Batch Size by Monitoring Gradient
Direction Change
- URL: http://arxiv.org/abs/2005.02008v1
- Date: Tue, 5 May 2020 08:47:34 GMT
- Title: Dynamically Adjusting Transformer Batch Size by Monitoring Gradient
Direction Change
- Authors: Hongfei Xu and Josef van Genabith and Deyi Xiong and Qiuhui Liu
- Abstract summary: We analyze how increasing batch size affects gradient direction.
We propose to evaluate the stability of gradients with their angle change.
Our approach dynamically determines proper and efficient batch sizes during training.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The choice of hyper-parameters affects the performance of neural models.
While much previous research (Sutskever et al., 2013; Duchi et al., 2011;
Kingma and Ba, 2015) focuses on accelerating convergence and reducing the
effects of the learning rate, comparatively few papers concentrate on the
effect of batch size. In this paper, we analyze how increasing batch size
affects gradient direction, and propose to evaluate the stability of gradients
with their angle change. Based on our observations, the angle change of
gradient direction first tends to stabilize (i.e. gradually decrease) while
accumulating mini-batches, and then starts to fluctuate. We propose to
automatically and dynamically determine batch sizes by accumulating gradients
of mini-batches and performing an optimization step at just the time when the
direction of gradients starts to fluctuate. To improve the efficiency of our
approach for large models, we propose a sampling approach to select gradients
of parameters sensitive to the batch size. Our approach dynamically determines
proper and efficient batch sizes during training. In our experiments on the WMT
14 English to German and English to French tasks, our approach improves the
Transformer with a fixed 25k batch size by +0.73 and +0.82 BLEU respectively.
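The accumulate-until-fluctuation idea can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's exact algorithm: the function names, the single-vector gradient, and the stopping heuristic (stop when the angle between the running sum and its next update rises instead of continuing to decrease) are all illustrative, and the paper's sampling of batch-size-sensitive parameters is omitted.

```python
import numpy as np

def angle_change(g_accum, g_new):
    """Angle (radians) between the accumulated gradient and the
    accumulation after adding one more mini-batch gradient."""
    g_next = g_accum + g_new
    cos = np.dot(g_accum, g_next) / (
        np.linalg.norm(g_accum) * np.linalg.norm(g_next) + 1e-12
    )
    return np.arccos(np.clip(cos, -1.0, 1.0))

def dynamic_accumulate(grad_stream, max_steps=64):
    """Accumulate mini-batch gradients from an iterator until the angle
    change starts to fluctuate (increases instead of decreasing), then
    return the accumulated gradient for one optimizer step."""
    g_accum = next(grad_stream).copy()
    prev_angle = None
    for step, g in enumerate(grad_stream, start=2):
        angle = angle_change(g_accum, g)
        g_accum += g
        if prev_angle is not None and angle > prev_angle:
            break  # gradient direction started to fluctuate: stop here
        prev_angle = angle
        if step >= max_steps:
            break  # safety cap on the effective batch size
    return g_accum
```

Under this heuristic, aligned mini-batch gradients keep the angle change flat or shrinking, so accumulation continues; the first mini-batch whose direction disagrees sharply with the running sum triggers the optimizer step, yielding a batch size that adapts per update.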
Related papers
- Fisher-Orthogonal Projection Methods for Natural Gradient Descent with Large Batches [0.0]
We introduce Fisher-Orthogonal Projection (FOP), a technique that restores the effectiveness of the second-order method at very large batch sizes. FOP constructs a variance-aware update direction by leveraging two sub-batches, enhancing the average gradient with a component of the gradient difference.
arXiv Detail & Related papers (2025-08-19T15:02:22Z) - Discrete error dynamics of mini-batch gradient descent for least squares regression [4.159762735751163]
We study the dynamics of mini-batch gradient descent for least squares regression when sampling without replacement.
We also study discretization effects that a continuous-time gradient flow analysis cannot detect, and show that minibatch gradient descent converges to a step-size dependent solution.
arXiv Detail & Related papers (2024-06-06T02:26:14Z) - Surge Phenomenon in Optimal Learning Rate and Batch Size Scaling [27.058009599819012]
We study the connection between optimal learning rates and batch sizes for Adam-style optimizers.
We prove that the optimal learning rate first rises and then falls as the batch size increases.
arXiv Detail & Related papers (2024-05-23T13:52:36Z) - ELRA: Exponential learning rate adaption gradient descent optimization
method [83.88591755871734]
We present a novel, fast (exponential rate), ab initio (hyper-free) gradient based adaption.
The main idea of the method is to adapt the learning rate $\alpha$ by situational awareness.
It can be applied to problems of any dimension $n$ and scales only linearly.
arXiv Detail & Related papers (2023-09-12T14:36:13Z) - Step-size Adaptation Using Exponentiated Gradient Updates [21.162404996362948]
We show that augmenting a given optimizer with an adaptive step-size tuning method greatly improves its performance.
We maintain a global step-size scale for the update as well as a gain factor for each coordinate.
We show that our approach can achieve compelling accuracy on standard models without using any specially tuned learning rate schedule.
arXiv Detail & Related papers (2022-01-31T23:17:08Z) - Adapting Stepsizes by Momentumized Gradients Improves Optimization and
Generalization [89.66571637204012]
AdaMomentum performs well on vision tasks and consistently achieves state-of-the-art results on other tasks including language processing.
arXiv Detail & Related papers (2021-06-22T03:13:23Z) - Decreasing scaling transition from adaptive gradient descent to
stochastic gradient descent [1.7874193862154875]
We propose a decreasing scaling transition from adaptive gradient descent to gradient descent method DSTAda.
Our experimental results show that DSTAda has a faster speed, higher accuracy, and better stability and robustness.
arXiv Detail & Related papers (2021-06-12T11:28:58Z) - Self-Tuning Stochastic Optimization with Curvature-Aware Gradient
Filtering [53.523517926927894]
We explore the use of exact per-sample Hessian-vector products and gradients to construct self-tuning quadratics.
We prove that our model-based procedure converges in the noisy gradient setting.
This is an interesting step for constructing self-tuning quadratics.
arXiv Detail & Related papers (2020-11-09T22:07:30Z) - Adaptive Gradient Method with Resilience and Momentum [120.83046824742455]
We propose an Adaptive Gradient Method with Resilience and Momentum (AdaRem)
AdaRem adjusts the parameter-wise learning rate according to whether a parameter's past update direction is aligned with the direction of its current gradient.
Our method outperforms previous adaptive learning rate-based algorithms in terms of the training speed and the test error.
arXiv Detail & Related papers (2020-10-21T14:49:00Z) - On the Generalization Benefit of Noise in Stochastic Gradient Descent [34.127525925676416]
It has long been argued that minibatch gradient descent can generalize better than large batch gradient descent in deep neural networks.
We show that small or moderately large batch sizes can substantially outperform very large batches on the test set.
arXiv Detail & Related papers (2020-06-26T16:18:54Z) - AdamP: Slowing Down the Slowdown for Momentum Optimizers on
Scale-invariant Weights [53.8489656709356]
Normalization techniques are a boon for modern deep learning.
It is often overlooked, however, that the additional introduction of momentum results in a rapid reduction in effective step sizes for scale-invariant weights.
In this paper, we verify that the widely adopted combination of the two ingredients leads to the premature decay of effective step sizes and sub-optimal model performance.
arXiv Detail & Related papers (2020-06-15T08:35:15Z) - Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.