Applying Cyclical Learning Rate to Neural Machine Translation
- URL: http://arxiv.org/abs/2004.02401v1
- Date: Mon, 6 Apr 2020 04:45:49 GMT
- Title: Applying Cyclical Learning Rate to Neural Machine Translation
- Authors: Choon Meng Lee, Jianfeng Liu, Wei Peng
- Abstract summary: We show how cyclical learning rates can be applied to train transformer-based neural networks for neural machine translation.
We establish guidelines for applying cyclical learning rates to neural machine translation tasks.
- Score: 6.715895949288471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In training deep learning networks, the optimizer and the related learning rate
are often used without much thought or with minimal tuning, even though they are
crucial for fast convergence to a good-quality minimum of the loss function that
also generalizes well on the test dataset. Drawing inspiration from the successful
application of cyclical learning rate policies to computer-vision convolutional
networks and datasets, we explore how cyclical learning rates can be applied to
train transformer-based neural networks for neural machine translation. From our
carefully designed experiments, we show that the choice of optimizer and the
associated cyclical learning rate policy can have a significant impact on
performance. In addition, we establish guidelines for applying cyclical learning
rates to neural machine translation tasks. With our work, we hope to raise
awareness of the importance of selecting the right optimizer and the accompanying
learning rate policy and, at the same time, to encourage further research into
easy-to-use learning rate policies.
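To make the kind of policy discussed above concrete, the sketch below implements the triangular cyclical learning rate schedule introduced by Smith, in which the learning rate rises linearly from a lower bound to an upper bound over a fixed number of iterations and then falls back again. This is a minimal illustration only: the bounds, the cycle length, and the plain-Python formulation are assumptions for demonstration and are not taken from the paper's experimental setup.

    import math

    def triangular_clr(iteration, base_lr=1e-4, max_lr=1e-3, step_size=2000):
        """Triangular cyclical learning rate.

        The learning rate increases linearly from base_lr to max_lr over
        step_size iterations, then decreases back to base_lr, and repeats.
        The bounds and step_size here are illustrative placeholders, not
        values reported in the paper.
        """
        cycle = math.floor(1 + iteration / (2 * step_size))
        x = abs(iteration / step_size - 2 * cycle + 1)
        return base_lr + (max_lr - base_lr) * max(0.0, 1.0 - x)

    # Example: inspect the schedule at a few points in the first two cycles.
    if __name__ == "__main__":
        for it in (0, 1000, 2000, 3000, 4000):
            print(it, round(triangular_clr(it), 6))

In a training loop, the returned value would be assigned to the optimizer's learning rate at every step (for example, by updating each parameter group's lr field in PyTorch); frameworks such as PyTorch also provide a built-in equivalent in torch.optim.lr_scheduler.CyclicLR.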
Related papers
- A Unified Framework for Neural Computation and Learning Over Time [56.44910327178975]
Hamiltonian Learning is a novel unified framework for learning with neural networks "over time".
It is based on differential equations that: (i) can be integrated without the need of external software solvers; (ii) generalize the well-established notion of gradient-based learning in feed-forward and recurrent networks; (iii) open to novel perspectives.
arXiv Detail & Related papers (2024-09-18T14:57:13Z) - Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron [3.069335774032178]
We use a dataset-process approach to derive flow equations describing learning.
We characterize the effects of the learning rule (supervised or reinforcement learning, SL/RL) and input-data distribution on the perceptron's learning curve.
This approach points a way toward analyzing learning dynamics for more-complex circuit architectures.
arXiv Detail & Related papers (2024-09-05T17:58:28Z) - Normalization and effective learning rates in reinforcement learning [52.59508428613934]
Normalization layers have recently experienced a renaissance in the deep reinforcement learning and continual learning literature.
We show that normalization brings with it a subtle but important side effect: an equivalence between growth in the norm of the network parameters and decay in the effective learning rate.
We propose to make the learning rate schedule explicit with a simple reparameterization which we call Normalize-and-Project.
arXiv Detail & Related papers (2024-07-01T20:58:01Z) - Meta-Learning Strategies through Value Maximization in Neural Networks [7.285835869818669]
We present a learning effort framework capable of efficiently optimizing control signals on a fully normative objective.
We apply this framework to investigate the effect of approximations in common meta-learning algorithms.
Across settings, we find that control effort is most beneficial when applied to easier aspects of a task early in learning.
arXiv Detail & Related papers (2023-10-30T18:29:26Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Minimizing Control for Credit Assignment with Strong Feedback [65.59995261310529]
Current methods for gradient-based credit assignment in deep neural networks need infinitesimally small feedback signals.
We combine strong feedback influences on neural activity with gradient-based learning and show that this naturally leads to a novel view on neural network optimization.
We show that the use of strong feedback in DFC allows learning forward and feedback connections simultaneously, using a learning rule fully local in space and time.
arXiv Detail & Related papers (2022-04-14T22:06:21Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - AutoLR: An Evolutionary Approach to Learning Rate Policies [2.3577368017815705]
This work presents AutoLR, a framework that evolves Learning Rate Schedulers for a specific Neural Network Architecture.
Results show that training performed using certain evolved policies is more efficient than the established baseline.
arXiv Detail & Related papers (2020-07-08T16:03:44Z) - Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z) - Inter- and Intra-domain Knowledge Transfer for Related Tasks in Deep
Character Recognition [2.320417845168326]
Pre-training a deep neural network on the ImageNet dataset is a common practice for training deep learning models.
The technique of pre-training on one task and then retraining on a new one is called transfer learning.
In this paper we analyse the effectiveness of using deep transfer learning for character recognition tasks.
arXiv Detail & Related papers (2020-01-02T14:18:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all of the above) and is not responsible for any consequences of its use.