Training Stronger Baselines for Learning to Optimize
- URL: http://arxiv.org/abs/2010.09089v1
- Date: Sun, 18 Oct 2020 20:05:48 GMT
- Title: Training Stronger Baselines for Learning to Optimize
- Authors: Tianlong Chen, Weiyi Zhang, Jingyang Zhou, Shiyu Chang, Sijia Liu,
Lisa Amini, Zhangyang Wang
- Abstract summary: We show that even the simplest L2O model could have been trained much better.
We leverage off-policy imitation learning to guide the L2O learning, by taking reference to the behavior of analytical optimizers.
Our improved training techniques are plugged into a variety of state-of-the-art L2O models, and immediately boost their performance.
- Score: 119.35557905664832
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to optimize (L2O) has gained increasing attention since classical
optimizers require laborious problem-specific design and hyperparameter tuning.
However, there is a gap between the practical demand and the achievable
performance of existing L2O models. Specifically, those learned optimizers are
applicable to only a limited class of problems, and often exhibit instability.
With many efforts devoted to designing more sophisticated L2O models, we argue
for another orthogonal, under-explored theme: the training techniques for those
L2O models. We show that even the simplest L2O model could have been trained
much better. We first present a progressive training scheme to gradually
increase the optimizer unroll length, to mitigate a well-known L2O dilemma of
truncation bias (shorter unrolling) versus gradient explosion (longer
unrolling). We further leverage off-policy imitation learning to guide the L2O
learning, by taking reference to the behavior of analytical optimizers. Our
improved training techniques are plugged into a variety of state-of-the-art L2O
models, and immediately boost their performance, without making any change to
their model structures. In particular, with our proposed techniques, one of the
earliest and simplest L2O models can be trained to outperform the latest, more
complicated L2O models on a number of tasks. Our results demonstrate a greater
potential of L2O yet to be unleashed, and urge a rethinking of the recent
progress. Our code is publicly available at:
https://github.com/VITA-Group/L2O-Training-Techniques.
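The abstract's two training ideas, progressive unrolling and off-policy imitation of an analytical optimizer, can be sketched on a toy problem. The snippet below is an illustrative assumption, not the paper's implementation: the "learned optimizer" is reduced to a single learnable step size `theta`, the objective is a 1-D quadratic, and the analytical reference is plain gradient descent with learning rate `ref_lr`. All function names are hypothetical.

```python
# Toy sketch of the two L2O training techniques on f(x) = x^2.
# The "learned optimizer" is a single learnable step size `theta`;
# a real L2O model would be a neural network producing per-coordinate updates.

def grad(x):
    return 2.0 * x  # gradient of f(x) = x^2

def unroll_meta_loss(theta, x0, unroll_len):
    """Meta-loss: objective value reached after `unroll_len` learned steps.
    Short unrolls bias training; long unrolls risk exploding gradients."""
    x = x0
    for _ in range(unroll_len):
        x = x - theta * grad(x)
    return x * x

def imitation_loss(theta, x0, unroll_len, ref_lr=0.1):
    """Off-policy imitation: penalize the gap between the learned update and
    the analytical optimizer's update while rolling out the *reference*
    trajectory (hence off-policy)."""
    x, total = x0, 0.0
    for _ in range(unroll_len):
        g = grad(x)
        total += (-theta * g - (-ref_lr * g)) ** 2
        x = x - ref_lr * g  # follow the analytical optimizer's trajectory
    return total

def unroll_schedule(start=5, full=40):
    """Progressive training: double the unroll length each stage until the
    full horizon is reached, rather than training at full length from the
    start."""
    lengths, n = [], start
    while n < full:
        lengths.append(n)
        n *= 2
    lengths.append(full)
    return lengths
```

In this toy setting, a curriculum such as `unroll_schedule(5, 40)` yields stages `[5, 10, 20, 40]`, and `imitation_loss` vanishes exactly when the learned step size matches the analytical one.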
Related papers
- Towards Constituting Mathematical Structures for Learning to Optimize [101.80359461134087]
A technique that utilizes machine learning to learn an optimization algorithm automatically from data has gained increasing attention in recent years.
A generic L2O approach parameterizes the iterative update rule and learns the update direction as a black-box network.
While the generic approach is widely applicable, the learned model can overfit and may not generalize well to out-of-distribution test sets.
We propose a novel L2O model with a mathematics-inspired structure that is broadly applicable and generalizes well to out-of-distribution problems.
arXiv Detail & Related papers (2023-05-29T19:37:28Z) - M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation [145.7321032755538]
Learning to Optimize (L2O) has drawn increasing attention as it often remarkably accelerates the optimization procedure of complex tasks.
This paper investigates a potential solution to the open challenge of generalization: meta-training an L2O that can perform fast test-time self-adaptation to an out-of-distribution task.
arXiv Detail & Related papers (2023-02-28T19:23:20Z) - Learning to Generalize Provably in Learning to Optimize [185.71326306329678]
Learning to optimize (L2O) has gained increasing popularity, which automates the design of optimizers by data-driven approaches.
Current L2O methods often suffer from poor generalization performance in at least two respects.
We propose to incorporate these two metrics as flatness-aware regularizers into the L2O framework.
arXiv Detail & Related papers (2023-02-22T01:17:31Z) - Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z) - A Simple Guard for Learned Optimizers [0.0]
We propose a new class of Safeguarded L2O, called Loss-Guarded L2O (LGL2O).
Safeguarded L2O can take a learned algorithm and safeguard it with a generic learning algorithm so that by conditionally switching between the two, the resulting algorithm is provably convergent.
We give a theoretical proof of LGL2O's convergence guarantee and empirical results comparing it to GL2O.
arXiv Detail & Related papers (2022-01-28T21:32:28Z) - Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.