Learning to Generalize Provably in Learning to Optimize
- URL: http://arxiv.org/abs/2302.11085v2
- Date: Tue, 28 Mar 2023 17:57:05 GMT
- Title: Learning to Generalize Provably in Learning to Optimize
- Authors: Junjie Yang, Tianlong Chen, Mingkang Zhu, Fengxiang He, Dacheng Tao,
Yingbin Liang, Zhangyang Wang
- Abstract summary: Learning to optimize (L2O), which automates the design of optimizers by data-driven approaches, has gained increasing popularity.
Current L2O methods often suffer from poor generalization performance in at least two respects.
We propose to incorporate two flatness metrics, the local entropy and the Hessian, as flatness-aware regularizers into the L2O framework.
- Score: 185.71326306329678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to optimize (L2O) has gained increasing popularity, which automates
the design of optimizers by data-driven approaches. However, current L2O
methods often suffer from poor generalization performance in at least two
respects: (i) applying the L2O-learned optimizer to unseen optimizees, in terms
of lowering their loss function values (optimizer generalization, or
"generalizable learning of optimizers"); and (ii) the test performance of an
optimizee (itself a machine learning model), trained by the optimizer, in terms
of its accuracy on unseen data (optimizee generalization, or "learning to
generalize"). While the optimizer generalization has been recently studied,
the optimizee generalization (or learning to generalize) has not been
rigorously studied in the L2O context, which is the aim of this paper. We first
theoretically establish an implicit connection between the local entropy and
the Hessian, and hence unify their roles in the handcrafted design of
generalizable optimizers as equivalent metrics of the landscape flatness of
loss functions. We then propose to incorporate these two metrics as
flatness-aware regularizers into the L2O framework in order to meta-train
optimizers to learn to generalize, and theoretically show that such
generalization ability can be learned during the L2O meta-training process and
then transferred to the optimizee loss function. Extensive experiments
consistently validate the effectiveness of our proposals with substantially
improved generalization on multiple sophisticated L2O models and diverse
optimizees. Our code is available at:
https://github.com/VITA-Group/Open-L2O/tree/main/Model_Free_L2O/L2O-Entropy.
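As a concrete illustration of the flatness-aware regularization described above, the following minimal sketch penalizes a Hutchinson estimate of the Hessian trace of the optimizee loss, one of the two flatness metrics the abstract treats as equivalent. This is an illustrative PyTorch sketch under stated assumptions, not the released L2O-Entropy code; the names hessian_trace_penalty, n_probes, and the weight lam are introduced here for demonstration only.

```python
# Illustrative sketch (not the released L2O-Entropy implementation) of a
# flatness-aware regularizer: a Hutchinson estimate of tr(H) for the
# optimizee loss, added to the (meta-)training objective.
import torch


def hessian_trace_penalty(loss_fn, params, n_probes=1):
    """Hutchinson estimator of the Hessian trace of loss_fn at `params`.

    loss_fn: callable mapping a list of parameter tensors to a scalar loss.
    params:  list of tensors with requires_grad=True.
    """
    loss = loss_fn(params)
    grads = torch.autograd.grad(loss, params, create_graph=True)
    trace_est = 0.0
    for _ in range(n_probes):
        # Rademacher probe vectors v with entries in {-1, +1}
        vs = [torch.randint_like(p, 2) * 2 - 1 for p in params]
        # Hessian-vector product Hv via a second differentiation pass
        hvps = torch.autograd.grad(
            grads, params, grad_outputs=vs, retain_graph=True, create_graph=True
        )
        trace_est = trace_est + sum((v * hv).sum() for v, hv in zip(vs, hvps))
    return trace_est / n_probes


if __name__ == "__main__":
    # Toy least-squares optimizee; in full L2O meta-training the penalty would
    # be added to the optimizee loss so the learned optimizer favors flat minima.
    w = torch.randn(10, requires_grad=True)
    x, y = torch.randn(32, 10), torch.randn(32)
    loss_fn = lambda ps: ((x @ ps[0] - y) ** 2).mean()
    lam = 1e-3  # illustrative regularization weight
    meta_loss = loss_fn([w]) + lam * hessian_trace_penalty(loss_fn, [w])
    meta_loss.backward()  # gradients flow to w (and, in full L2O, to the optimizer)
```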
Related papers
- Two Optimizers Are Better Than One: LLM Catalyst Empowers Gradient-Based Optimization for Prompt Tuning [69.95292905263393]
We show that gradient-based optimization and large language models (LLMs) are complementary to each other, suggesting a collaborative optimization approach.
Our code is released at https://www.guozix.com/guozix/LLM-catalyst.
arXiv Detail & Related papers (2024-05-30T06:24:14Z) - Data-Driven Performance Guarantees for Classical and Learned Optimizers [2.0403774954994858]
We introduce a data-driven approach to analyze the performance of continuous optimization algorithms.
We study classical and learned optimizers for solving families of parametric optimization problems.
arXiv Detail & Related papers (2024-04-22T02:06:35Z) - Towards Constituting Mathematical Structures for Learning to Optimize [101.80359461134087]
A technique that uses machine learning to learn an optimization algorithm automatically from data has gained increasing attention in recent years.
A generic L2O approach parameterizes the iterative update rule and learns the update direction as a black-box network.
While the generic approach is widely applicable, the learned model can overfit and may not generalize well to out-of-distribution test sets.
We propose a novel L2O model with a mathematics-inspired structure that is broadly applicable and generalizes well to out-of-distribution problems.
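To make the generic black-box parameterization mentioned in this summary concrete, here is a minimal, hypothetical sketch of a coordinate-wise LSTM optimizer that learns the update direction from gradients. The class name BlackBoxOptimizer, the hidden size, and the toy quadratic optimizee are assumptions for illustration, not the architecture of any particular paper.

```python
# Minimal sketch of a "black-box" learned update rule: a small coordinate-wise
# LSTM maps each parameter's gradient to its update.
import torch
import torch.nn as nn


class BlackBoxOptimizer(nn.Module):
    """A coordinate-wise LSTM that maps gradients to parameter updates."""

    def __init__(self, hidden_size=20):
        super().__init__()
        self.lstm = nn.LSTMCell(1, hidden_size)  # each coordinate acts as a batch element
        self.head = nn.Linear(hidden_size, 1)    # hidden state -> scalar update

    def forward(self, grad, state=None):
        g = grad.detach().unsqueeze(-1)          # (n, 1): per-coordinate gradient features
        h, c = self.lstm(g, state)
        update = self.head(h).squeeze(-1)        # (n,): learned update direction
        return update, (h, c)


# Unrolled inner loop on a toy quadratic optimizee. In meta-training, the
# final optimizee loss would be backpropagated into opt_net's weights.
opt_net = BlackBoxOptimizer()
w = torch.zeros(5, requires_grad=True)
target = torch.randn(5)
state = None
for _ in range(10):
    loss = ((w - target) ** 2).sum()
    (grad,) = torch.autograd.grad(loss, w)
    update, state = opt_net(grad, state)
    w = w + 0.1 * update                         # apply the learned update
```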
arXiv Detail & Related papers (2023-05-29T19:37:28Z) - M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast
Self-Adaptation [145.7321032755538]
Learning to Optimize (L2O) has drawn increasing attention as it often remarkably accelerates the optimization procedure of complex tasks.
This paper investigates a potential solution to this open challenge by meta-training an L2O that can perform fast test-time self-adaptation to an out-of-distribution task.
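The following is a minimal, self-contained sketch of what test-time self-adaptation of a learned optimizer could look like: before tackling an out-of-distribution optimizee, the optimizer's own parameters are updated over a few short unrolls on that task. The LearnedOptimizer module, the number of adaptation steps, and the toy task are illustrative assumptions, not the M-L2O method itself.

```python
# Illustrative sketch of test-time self-adaptation for a learned optimizer.
import torch
import torch.nn as nn


class LearnedOptimizer(nn.Module):
    """Maps per-coordinate gradients to parameter updates."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, grad):
        return self.net(grad.unsqueeze(-1)).squeeze(-1)


def ood_task_loss(w):
    # Stand-in for an out-of-distribution optimizee.
    return ((w - 3.0) ** 2).sum()


opt_net = LearnedOptimizer()          # would be meta-trained in practice
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

# Test-time self-adaptation: a few short unrolls on the target task,
# backpropagating the final optimizee loss into the optimizer itself.
for _ in range(5):                    # adaptation steps
    w = torch.zeros(8, requires_grad=True)
    for _ in range(3):                # short inner unroll
        loss = ood_task_loss(w)
        (g,) = torch.autograd.grad(loss, w, create_graph=True)
        w = w - 0.1 * opt_net(g)
    meta_opt.zero_grad()
    ood_task_loss(w).backward()       # adapt opt_net's parameters
    meta_opt.step()
```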
arXiv Detail & Related papers (2023-02-28T19:23:20Z) - Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z) - Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.