Towards Constituting Mathematical Structures for Learning to Optimize
- URL: http://arxiv.org/abs/2305.18577v1
- Date: Mon, 29 May 2023 19:37:28 GMT
- Title: Towards Constituting Mathematical Structures for Learning to Optimize
- Authors: Jialin Liu, Xiaohan Chen, Zhangyang Wang, Wotao Yin, HanQin Cai
- Abstract summary: Learning to Optimize (L2O), a technique that utilizes machine learning to learn an optimization algorithm automatically from data, has gained increasing attention in recent years.
A generic L2O approach parameterizes the iterative update rule and learns the update direction as a black-box network.
While the generic approach is widely applicable, the learned model can overfit and may not generalize well to out-of-distribution test sets.
We propose a novel L2O model with a mathematics-inspired structure that is broadly applicable and generalizes well to out-of-distribution problems.
- Score: 101.80359461134087
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to Optimize (L2O), a technique that utilizes machine learning to
learn an optimization algorithm automatically from data, has gained increasing
attention in recent years. A generic L2O approach parameterizes the iterative
update rule and learns the update direction as a black-box network. While the
generic approach is widely applicable, the learned model can overfit and may
not generalize well to out-of-distribution test sets. In this paper, we derive
the basic mathematical conditions that successful update rules commonly
satisfy. Consequently, we propose a novel L2O model with a mathematics-inspired
structure that is broadly applicable and generalizes well to
out-of-distribution problems. Numerical simulations validate our theoretical
findings and demonstrate the superior empirical performance of the proposed L2O
model.
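For intuition, the generic parameterization the abstract refers to can be sketched in a few lines. Below is a minimal, illustrative sketch (not the authors' architecture): a tiny network `d_theta` with random, untrained weights produces a learned correction to a plain gradient step; in an actual L2O pipeline, `theta` would be meta-trained over a distribution of problems.

```python
import numpy as np

# Minimal sketch of the generic (black-box) L2O parameterization:
#     x_{k+1} = x_k - alpha * grad f(x_k) + d_theta(grad f(x_k))
# d_theta is a small learnable network producing the update correction.
# Illustrative only: theta is random here; in practice it is meta-trained
# over a distribution of optimization problems.

rng = np.random.default_rng(0)

# Toy optimizee: strongly convex quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.diag([1.0, 10.0])
b = np.array([1.0, -2.0])
grad_f = lambda x: A @ x - b

# Tiny two-layer net applied coordinate-wise to the gradient (a common
# L2O design, so one network handles optimizees of any dimension).
W1 = rng.normal(0.0, 0.1, size=(8, 1))
W2 = rng.normal(0.0, 0.1, size=(1, 8))

def d_theta(g):
    h = np.tanh(W1 @ g[None, :])   # (8, dim): shared weights per coordinate
    return (W2 @ h).ravel()        # (dim,): learned correction direction

x = np.zeros(2)
for _ in range(200):
    g = grad_f(x)
    x = x - 0.05 * g + d_theta(g)  # base gradient step + learned correction

print("final iterate:", x, "| true optimum:", np.linalg.solve(A, b))
```

The coordinate-wise design is a common L2O choice because a single network can then be applied to optimizees of any dimension.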
Related papers
- LLM4ED: Large Language Models for Automatic Equation Discovery [0.8644909837301149]
We introduce a new framework that utilizes natural language-based prompts to guide large language models in automatically mining governing equations from data.
Specifically, we first utilize the generation capability of LLMs to generate diverse equations in string form, and then evaluate the generated equations based on observations.
Experiments are extensively conducted on both partial differential equations and ordinary differential equations.
arXiv Detail & Related papers (2024-05-13T14:03:49Z)
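A rough sketch of the generate-then-evaluate loop summarized above, with the LLM's string-form equation proposals stubbed by a fixed candidate list (illustrative only; not the LLM4ED implementation):

```python
import numpy as np
import sympy as sp

# Hedged sketch of a generate-and-evaluate equation-discovery loop.
# In LLM4ED the candidates come from LLM prompts; here they are stubbed
# with a fixed list of string-form equations (purely illustrative).

x = sp.symbols("x")
candidates = ["2*x + 1", "x**2 - x", "sin(3*x)"]  # stand-ins for LLM output

# Synthetic observations from a hidden ground truth y = 2x + 1.
xs = np.linspace(-1.0, 1.0, 50)
ys = 2.0 * xs + 1.0

def mse(expr_str):
    """Score a candidate equation by mean-squared error on the data."""
    f = sp.lambdify(x, sp.sympify(expr_str), "numpy")
    pred = np.broadcast_to(f(xs), ys.shape)  # guards constant candidates
    return float(np.mean((pred - ys) ** 2))

best = min(candidates, key=mse)
print("best candidate:", best, "| mse:", mse(best))
```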
- Unleashing the Potential of Large Language Models as Prompt Optimizers: An Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of prompt optimizers based on large language models (LLMs).
We identify two pivotal factors in model parameter learning: update direction and update method.
In particular, we borrow the theoretical framework and learning methods from gradient-based optimization to design improved strategies.
arXiv Detail & Related papers (2024-02-27T15:05:32Z)
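To make the paper's gradient-based analogy concrete: in classical optimization, the update direction (here, the negative gradient) is distinct from the update method (here, momentum with a learning rate). A minimal sketch of that distinction, which is the analogy the summary refers to rather than the paper's LLM system:

```python
import numpy as np

# Two pivotal factors in an iterative update:
#  (1) the update direction, supplied here by the negative gradient, and
#  (2) the update method, which decides how the direction is applied
#      (here: momentum with a learning rate).
# Toy objective: f(x) = 0.5 * ||x||^2, so grad f(x) = x.

grad = lambda x: x

def momentum_step(x, v, lr=0.1, beta=0.9):
    """Update method: momentum accumulates and applies the direction."""
    v = beta * v - lr * grad(x)
    return x + v, v

x, v = np.array([5.0, -3.0]), np.zeros(2)
for _ in range(100):
    x, v = momentum_step(x, v)
print("near the optimum (origin):", x)
```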
- Adaptive Optimization Algorithms for Machine Learning [0.0]
Machine learning assumes a pivotal role in our data-driven world.
This thesis contributes novel insights, introduces new algorithms with improved convergence guarantees, and improves analyses of popular practical algorithms.
arXiv Detail & Related papers (2023-11-16T21:22:47Z)
- M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation [145.7321032755538]
Learning to Optimize (L2O) has drawn increasing attention as it often remarkably accelerates the optimization procedure of complex tasks.
This paper investigates a potential solution to the open challenge of generalization by meta-training an L2O model that can perform fast test-time self-adaptation to an out-of-distribution task.
arXiv Detail & Related papers (2023-02-28T19:23:20Z)
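A hedged sketch of the test-time self-adaptation recipe described above (the general idea, not M-L2O's actual model): a meta-trained optimizer parameter, here reduced to a single learned step size, is fine-tuned with a few cheap updates on the out-of-distribution task before use.

```python
import numpy as np

# Hedged sketch of test-time self-adaptation (general recipe, not the
# M-L2O architecture). The "learned optimizer" is a single meta-trained
# step size theta; at test time it is quickly re-tuned on an
# out-of-distribution task with a few sign-of-gradient updates.

def final_loss(theta, c, x0=5.0, steps=20):
    """Deploy the learned optimizer on task f_c(x) = 0.5*c*x^2."""
    x = x0
    for _ in range(steps):
        x -= theta * c * x             # gradient of f_c is c*x
    return 0.5 * c * x**2

theta = 0.5                            # meta-trained on tasks with c near 1
c_ood = 3.9                            # out-of-distribution curvature
print("loss before adaptation:", final_loss(theta, c_ood))

eps = 1e-4
for _ in range(5):                     # fast self-adaptation at test time
    g = (final_loss(theta + eps, c_ood)
         - final_loss(theta - eps, c_ood)) / (2 * eps)
    theta -= 0.02 * np.sign(g)         # small, robust normalized step

print("loss after adaptation :", final_loss(theta, c_ood))
```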
- Learning to Generalize Provably in Learning to Optimize [185.71326306329678]
Learning to optimize (L2O) has gained increasing popularity, as it automates the design of optimizers via data-driven approaches.
Current L2O methods often suffer from poor generalization in at least two respects.
We propose to incorporate two corresponding metrics as flatness-aware regularizers into the L2O framework.
arXiv Detail & Related papers (2023-02-22T01:17:31Z)
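One way to read "flatness-aware regularizer" concretely (a generic illustration, not the paper's exact metrics): penalize how much the loss rises under small parameter perturbations, so that training prefers flat minima.

```python
import numpy as np

# Hedged illustration of a flatness-aware regularizer (the general idea,
# not the paper's exact metrics): penalize how much the training loss
# rises under small random perturbations of the parameters theta.

rng = np.random.default_rng(0)
loss = lambda theta: np.sum((theta - 1.0) ** 2)   # toy training loss

def flatness_penalty(theta, radius=0.05, n_probes=8):
    """Average loss increase over random perturbations of norm `radius`."""
    base = loss(theta)
    total = 0.0
    for _ in range(n_probes):
        eps = rng.normal(size=theta.shape)
        eps *= radius / np.linalg.norm(eps)
        total += loss(theta + eps) - base
    return total / n_probes

theta = np.array([0.0, 2.0])
reg_loss = loss(theta) + 0.5 * flatness_penalty(theta)  # regularized objective
print("loss:", loss(theta), "| flatness-regularized:", reg_loss)
```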
- Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z)
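A minimal sketch of what a symbolic representation of an update rule looks like (the general idea; the paper's rules are learned, not hand-written): the rule stays human-readable and can be compiled to a fast numeric function.

```python
import sympy as sp

# Hedged sketch of representing an optimizer update rule as a symbolic
# expression (the general idea behind symbolic L2O; not the paper's
# learned rules). An RMSProp-like rule in the gradient g and running
# second moment v stays interpretable and can be compiled for speed.

g, v, lr, eps = sp.symbols("g v lr eps", positive=True)
update = -lr * g / (sp.sqrt(v) + eps)

print("symbolic update:", update)
step = sp.lambdify((g, v, lr, eps), update, "math")  # compile to a function
print("numeric step   :", step(0.5, 0.25, 1e-2, 1e-8))
```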
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)