Learning to Optimize: A Primer and A Benchmark
- URL: http://arxiv.org/abs/2103.12828v1
- Date: Tue, 23 Mar 2021 20:46:20 GMT
- Title: Learning to Optimize: A Primer and A Benchmark
- Authors: Tianlong Chen, Xiaohan Chen, Wuyang Chen, Howard Heaton, Jialin Liu,
Zhangyang Wang, Wotao Yin
- Abstract summary: Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
- Score: 94.29436694770953
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning to optimize (L2O) is an emerging approach that leverages machine
learning to develop optimization methods, aiming to reduce the laborious
iterations of hand engineering. It automates the design of an optimization
method based on its performance on a set of training problems. This data-driven
procedure generates methods that can efficiently solve problems similar to
those in the training set. In sharp contrast, the typical and traditional designs
of optimization methods are theory-driven, so they obtain performance
guarantees over the classes of problems specified by the theory. The difference
makes L2O suitable for repeatedly solving a certain type of optimization
problem over a specific distribution of data, while it typically fails on
out-of-distribution problems. The practicality of L2O depends on the type of
target optimization, the chosen architecture of the method to learn, and the
training procedure. This new paradigm has motivated a community of researchers
to explore L2O and report their findings.
This article is poised to be the first comprehensive survey and benchmark of
L2O for continuous optimization. We set up taxonomies, categorize existing
works and research directions, present insights, and identify open challenges.
We also benchmark many existing L2O approaches on a small set of representative
optimization problems. For reproducible research and fair benchmarking
purposes, we released our software implementation and data in the package
Open-L2O at https://github.com/VITA-Group/Open-L2O.
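To make the abstract's description concrete, below is a minimal sketch of the L2O idea: a small coordinate-wise learned optimizer (an LSTM, in the style popularized by Andrychowicz et al.) is meta-trained so that its updates minimize the accumulated loss on problems drawn from a training distribution. This is an illustrative PyTorch sketch, not the Open-L2O API; the architecture, quadratic task distribution, and hyperparameters are assumptions chosen only for the example.

```python
# Minimal, illustrative sketch of the L2O paradigm described above (PyTorch).
# NOT the Open-L2O API: the LSTM optimizer, quadratic task distribution, and
# all hyperparameters are assumptions made to keep the example self-contained.
import torch
import torch.nn as nn

class LSTMOptimizer(nn.Module):
    """Coordinate-wise learned optimizer: maps a gradient entry to an update."""
    def __init__(self, hidden=20):
        super().__init__()
        self.cell = nn.LSTMCell(1, hidden)
        self.out = nn.Linear(hidden, 1)

    def forward(self, grad, state):
        h, c = self.cell(grad, state)        # grad: (n_coords, 1)
        return 0.1 * self.out(h), (h, c)     # small output scaling for stability

def sample_problem(dim=10):
    """Draw one training problem f(x) = ||Ax - b||^2 from the task distribution."""
    A, b = torch.randn(dim, dim), torch.randn(dim)
    return lambda x: ((A @ x - b) ** 2).sum()

opt_net = LSTMOptimizer()
meta_opt = torch.optim.Adam(opt_net.parameters(), lr=1e-3)

for it in range(1000):                        # meta-training over many problems
    f = sample_problem()
    x = torch.zeros(10, requires_grad=True)
    state = (torch.zeros(10, 20), torch.zeros(10, 20))
    meta_loss = 0.0
    for t in range(20):                       # unroll the learned optimizer
        loss = f(x)
        grad, = torch.autograd.grad(loss, x, create_graph=True)
        update, state = opt_net(grad.unsqueeze(1), state)
        x = x + update.squeeze(1)
        meta_loss = meta_loss + loss          # performance on the training problem
    meta_opt.zero_grad()
    meta_loss.backward()
    meta_opt.step()
```

The learned update rule, the unroll length, and the task distribution are exactly the design choices that determine how well such a learned method transfers, which is why the abstract stresses the gap between in-distribution and out-of-distribution problems.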
Related papers
- Iterative or Innovative? A Problem-Oriented Perspective for Code Optimization [81.88668100203913]
Large language models (LLMs) have demonstrated strong capabilities in solving a wide range of programming tasks.
In this paper, we explore code optimization with a focus on performance enhancement, specifically aiming to optimize code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses (see the sketch after this list).
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Learning to optimize: A tutorial for continuous and mixed-integer optimization [41.29549467082292]
Learning to Optimize (L2O) stands at the intersection of traditional optimization and machine learning.
This tutorial dives deep into L2O techniques, introducing how to accelerate optimization algorithms, promptly estimate the solutions, or even reshape the optimization problem itself.
arXiv Detail & Related papers (2024-05-24T06:21:01Z)
- M-L2O: Towards Generalizable Learning-to-Optimize by Test-Time Fast Self-Adaptation [145.7321032755538]
Learning to Optimize (L2O) has drawn increasing attention as it often remarkably accelerates the optimization procedure of complex tasks.
This paper addresses the open challenge of generalization by meta-training an L2O optimizer that can perform fast test-time self-adaptation to an out-of-distribution task.
arXiv Detail & Related papers (2023-02-28T19:23:20Z)
- Learning to Generalize Provably in Learning to Optimize [185.71326306329678]
Learning to optimize (L2O), which automates the design of optimizers with data-driven approaches, has gained increasing popularity.
Current L2O methods often suffer from poor generalization performance in at least two respects.
To address this, the authors incorporate flatness-aware regularizers into the L2O framework.
arXiv Detail & Related papers (2023-02-22T01:17:31Z)
- Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z)
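The DiscoPOP entry above describes a preference-optimization loss that adaptively blends logistic and exponential losses. The sketch below shows one way such a blend can be written: a sigmoid gate on the reward margin rho mixes the two losses. The gating form and the temperature tau are illustrative assumptions, not the exact formula discovered in the paper.

```python
# Illustrative sketch of adaptively blending logistic (DPO-style) and
# exponential preference losses, as mentioned in the DiscoPOP summary above.
# The sigmoid gate and tau are assumptions for this example, not the exact
# loss discovered in the paper.
import torch
import torch.nn.functional as F

def blended_preference_loss(rho: torch.Tensor, tau: float = 0.05) -> torch.Tensor:
    """rho: reward margin, beta * (chosen log-ratio - rejected log-ratio)."""
    logistic = -F.logsigmoid(rho)       # logistic (DPO) loss
    exponential = torch.exp(-rho)       # exponential loss
    gate = torch.sigmoid(rho / tau)     # adaptive mixing weight
    return (gate * logistic + (1.0 - gate) * exponential).mean()

# Example with hypothetical policy / reference log-probabilities:
beta = 0.1
logp_w, logp_l = torch.tensor([-1.2]), torch.tensor([-2.3])   # policy
ref_w, ref_l = torch.tensor([-1.5]), torch.tensor([-2.0])     # reference
rho = beta * ((logp_w - ref_w) - (logp_l - ref_l))
print(blended_preference_loss(rho))
```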