Symbol: Generating Flexible Black-Box Optimizers through Symbolic
Equation Learning
- URL: http://arxiv.org/abs/2402.02355v2
- Date: Wed, 7 Feb 2024 02:38:52 GMT
- Title: Symbol: Generating Flexible Black-Box Optimizers through Symbolic
Equation Learning
- Authors: Jiacheng Chen, Zeyuan Ma, Hongshu Guo, Yining Ma, Jie Zhang, Yue-Jiao
Gong
- Abstract summary: We present \textsc{Symbol}, a framework that promotes the automated discovery of black-box optimizers through symbolic equation learning.
Specifically, we propose a Symbolic Equation Generator (SEG) that allows closed-form optimization rules to be dynamically generated.
Extensive experiments reveal that the optimizers generated by \textsc{Symbol} not only surpass the state-of-the-art BBO and MetaBBO baselines, but also exhibit exceptional zero-shot generalization abilities.
- Score: 16.338146844605404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent Meta-learning for Black-Box Optimization (MetaBBO) methods harness
neural networks to meta-learn configurations of traditional black-box
optimizers. Despite their success, they are inevitably restricted by the
limitations of predefined hand-crafted optimizers. In this paper, we present
\textsc{Symbol}, a novel framework that promotes the automated discovery of
black-box optimizers through symbolic equation learning. Specifically, we
propose a Symbolic Equation Generator (SEG) that allows closed-form
optimization rules to be dynamically generated for specific tasks and
optimization steps. Within \textsc{Symbol}, we then develop three distinct
strategies based on reinforcement learning, so as to meta-learn the SEG
efficiently. Extensive experiments reveal that the optimizers generated by
\textsc{Symbol} not only surpass the state-of-the-art BBO and MetaBBO
baselines, but also exhibit exceptional zero-shot generalization abilities
across entirely unseen tasks with different problem dimensions, population
sizes, and optimization horizons. Furthermore, we conduct in-depth analyses of
our \textsc{Symbol} framework and the optimization rules that it generates,
underscoring its desirable flexibility and interpretability.
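To make the idea concrete, the following is a minimal sketch of applying one closed-form update rule of the kind a Symbolic Equation Generator might emit to a population of candidate solutions. The specific rule, function names, and coefficients are illustrative assumptions, not taken from the paper; in \textsc{Symbol}, the rule's structure and constants would be produced by the meta-learned SEG per task and per optimization step.

```python
import numpy as np

# Illustrative only: a hand-written stand-in for one closed-form update rule
# of the kind a Symbolic Equation Generator might emit for a single step.
def symbolic_update(pop, fitness, rng, c1=0.8, c2=0.3):
    """Example rule: x <- x + c1*(x_best - x) + c2*(x_rand - x)."""
    best = pop[np.argmin(fitness)]                      # current best solution
    rand = pop[rng.integers(len(pop), size=len(pop))]   # one random peer per row
    return pop + c1 * (best - pop) + c2 * (rand - pop)

# Toy usage on a 10-dimensional sphere function.
rng = np.random.default_rng(0)
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
for _ in range(50):
    fitness = (pop ** 2).sum(axis=1)
    pop = symbolic_update(pop, fitness, rng)
print((pop ** 2).sum(axis=1).min())
```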
Related papers
- Reinforcement Learning-based Self-adaptive Differential Evolution through Automated Landscape Feature Learning [7.765689048808507]
This paper introduces a novel MetaBBO method that supports automated feature learning during the meta-learning process.
We design an attention-based neural network with mantissa-exponent based embedding to transform the solution populations.
We also incorporate a comprehensive algorithm configuration space including diverse DE operators into a reinforcement learning-aided DAC paradigm.
arXiv Detail & Related papers (2025-03-23T13:07:57Z)
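As a rough illustration of the mantissa-exponent embedding mentioned in the entry above, the sketch below encodes objective values as (mantissa, exponent) pairs so that magnitudes spanning many orders of magnitude map to a bounded representation. The function name and details are our assumptions, not the paper's implementation.

```python
import numpy as np

# Hypothetical sketch (not the paper's code): mantissa-exponent style encoding
# of objective values, keeping both parts in a small, learnable range.
def mantissa_exponent_encode(values, eps=1e-12):
    v = np.asarray(values, dtype=float)
    exponent = np.floor(np.log10(np.abs(v) + eps))   # order of magnitude
    mantissa = v / np.power(10.0, exponent)          # value rescaled by 10**exponent
    return np.stack([mantissa, exponent], axis=-1)   # per-value 2-d feature

print(mantissa_exponent_encode([3.2e-5, 1.0, 4.7e8]))
```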
- Neural Exploratory Landscape Analysis [12.6318861144205]
This paper proposes a novel framework that dynamically profiles landscape features through a two-stage, attention-based neural network.
NeurELA is pre-trained over a variety of MetaBBO algorithms using a multi-task neuroevolution strategy.
arXiv Detail & Related papers (2024-08-20T09:17:11Z)
- Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization [71.35604981129838]
Traditional gradient-based bi-level optimization algorithms are ill-suited to meet the demands of large-scale applications.
We introduce $(\text{FG})^2\text{U}$, which achieves an unbiased approximation of the meta gradient for bi-level optimization.
$(\text{FG})^2\text{U}$ is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems.
arXiv Detail & Related papers (2024-06-20T08:21:52Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforce-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with \textit{regret-to-go} tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories.
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
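A minimal sketch of the regret-to-go idea described in the entry above: each token is the cumulative regret accumulated over the remaining part of an optimization trajectory. The helper name, the assumed known optimum, and the lack of normalization are our simplifications, not RIBBO's actual tokenization.

```python
import numpy as np

# Hypothetical sketch (names ours): regret-to-go tokens for a history of
# objective values, i.e. suffix sums of per-step regret.
def regret_to_go(objective_history, f_optimum=0.0):
    regrets = np.asarray(objective_history, dtype=float) - f_optimum  # per-step regret
    return regrets[::-1].cumsum()[::-1]  # cumulative regret over the future part

history = [5.0, 3.0, 1.5, 0.5]      # best-so-far style objective values
print(regret_to_go(history))         # [10.0, 5.0, 2.0, 0.5]
```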
- Contextual Stochastic Bilevel Optimization [50.36775806399861]
We introduce contextual stochastic bilevel optimization (CSBO), a bilevel optimization framework in which the lower-level problem minimizes an expectation conditioned on some contextual information and the upper-level variable.
It is important for applications such as meta-learning, personalized learning, end-to-end learning, and Wasserstein distributionally robust optimization with side information (WDRO-SI).
arXiv Detail & Related papers (2023-10-27T23:24:37Z)
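Schematically, the contextual bilevel problem in the CSBO entry above can be written as follows (notation ours, reconstructed from that summary rather than quoted from the paper):
$\min_{x} \; \mathbb{E}_{\xi}\left[ f(x, y^{*}(x,\xi), \xi) \right] \quad \text{s.t.} \quad y^{*}(x,\xi) \in \arg\min_{y} \; \mathbb{E}_{\eta \mid \xi}\left[ g(x, y, \eta) \right]$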
- Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel \textsc{Admeta} (\textbf{A} \textbf{D}ouble exponential \textbf{M}oving averag\textbf{E} \textbf{T}o \textbf{A}daptive and non-adaptive momentum) framework.
We provide two implementations, \textsc{AdmetaR} and \textsc{AdmetaS}, the former based on RAdam and the latter based on SGDM.
arXiv Detail & Related papers (2023-07-02T18:16:06Z)
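For intuition about the double exponential moving average mentioned in the entry above, here is a generic DEMA-smoothed SGD momentum step. This is a sketch under our own assumptions (names, constants, and update form), not the paper's \textsc{AdmetaS} rule.

```python
# Generic double exponential moving average (DEMA) momentum step for SGD;
# illustrative only, not the AdmetaS update from the paper.
def dema_sgd_step(param, grad, state, lr=0.01, beta=0.9):
    ema1 = beta * state["ema1"] + (1 - beta) * grad   # EMA of gradients
    ema2 = beta * state["ema2"] + (1 - beta) * ema1   # EMA of the EMA
    dema = 2 * ema1 - ema2                            # lag-reduced double EMA
    state["ema1"], state["ema2"] = ema1, ema2
    return param - lr * dema, state

# Toy usage: minimize f(w) = w^2 starting from w = 3.
w, state = 3.0, {"ema1": 0.0, "ema2": 0.0}
for _ in range(500):
    w, state = dema_sgd_step(w, 2 * w, state)
print(round(w, 4))
```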
- Symbolic Learning to Optimize: Towards Interpretability and Scalability [113.23813868412954]
Recent studies on Learning to Optimize (L2O) suggest a promising path to automating and accelerating the optimization procedure for complicated tasks.
Existing L2O models parameterize optimization rules by neural networks, and learn those numerical rules via meta-training.
In this paper, we establish a holistic symbolic representation and analysis framework for L2O.
We propose a lightweight L2O model that can be meta-trained on large-scale problems and outperforms human-designed and tuned optimizers.
arXiv Detail & Related papers (2022-03-13T06:04:25Z)
- Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z)
- Meta-Learning with Neural Tangent Kernels [58.06951624702086]
We propose the first meta-learning paradigm in the Reproducing Kernel Hilbert Space (RKHS) induced by the meta-model's Neural Tangent Kernel (NTK).
Within this paradigm, we introduce two meta-learning algorithms, which no longer need a sub-optimal iterative inner-loop adaptation as in the MAML framework.
We achieve this goal by 1) replacing the adaptation with a fast-adaptive regularizer in the RKHS; and 2) solving the adaptation analytically based on the NTK theory.
arXiv Detail & Related papers (2021-02-07T20:53:23Z)