B2Opt: Learning to Optimize Black-box Optimization with Little Budget
- URL: http://arxiv.org/abs/2304.11787v2
- Date: Tue, 25 Jul 2023 07:56:45 GMT
- Title: B2Opt: Learning to Optimize Black-box Optimization with Little Budget
- Authors: Xiaobin Li, Kai Wu, Xiaoyu Zhang, Handing Wang, Jing Liu
- Abstract summary: This paper designs a powerful optimization framework that automatically learns optimization strategies from the target task or a cheap surrogate task without human intervention.
The deep neural network framework, called B2Opt, has a stronger representation of optimization strategies based on survival of the fittest.
Compared to state-of-the-art BBO baselines, B2Opt achieves performance improvements of multiple orders of magnitude at a lower function evaluation cost.
- Score: 15.95406229086798
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The core challenge of high-dimensional and expensive black-box
optimization (BBO) is how to obtain better performance faster at little
function evaluation cost. The essence of the problem is how to design an
efficient optimization strategy tailored to the target task. This paper
designs a powerful optimization framework that automatically learns
optimization strategies from the target task, or from a cheap surrogate
task, without human intervention; current methods are weak at this because
they represent optimization strategies poorly. To achieve this, 1) drawing
on the mechanisms of the genetic algorithm, we propose a deep neural network
framework called B2Opt, which has a stronger representation of optimization
strategies based on survival of the fittest; 2) B2Opt can utilize cheap
surrogate functions of the target task to guide the design of efficient
optimization strategies. Compared to state-of-the-art BBO baselines, B2Opt
achieves performance improvements of multiple orders of magnitude at a lower
function evaluation cost. We validate our proposal on high-dimensional
synthetic functions and two real-world applications, and find that deep
B2Opt variants perform better than shallow ones.
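Since the abstract describes B2Opt as a learned, survival-of-the-fittest optimization pipeline, a minimal sketch of that loop may clarify the idea. This is not the paper's architecture: the mixture weights `W` stand in for the recombination network B2Opt would actually train on surrogate tasks, and the sphere function is an assumed toy surrogate.

```python
import numpy as np

def sphere(x):
    """Cheap toy surrogate objective (stand-in for the expensive target task)."""
    return np.sum(x ** 2, axis=-1)

def learned_crossover(pop, W):
    """Placeholder for B2Opt's learned recombination: offspring are convex
    mixtures of the population; in B2Opt the mixing would be produced by a
    trained network rather than the fixed random weights used here."""
    return W @ pop

def survival_selection(pop, offspring, f):
    """Survival of the fittest: keep whichever of parent/offspring is better."""
    keep = f(offspring) < f(pop)
    return np.where(keep[:, None], offspring, pop)

rng = np.random.default_rng(0)
pop_size, dim, steps = 16, 10, 50
pop = rng.normal(size=(pop_size, dim))

# Hypothetical "trained" mixture weights; rows sum to 1 (convex recombination).
W = rng.random((pop_size, pop_size))
W /= W.sum(axis=1, keepdims=True)

for _ in range(steps):
    offspring = learned_crossover(pop, W) + 0.01 * rng.normal(size=pop.shape)
    pop = survival_selection(pop, offspring, sphere)

print("best surrogate value:", sphere(pop).min())
```

One design point the abstract highlights carries over even to this toy: selection gives the learned operator a safety net, since a badly proposed offspring can never replace a better parent.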
Related papers
- High-Dimensional Bayesian Optimization Using Both Random and Supervised Embeddings [0.6291443816903801]
This paper proposes a high-dimensional optimization method incorporating linear embedding subspaces of small dimension.
The resulting BO method adaptively combines both random and supervised linear embeddings.
The results show the high potential of EGORSE for solving high-dimensional black-box optimization problems.
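As a rough illustration of the linear-embedding idea, the sketch below searches a high-dimensional box through a random low-dimensional subspace, in the style of REMBO-like methods; for brevity the BO inner loop is replaced by random search, and EGORSE's supervised embeddings and adaptive switching between the two embedding types are not reproduced.

```python
import numpy as np

def f(x):
    """High-dimensional black-box objective (toy: shifted sphere on [0, 1]^D)."""
    return np.sum((x - 0.5) ** 2)

rng = np.random.default_rng(1)
D, d = 100, 4                 # ambient and embedding dimensions
A = rng.normal(size=(D, d))   # random linear embedding

# Search only in the d-dimensional subspace: x = A @ y, clipped to the box.
# A supervised embedding (e.g. learned from already-evaluated points) could
# be swapped in for A, which is the adaptive choice EGORSE makes.
best_val = np.inf
for _ in range(200):
    y = rng.uniform(-1.0, 1.0, size=d)
    val = f(np.clip(A @ y, 0.0, 1.0))
    best_val = min(best_val, val)

print("best value found through the embedding:", best_val)
```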
arXiv Detail & Related papers (2025-02-02T16:57:05Z)
- Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization [71.35604981129838]
Bi-level optimization has become a fundamental mathematical framework for addressing hierarchical machine learning problems.
Traditional gradient-based bi-level optimization algorithms are ill-suited to meet the demands of large-scale applications.
We introduce $(\text{FG})^2\text{U}$, which achieves an unbiased approximation of the meta gradient for bi-level optimization.
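The "meta gradient" here is the derivative of an outer loss through the unrolled inner optimization. The scalar toy below shows forward-mode unrolled differentiation, the general flavour of gradient unrolling; it is only an illustration and does not reproduce $(\text{FG})^2\text{U}$'s unbiased large-scale estimator.

```python
# Bi-level toy problem:
#   inner loss g(w, lam) = 0.5 * (w - lam)**2  ->  grad_w g = w - lam
#   outer loss F(w_K)    = 0.5 * (w_K - 2)**2
alpha, K = 0.1, 20

def unroll_with_forward_jacobian(lam, w0=0.0):
    """Unroll K inner gradient steps while propagating dw/dlam forward
    (forward-mode unrolled differentiation)."""
    w, dw_dlam = w0, 0.0
    for _ in range(K):
        # w <- w - alpha * (w - lam); differentiate the update w.r.t. lam
        w, dw_dlam = w - alpha * (w - lam), (1 - alpha) * dw_dlam + alpha
    return w, dw_dlam

lam = 0.0
for _ in range(100):
    w_K, dw_dlam = unroll_with_forward_jacobian(lam)
    hypergrad = (w_K - 2.0) * dw_dlam   # dF/dlam by the chain rule
    lam -= 0.5 * hypergrad              # outer gradient step

print("lam:", lam, "-> inner solution:", unroll_with_forward_jacobian(lam)[0])
```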
arXiv Detail & Related papers (2024-06-20T08:21:52Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically find new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
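The "adaptive blend of logistic and exponential losses" can be sketched as a sigmoid-gated mixture over the preference margin. The gating form and the temperature `tau` below are assumptions for illustration, not the exact loss discovered in the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logistic_loss(rho):
    """DPO-style logistic loss on the preference margin rho."""
    return np.log1p(np.exp(-rho))

def exponential_loss(rho):
    return np.exp(-rho)

def blended_preference_loss(rho, tau=0.05):
    """Sigmoid-gated blend of the two losses; tau is an assumed gating
    temperature, not a value taken from the paper."""
    gate = sigmoid(rho / tau)
    return gate * logistic_loss(rho) + (1.0 - gate) * exponential_loss(rho)

# rho = beta * (policy log-ratio on the chosen completion
#               - policy log-ratio on the rejected one)
margins = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
print(blended_preference_loss(margins))
```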
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO)
ZOPO incorporates a Gaussian process derived from the Neural Tangent Kernel into standard zeroth-order optimization for an efficient search for well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
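The standard zeroth-order machinery ZOPO builds on is easy to sketch: a two-point gradient estimate from function queries alone. The NTK-derived Gaussian process that guides ZOPO's local search is omitted here, and the `score` function is a hypothetical stand-in for querying an LLM and scoring its output.

```python
import numpy as np

def score(z):
    """Black-box score of a continuous prompt embedding z (hypothetical
    stand-in for an LLM query plus output scoring)."""
    return -np.sum((z - 1.0) ** 2)

rng = np.random.default_rng(2)
dim, mu, lr = 8, 0.1, 0.05
z = np.zeros(dim)

for _ in range(300):
    u = rng.normal(size=dim)
    # Two-point zeroth-order estimate of the gradient of the smoothed score.
    g = (score(z + mu * u) - score(z - mu * u)) / (2.0 * mu) * u
    z += lr * g  # ascend, since we maximize the score

print("final score:", score(z))
```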
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- Learning Regions of Interest for Bayesian Optimization with Adaptive Level-Set Estimation [84.0621253654014]
We propose a framework, called BALLET, which adaptively filters for a high-confidence region of interest.
We show theoretically that BALLET can efficiently shrink the search space, and can exhibit a tighter regret bound than standard BO.
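A minimal sketch of confidence-bound filtering in the spirit of BALLET: keep only candidates whose upper confidence bound reaches the best lower confidence bound, so the surviving set is a high-confidence region of interest. The posterior arrays are made up for illustration; in a real loop they would come from a fitted GP, and BALLET's level-set machinery and regret analysis go well beyond this toy.

```python
import numpy as np

def roi_filter(mu, sigma, beta=2.0):
    """Keep candidates whose upper confidence bound reaches the best lower
    confidence bound (maximization): a high-confidence region of interest."""
    ucb = mu + beta * sigma
    lcb = mu - beta * sigma
    return ucb >= lcb.max()

# Made-up posterior over a 1-D candidate grid; in a real BO loop mu and
# sigma would come from a fitted Gaussian process.
x = np.linspace(0.0, 1.0, 200)
mu = np.sin(6.0 * x)
sigma = 0.1 + 0.2 * np.abs(x - 0.5)

mask = roi_filter(mu, sigma)
print(f"search space shrunk to {mask.mean():.0%} of the grid")
```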
arXiv Detail & Related papers (2023-07-25T09:45:47Z)
- DADO -- Low-Cost Query Strategies for Deep Active Design Optimization [1.6298921134113031]
We present two selection strategies for self-optimization to reduce the computational cost in multi-objective design optimization problems.
We evaluate our strategies on a large dataset from the domain of fluid dynamics and introduce two new evaluation metrics to determine the model's performance.
arXiv Detail & Related papers (2023-07-10T13:01:27Z)
- Large-Batch, Iteration-Efficient Neural Bayesian Design Optimization [37.339567743948955]
We present a novel Bayesian optimization framework specifically tailored to the large-batch, iteration-efficient setting where standard BO struggles.
Our key contribution is a highly scalable, sample-based acquisition function that performs a non-dominated sorting of objectives.
We show that our acquisition function in combination with different Bayesian neural network surrogates is effective in data-intensive environments with a minimal number of iterations.
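A sketch of the non-dominated-sorting step at the heart of such a sample-based acquisition function, assuming posterior samples of the objectives are already in hand; the Bayesian neural network surrogates and the batching logic around it are omitted.

```python
import numpy as np

def non_dominated(points):
    """Boolean mask of Pareto-optimal rows (all objectives minimized)."""
    n = len(points)
    mask = np.ones(n, dtype=bool)
    for i in range(n):
        # Row j dominates row i if it is <= everywhere and < somewhere.
        dominates_i = (np.all(points <= points[i], axis=1)
                       & np.any(points < points[i], axis=1))
        if dominates_i.any():
            mask[i] = False
    return mask

rng = np.random.default_rng(3)
# Hypothetical posterior samples of two objectives for 500 candidate designs,
# standing in for draws from Bayesian neural network surrogates.
objs = rng.normal(size=(500, 2))
batch = np.flatnonzero(non_dominated(objs))
print("non-dominated candidate indices:", batch)
```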
arXiv Detail & Related papers (2023-06-01T19:10:57Z)
- DECN: Evolution Inspired Deep Convolution Network for Black-box Optimization [9.878660285945728]
This paper introduces the concept of Automated EA: Automated EA exploits structure in the problem of interest to automatically generate update rules.
We design a deep evolutionary convolution network (DECN) to realize the move from hand-designed EAs to automated EAs without manual interventions.
arXiv Detail & Related papers (2023-04-19T12:14:01Z)
- A Simple Baseline for StyleGAN Inversion [133.5868210969111]
StyleGAN inversion plays an essential role in enabling the pretrained StyleGAN to be used for real facial image editing tasks.
Existing optimization-based methods can produce high quality results, but the optimization often takes a long time.
We present a new feed-forward network for StyleGAN inversion, with significant improvement in terms of efficiency and quality.
arXiv Detail & Related papers (2021-04-15T17:59:49Z)
- Learning to Optimize: A Primer and A Benchmark [94.29436694770953]
Learning to optimize (L2O) is an emerging approach that leverages machine learning to develop optimization methods.
This article is poised to be the first comprehensive survey and benchmark of L2O for continuous optimization.
arXiv Detail & Related papers (2021-03-23T20:46:20Z)
- Bayesian Optimization for Policy Search in High-Dimensional Systems via Automatic Domain Selection [1.1240669509034296]
We propose to leverage results from optimal control to scale BO to higher dimensional control tasks.
We show how we can make use of a learned dynamics model in combination with a model-based controller to simplify the BO problem.
We present an experimental evaluation on real hardware, as well as simulated tasks including a 48-dimensional policy for a quadcopter.
arXiv Detail & Related papers (2020-01-21T09:04:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.