GPTOpt: Towards Efficient LLM-Based Black-Box Optimization
- URL: http://arxiv.org/abs/2510.25404v1
- Date: Wed, 29 Oct 2025 11:21:55 GMT
- Title: GPTOpt: Towards Efficient LLM-Based Black-Box Optimization
- Authors: Jamison Meindl, Yunsheng Tian, Tony Cui, Veronika Thost, Zhang-Wei Hong, Jie Chen, Wojciech Matusik, Mina Konaković Luković
- Abstract summary: Large Language Models (LLMs) have shown broad capabilities, yet state-of-the-art models remain limited in solving continuous black-box optimization tasks. We introduce GPTOpt, an LLM-based optimization method that equips LLMs with continuous black-box optimization capabilities.
- Score: 33.09351655863645
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Global optimization of expensive, derivative-free black-box functions demands extreme sample efficiency. Classical methods such as Bayesian Optimization (BO) can be effective, but they often require careful parameter tuning to each application domain. At the same time, Large Language Models (LLMs) have shown broad capabilities, yet state-of-the-art models remain limited in solving continuous black-box optimization tasks. We introduce GPTOpt, an LLM-based optimization method that equips LLMs with continuous black-box optimization capabilities. By fine-tuning large language models on extensive synthetic datasets derived from diverse BO parameterizations, GPTOpt leverages LLM pre-training to generalize across optimization tasks. On a variety of black-box optimization benchmarks, GPTOpt surpasses traditional optimizers, highlighting the capacity of LLMs for advanced numerical reasoning and introducing a flexible framework for global optimization without parameter tuning.
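The abstract frames the problem as minimizing an expensive, derivative-free function under a tight evaluation budget, where each query counts. As a minimal illustration of that setting (not the paper's method), the sketch below uses a trivial random-search baseline against a stand-in objective; BO or GPTOpt would aim to find better points in far fewer evaluations. The objective and budget here are hypothetical.

```python
import random

def expensive_black_box(x):
    # Stand-in for an expensive, derivative-free objective;
    # purely illustrative, not one of the paper's benchmarks.
    return (x - 0.3) ** 2

def optimize(f, budget=20, bounds=(0.0, 1.0), seed=0):
    # Simplest sequential optimizer: sample uniformly at random
    # within the bounds and keep the best point seen. Sample-efficient
    # methods (BO, GPTOpt) aim to beat this baseline per evaluation.
    rng = random.Random(seed)
    best_x, best_y = None, float("inf")
    for _ in range(budget):
        x = rng.uniform(*bounds)
        y = f(x)
        if y < best_y:
            best_x, best_y = x, y
    return best_x, best_y

best_x, best_y = optimize(expensive_black_box)
print(best_x, best_y)
```

The key constraint is that `f` is only available through point evaluations, so the optimizer's quality is judged by the best value found within the fixed budget.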
Related papers
- A Meta-Knowledge-Augmented LLM Framework for Hyperparameter Optimization in Time-Series Forecasting [0.0]
We introduce LLM-AutoOpt, a hybrid HPO framework that combines BO with LLM-based contextual reasoning. We show that LLM-AutoOpt achieves improved predictive performance and more interpretable optimization behavior compared to BO and LLM baselines without meta-knowledge.
arXiv Detail & Related papers (2026-02-01T21:26:57Z) - Task-free Adaptive Meta Black-box Optimization [55.461814601130044]
We propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using solely optimization data from the target task. Unlike conventional metaBBO frameworks that decouple meta-training and optimization phases, ABOM introduces a closed-loop parameter learning mechanism where parameterized evolutionary operators continuously self-update. This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle path planning problems without any handcrafted training tasks.
arXiv Detail & Related papers (2026-01-29T09:54:10Z) - ZeroShotOpt: Towards Zero-Shot Pretrained Models for Efficient Black-Box Optimization [31.894110383242566]
We present ZeroShotOpt, a general-purpose, pretrained model for continuous black-box optimization tasks ranging from 2D to 20D. Our approach leverages offline reinforcement learning on large-scale optimization tasks collected from 12 BO variants.
arXiv Detail & Related papers (2025-10-03T14:33:23Z) - Large Language Model Assisted Automated Algorithm Generation and Evolution via Meta-black-box optimization [9.184788298623062]
We propose AwesomeDE, which leverages large language models (LLMs) as a meta-optimizer to generate update rules for constrained evolutionary algorithms without human intervention. Key components, including prompt design and iterative refinement, are systematically analyzed to determine their impact on design quality. Experimental results demonstrate that the proposed approach outperforms existing methods in terms of computational efficiency and solution accuracy.
arXiv Detail & Related papers (2025-09-16T17:02:24Z) - Align-Pro: A Principled Approach to Prompt Optimization for LLM Alignment [40.71270945505082]
Large language models (LLMs) are increasingly integrated into various societal and decision-making processes. Traditional methods, such as reinforcement learning from human feedback (RLHF), achieve alignment by fine-tuning model parameters. In contrast, prompt optimization is a viable alternative to RLHF for LLM alignment.
arXiv Detail & Related papers (2025-01-07T03:14:39Z) - Optima: Optimizing Effectiveness and Efficiency for LLM-Based Multi-Agent System [75.25394449773052]
Large Language Model (LLM) based multi-agent systems (MAS) show remarkable potential in collaborative problem-solving. Yet they still face critical challenges: low communication efficiency, poor scalability, and a lack of effective parameter-updating optimization methods. We present Optima, a novel framework that addresses these issues by significantly enhancing both communication efficiency and task effectiveness.
arXiv Detail & Related papers (2024-10-10T17:00:06Z) - LLM as a Complementary Optimizer to Gradient Descent: A Case Study in Prompt Tuning [69.95292905263393]
We show that gradient-based optimizers and high-level LLMs are complementary to each other and can effectively collaborate in a combined optimization framework.
arXiv Detail & Related papers (2024-05-30T06:24:14Z) - Pretrained Optimization Model for Zero-Shot Black Box Optimization [16.391389860521134]
We propose a Pretrained Optimization Model (POM) that leverages knowledge gained from optimizing diverse tasks. POM offers efficient solutions to zero-shot optimization through direct application or fine-tuning with few-shot samples. Fine-tuning POM with a small number of samples and budget yields significant performance improvements.
arXiv Detail & Related papers (2024-05-06T09:11:49Z) - End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
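The OWA objective mentioned above aggregates multiple outcomes by sorting them and applying fixed weights to the sorted values, which is what makes it nondifferentiable. A minimal sketch of the computation (the values and weights below are made up for illustration):

```python
def owa(values, weights):
    # Ordered Weighted Averaging: sort the values in descending order,
    # then take the weighted sum against the fixed weight vector.
    # Fairness-oriented OWA puts larger weights on the worst outcomes.
    ordered = sorted(values, reverse=True)
    return sum(w * v for w, v in zip(weights, ordered))

print(owa([3.0, 1.0, 2.0], [0.5, 0.3, 0.2]))  # 0.5*3 + 0.3*2 + 0.2*1 = 2.3
```

Because the weights attach to ranks rather than to fixed components, the sort inside `owa` is the source of nondifferentiability that the paper's end-to-end approach must handle.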
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - A General Framework for User-Guided Bayesian Optimization [51.96352579696041]
We propose ColaBO, the first Bayesian-principled framework for prior beliefs beyond the typical kernel structure.
We empirically demonstrate ColaBO's ability to substantially accelerate optimization when the prior information is accurate, and to retain approximately default performance when it is misleading.
arXiv Detail & Related papers (2023-11-24T18:27:26Z) - Large Language Models as Optimizers [106.52386531624532]
We propose Optimization by PROmpting (OPRO), a simple and effective approach to leverage large language models (LLMs) as optimizers.
In each optimization step, the LLM generates new solutions from the prompt that contains previously generated solutions with their values.
We demonstrate that the best prompts optimized by OPRO outperform human-designed prompts by up to 8% on GSM8K, and by up to 50% on Big-Bench Hard tasks.
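The OPRO step described above can be sketched as a loop that rebuilds a meta-prompt from the solution history and asks the model for a new candidate. The `llm_generate` callable below is a hypothetical stand-in for an actual LLM call, and the prompt wording is illustrative, not the paper's template:

```python
def opro_step(history, llm_generate, objective):
    # history: list of (solution, score) pairs from earlier steps.
    # Build a meta-prompt listing prior solutions with their scores,
    # best first, then ask the (hypothetical) LLM for a new candidate.
    lines = [f"solution: {s!r}  score: {v}"
             for s, v in sorted(history, key=lambda p: -p[1])]
    prompt = "Propose a better solution.\n" + "\n".join(lines)
    candidate = llm_generate(prompt)
    history.append((candidate, objective(candidate)))
    return history

# Usage with a fake generator and scorer, for illustration only:
history = [("try 1", 1.0)]
opro_step(history, lambda prompt: "try 2", lambda s: 2.0)
print(history[-1])
```

Each iteration scores the new candidate and feeds the enlarged history back into the next prompt, so the model conditions on its own optimization trajectory.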
arXiv Detail & Related papers (2023-09-07T00:07:15Z) - Multi-Objective Hyperparameter Optimization in Machine Learning -- An Overview [10.081056751778712]
We introduce the basics of multi-objective hyperparameter optimization and motivate its usefulness in applied ML.
We provide an extensive survey of existing optimization strategies, both from the domain of evolutionary algorithms and Bayesian optimization.
We illustrate the utility of MOO in several specific ML applications, considering objectives such as operating conditions, prediction time, sparseness, fairness, interpretability and robustness.
arXiv Detail & Related papers (2022-06-15T10:23:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.