LLM Driven Design of Continuous Optimization Problems with Controllable High-level Properties
- URL: http://arxiv.org/abs/2601.18846v1
- Date: Mon, 26 Jan 2026 12:34:52 GMT
- Title: LLM Driven Design of Continuous Optimization Problems with Controllable High-level Properties
- Authors: Urban Skvorc, Niki van Stein, Moritz Seiler, Britta Grimme, Thomas Bäck, Heike Trautmann
- Abstract summary: Benchmarking in continuous black-box optimisation is hindered by the limited structural diversity of existing test suites such as BBOB. We explore whether large language models embedded in an evolutionary loop can be used to design optimisation problems with clearly defined high-level landscape characteristics.
- Score: 2.0539269837079392
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Benchmarking in continuous black-box optimisation is hindered by the limited structural diversity of existing test suites such as BBOB. We explore whether large language models embedded in an evolutionary loop can be used to design optimisation problems with clearly defined high-level landscape characteristics. Using the LLaMEA framework, we guide an LLM to generate problem code from natural-language descriptions of target properties, including multimodality, separability, basin-size homogeneity, search-space homogeneity and global-local optima contrast. Inside the loop we score candidates through ELA-based property predictors. We introduce an ELA-space fitness-sharing mechanism that increases population diversity and steers the generator away from redundant landscapes. Complementary basin-of-attraction analysis, statistical testing and visual inspection verify that many of the generated functions indeed exhibit the intended structural traits. In addition, a t-SNE embedding shows that they expand the BBOB instance space rather than forming an unrelated cluster. The resulting library provides a broad, interpretable, and reproducible set of benchmark problems for landscape analysis and downstream tasks such as automated algorithm selection.
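The ELA-space fitness-sharing mechanism described in the abstract can be illustrated with a minimal sketch: each candidate's fitness is divided by its niche count, computed from pairwise distances between ELA feature vectors. The sharing radius, kernel shape, and feature values below are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def sharing_function(distance, sigma=0.5, alpha=1.0):
    """Standard triangular sharing kernel: full weight at distance 0,
    decaying linearly (alpha=1) to 0 at the sharing radius sigma."""
    return np.where(distance < sigma, 1.0 - (distance / sigma) ** alpha, 0.0)

def shared_fitness(fitness, ela_features, sigma=0.5):
    """Down-weight candidates whose ELA feature vectors crowd the same
    region of feature space, promoting landscape diversity."""
    # Pairwise Euclidean distances between ELA feature vectors
    diff = ela_features[:, None, :] - ela_features[None, :, :]
    dist = np.linalg.norm(diff, axis=-1)
    niche_counts = sharing_function(dist, sigma).sum(axis=1)  # >= 1 (self-distance is 0)
    return fitness / niche_counts

# Hypothetical example: two near-duplicate candidates and one isolated one
ela = np.array([[0.10, 0.20], [0.11, 0.21], [0.90, 0.80]])
fit = np.array([1.0, 1.0, 1.0])
print(shared_fitness(fit, ela))  # near-duplicates are penalised; the isolated candidate keeps ~1.0
```

Inside the evolutionary loop, selection would then act on the shared fitness, steering the LLM-based generator away from redundant landscapes.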
Related papers
- Automatic Design of Optimization Test Problems with Large Language Models [0.0]
We introduce Evolution of Test Functions (EoTF), a framework that automatically generates continuous optimization test functions whose landscapes match a target ELA feature vector. EoTF produces non-trivial functions with closely matching ELA characteristics and preserves performance rankings under fixed evaluation budgets. Overall, EoTF offers a practical route to scalable, portable, and interpretable benchmark generation.
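EoTF's objective of matching a target ELA feature vector can be sketched as a simple distance-based fitness; the feature names, weighting, and values below are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def ela_match_fitness(candidate_features, target_features, weights=None):
    """Negative weighted Euclidean distance to the target ELA vector:
    higher is better, 0 means an exact feature match."""
    c = np.asarray(candidate_features, dtype=float)
    t = np.asarray(target_features, dtype=float)
    w = np.ones_like(c) if weights is None else np.asarray(weights, dtype=float)
    return -np.sqrt(np.sum(w * (c - t) ** 2))

# Hypothetical ELA vector (e.g. dispersion, meta-model fit, NBC features)
target = [0.3, 0.85, 0.4]
print(ela_match_fitness([0.3, 0.85, 0.4], target))  # 0.0, exact match
print(ela_match_fitness([0.1, 0.50, 0.9], target))  # negative, penalised
```

An evolutionary search would maximise this fitness, with the candidate features recomputed from samples of each generated function.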
arXiv Detail & Related papers (2026-02-02T19:42:14Z) - MO-ELA: Rigorously Expanding Exploratory Landscape Features for Automated Algorithm Selection in Continuous Multi-Objective Optimisation [1.2832858109291982]
We propose a novel and complementary set of features (MO-ELA) for box-constrained continuous optimisation problems. These features are based on a random sample of points considering both the decision and objective space. An AAS study conducted on well-established multi-objective benchmarks demonstrates that the proposed features contribute to successfully distinguishing between algorithms in terms of performance.
arXiv Detail & Related papers (2026-01-24T12:30:42Z) - Multi-Objective Hierarchical Optimization with Large Language Models [41.41567058185742]
Large Language Models (LLMs) are not yet the off-the-shelf choice for driving multi-objective optimization. In this paper, we close this gap by leveraging LLMs as surrogate models and candidate samplers inside a structured hierarchical search strategy.
arXiv Detail & Related papers (2026-01-20T12:10:13Z) - Neural Nonmyopic Bayesian Optimization in Dynamic Cost Settings [73.44599934855067]
LookaHES is a nonmyopic BO framework designed for dynamic, history-dependent cost environments. LookaHES combines a multi-step variant of $H$-Entropy Search with pathwise sampling and neural policy optimization. Our innovation is the integration of neural policies, including large language models, to effectively navigate structured, domain-specific action spaces.
arXiv Detail & Related papers (2026-01-10T09:49:45Z) - BAMBO: Construct Ability and Efficiency LLM Pareto Set via Bayesian Adaptive Multi-objective Block-wise Optimization [4.196004665145396]
BAMBO (Bayesian Adaptive Multi-objective Block-wise Optimization) is a novel framework that automatically constructs the ability-efficiency Pareto set of Large Language Models (LLMs). Formulated as a 1D clustering problem, this strategy leverages a dynamic programming approach to optimally balance intra-block volume and inter-block information distribution.
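The 1D dynamic-programming clustering step mentioned above can be sketched generically: a DP that partitions a sorted 1D sequence into k contiguous clusters minimizing within-cluster squared deviation. This is a standard optimal 1D k-clustering (Ckmeans-style), not BAMBO's exact objective, which trades off intra-block volume against inter-block information:

```python
import numpy as np

def optimal_1d_clustering(values, k):
    """Optimal partition of sorted 1D values into k contiguous clusters
    minimizing within-cluster sum of squared deviations (O(k * n^2) DP)."""
    x = np.sort(np.asarray(values, dtype=float))
    n = len(x)
    # Prefix sums give O(1) SSE for any contiguous segment
    s1 = np.concatenate([[0.0], np.cumsum(x)])
    s2 = np.concatenate([[0.0], np.cumsum(x ** 2)])

    def cost(i, j):  # SSE of segment x[i:j]
        seg_sum = s1[j] - s1[i]
        return (s2[j] - s2[i]) - seg_sum ** 2 / (j - i)

    INF = float("inf")
    dp = np.full((k + 1, n + 1), INF)
    cut = np.zeros((k + 1, n + 1), dtype=int)
    dp[0, 0] = 0.0
    for c in range(1, k + 1):
        for j in range(c, n + 1):
            for i in range(c - 1, j):
                cand = dp[c - 1, i] + cost(i, j)
                if cand < dp[c, j]:
                    dp[c, j], cut[c, j] = cand, i
    # Backtrack the optimal cut points
    bounds, j = [], n
    for c in range(k, 0, -1):
        bounds.append((cut[c, j], j))
        j = cut[c, j]
    return [x[a:b].tolist() for a, b in reversed(bounds)]

print(optimal_1d_clustering([1, 2, 10, 11, 30], 3))
# → [[1.0, 2.0], [10.0, 11.0], [30.0]]
```

Because the input is one-dimensional, the DP is exact, unlike heuristic k-means; this is what makes the block-partitioning formulation tractable.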
arXiv Detail & Related papers (2025-12-10T15:32:56Z) - PORTAL: Controllable Landscape Generator for Continuous Optimization-Part I: Framework [6.776726767599686]
This paper introduces PORTAL, a benchmark generator for continuous optimization research. It provides fine-grained, independent control over basin curvature, conditioning, variable interactions, and surface ruggedness. It also facilitates the creation of diverse datasets for meta-algorithmic research, tailored benchmark suite design, and interactive educational use.
arXiv Detail & Related papers (2025-11-29T02:57:13Z) - BLADE: Benchmark suite for LLM-driven Automated Design and Evolution of iterative optimisation heuristics [2.2485774453793037]
BLADE is a framework for benchmarking LLM-driven AAD methods in a continuous black-box optimisation context. It integrates benchmark problems with instance generators and textual descriptions aimed at capability-focused testing, such as specialisation and information exploitation. BLADE provides an 'out-of-the-box' solution to systematically evaluate LLM-driven AAD approaches.
arXiv Detail & Related papers (2025-04-28T18:34:09Z) - Modeling All Response Surfaces in One for Conditional Search Spaces [69.90317997694218]
This paper proposes a novel approach to model the response surfaces of all subspaces in one. We introduce an attention-based deep feature extractor, capable of projecting configurations with different structures from various subspaces into a unified feature space.
arXiv Detail & Related papers (2025-01-08T03:56:06Z) - Multi-Attribute Constraint Satisfaction via Language Model Rewriting [67.5778646504987]
Multi-Attribute Constraint Satisfaction (MACS) is a method capable of finetuning language models to satisfy user-specified constraints on multiple external real-value attributes. Our work opens new avenues for generalized and real-value multi-attribute control, with implications for diverse applications spanning NLP and bioinformatics.
arXiv Detail & Related papers (2024-12-26T12:36:39Z) - Efficient High-Resolution Visual Representation Learning with State Space Model for Human Pose Estimation [60.80423207808076]
Capturing long-range dependencies while preserving high-resolution visual representations is crucial for dense prediction tasks such as human pose estimation. We propose the Dynamic Visual State Space (DVSS) block, which augments visual state space models with multi-scale convolutional operations. We build HRVMamba, a novel model for efficient high-resolution representation learning.
arXiv Detail & Related papers (2024-10-04T06:19:29Z) - Robust Model-Based Optimization for Challenging Fitness Landscapes [96.63655543085258]
Protein design involves optimization on a fitness landscape.
Leading methods are challenged by sparsity of high-fitness samples in the training set.
We show that this problem of "separation" in the design space is a significant bottleneck in existing model-based optimization tools.
We propose a new approach that uses a novel VAE as its search model to overcome the problem.
arXiv Detail & Related papers (2023-05-23T03:47:32Z) - A Pareto-optimal compositional energy-based model for sampling and optimization of protein sequences [55.25331349436895]
Deep generative models have emerged as a popular machine learning-based approach for inverse problems in the life sciences.
These problems often require sampling new designs that satisfy multiple properties of interest in addition to learning the data distribution.
arXiv Detail & Related papers (2022-10-19T19:04:45Z) - Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.