Large Language Model Assisted Automated Algorithm Generation and Evolution via Meta-black-box optimization
- URL: http://arxiv.org/abs/2509.13251v2
- Date: Fri, 19 Sep 2025 01:51:52 GMT
- Title: Large Language Model Assisted Automated Algorithm Generation and Evolution via Meta-black-box optimization
- Authors: Xu Yang, Rui Wang, Kaiwen Li, Wenhua Li, Weixiong Huang
- Abstract summary: AwesomeDE is proposed, which leverages large language models (LLMs) as a meta-optimizer to generate update rules for constrained evolutionary algorithms without human intervention. Key components, including prompt design and iterative refinement, are systematically analyzed to determine their impact on design quality. Experimental results demonstrate that the proposed approach outperforms existing methods in terms of computational efficiency and solution accuracy.
- Score: 9.184788298623062
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Meta-black-box optimization has been significantly advanced through the use of large language models (LLMs), yet it remains in its infancy for constrained evolutionary optimization. In this work, AwesomeDE is proposed, which leverages LLMs as a meta-optimizer to generate update rules for constrained evolutionary algorithms without human intervention. Meanwhile, the $RTO^2H$ framework is introduced to standardize the prompt design for LLMs. The meta-optimizer is trained on a diverse set of constrained optimization problems. Key components, including prompt design and iterative refinement, are systematically analyzed to determine their impact on design quality. Experimental results demonstrate that the proposed approach outperforms existing methods in terms of computational efficiency and solution accuracy. Furthermore, AwesomeDE is shown to generalize well across distinct problem domains, suggesting its potential for broad applicability. This research contributes to the field by providing a scalable and data-driven methodology for automated constrained algorithm design, while also highlighting limitations and directions for future work.
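The abstract describes this loop only at a high level. The sketch below is a minimal illustration of what an LLM-as-meta-optimizer refinement loop could look like: `llm_generate` (a prompt-to-source-code callable) and `evaluate_rule` (a benchmark scorer) are hypothetical stand-ins, and the Role/Task/Output/History prompt fields are illustrative, not the paper's actual $RTO^2H$ template.

```python
import numpy as np

def llm_propose_update_rule(llm_generate, problem_desc, history):
    """Ask an LLM for a DE update rule as Python source, then compile it.
    `llm_generate` is a hypothetical prompt -> source-code callable."""
    prompt = (
        "Role: you design differential-evolution update rules.\n"
        f"Task: constrained optimization problem -- {problem_desc}\n"
        "Output: a single Python function "
        "`update(pop, fitness, violation, rng)` returning a new "
        "population array of the same shape.\n"
        f"History of earlier rules and their scores: {history}\n"
    )
    namespace = {"np": np}
    exec(llm_generate(prompt), namespace)  # sandbox untrusted code in practice
    return namespace["update"]

def meta_optimize(llm_generate, evaluate_rule, problem_desc, rounds=10):
    """Outer refinement loop: score each generated rule (lower is better)
    and feed the accumulated history back into the next prompt."""
    history, best = [], (None, float("inf"))
    for r in range(rounds):
        rule = llm_propose_update_rule(llm_generate, problem_desc, history)
        score = evaluate_rule(rule)  # e.g. mean best feasible fitness
        history.append((r, score))
        if score < best[1]:
            best = (rule, score)
    return best
```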
Related papers
- From Heuristic Selection to Automated Algorithm Design: LLMs Benefit from Strong Priors [4.253872963674906]
Large Language Models (LLMs) have been widely adopted for automated algorithm design.
We show that providing high-quality algorithmic code examples can substantially improve the performance of LLM-driven optimization.
arXiv Detail & Related papers (2026-03-03T09:27:52Z) - Task-free Adaptive Meta Black-box Optimization [55.461814601130044]
We propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using solely optimization data from the target task.
Unlike conventional metaBBO frameworks that decouple the meta-training and optimization phases, ABOM introduces a closed-loop parameter learning mechanism in which parameterized evolutionary operators continuously self-update.
This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle path-planning problems without any handcrafted training tasks.
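The summary names the mechanism (operators that self-update from optimization data alone) without its details. As a minimal stand-in for such a closed loop, the sketch below uses classic jDE-style self-adaptation, where each individual's F and CR parameters are occasionally resampled and survive only when they produce an improvement; this illustrates the general idea, not ABOM's learned model.

```python
import numpy as np

def self_adaptive_de(f, bounds, pop_size=30, iters=200, seed=0):
    """Closed-loop parameter adaptation sketch: per-individual F/CR are
    occasionally resampled and kept only if the resulting trial wins
    (jDE-style; a stand-in for learned self-updating operators)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([f(x) for x in pop])
    F = np.full(pop_size, 0.5)
    CR = np.full(pop_size, 0.9)
    for _ in range(iters):
        for i in range(pop_size):
            # Occasionally resample this individual's own parameters.
            Fi = rng.uniform(0.1, 1.0) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            choices = np.delete(np.arange(pop_size), i)
            a, b, c = pop[rng.choice(choices, 3, replace=False)]
            mutant = np.clip(a + Fi * (b - c), lo, hi)
            cross = rng.random(dim) < CRi
            cross[rng.integers(dim)] = True   # guarantee one mutated gene
            trial = np.where(cross, mutant, pop[i])
            f_trial = f(trial)
            if f_trial <= fit[i]:             # success: parameters persist
                pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi
    return pop[np.argmin(fit)], float(fit.min())

# Example: sphere function on [-5, 5]^3.
best_x, best_f = self_adaptive_de(lambda x: float(np.sum(x**2)),
                                  bounds=[(-5, 5)] * 3)
```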
arXiv Detail & Related papers (2026-01-29T09:54:10Z) - LLMize: A Framework for Large Language Model-Based Numerical Optimization [0.0]
Large language models (LLMs) have recently shown strong reasoning capabilities beyond traditional language tasks.
This paper presents LLMize, an open-source Python framework that enables LLM-driven optimization.
arXiv Detail & Related papers (2025-12-30T20:05:30Z) - Generalizable Heuristic Generation Through Large Language Models with Meta-Optimization [14.919482411153185]
Heuristic design with large language models (LLMs) has emerged as a promising approach for tackling optimization problems.
Existing approaches often rely on manually predefined evolutionary frameworks and single-task training schemes.
We propose Meta-Optimization of Heuristics (MoH), a novel framework that operates at the level of meta-learning.
arXiv Detail & Related papers (2025-05-27T08:26:27Z) - From Understanding to Excelling: Template-Free Algorithm Design through Structural-Functional Co-Evolution [39.42526347710991]
Large language models (LLMs) have greatly accelerated the automation of algorithm generation and optimization.
We introduce an end-to-end algorithm generation and optimization framework based on LLMs.
Our approach utilizes the deep semantic understanding of LLMs to convert natural language requirements or human-authored papers into code solutions.
arXiv Detail & Related papers (2025-03-13T08:26:18Z) - Reinforcement learning Based Automated Design of Differential Evolution Algorithm for Black-box Optimization [14.116216795259554]
The differential evolution (DE) algorithm is recognized as one of the most effective evolutionary algorithms.
We introduce a novel framework that employs reinforcement learning (RL) to automatically design DE for black-box optimization.
RL acts as an advanced meta-optimizer, generating a customized DE configuration.
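As a minimal illustration of "RL as a meta-optimizer generating a DE configuration", the class below uses an epsilon-greedy bandit over candidate (F, CR) settings, rewarded by per-generation fitness improvement. The bandit is a deliberate simplification of whatever RL agent the paper actually trains.

```python
import numpy as np

class EpsilonGreedyConfigurator:
    """Bandit stand-in for an RL meta-optimizer: pick a DE configuration
    (F, CR) each generation, then reward it with the observed improvement."""

    def __init__(self, configs, eps=0.1, seed=0):
        self.configs = configs              # e.g. [(0.5, 0.9), (0.8, 0.5)]
        self.eps = eps
        self.rng = np.random.default_rng(seed)
        self.q = np.zeros(len(configs))     # running mean reward per arm
        self.n = np.zeros(len(configs))

    def select(self):
        explore = self.rng.random() < self.eps
        self.k = (self.rng.integers(len(self.configs)) if explore
                  else int(np.argmax(self.q)))
        return self.configs[self.k]

    def update(self, reward):
        self.n[self.k] += 1
        self.q[self.k] += (reward - self.q[self.k]) / self.n[self.k]

# Per generation: F, CR = agent.select(); run one DE generation with
# (F, CR); then agent.update(best_before - best_after).
```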
arXiv Detail & Related papers (2025-01-22T13:41:47Z) - A Survey on Inference Optimization Techniques for Mixture of Experts Models [50.40325411764262]
Large-scale Mixture of Experts (MoE) models offer enhanced model capacity and computational efficiency through conditional computation.
However, deploying and running inference on these models presents significant challenges in computational resources, latency, and energy efficiency.
This survey analyzes optimization techniques for MoE models across the entire system stack.
arXiv Detail & Related papers (2024-12-18T14:11:15Z) - Deep Insights into Automated Optimization with Large Language Models and Evolutionary Algorithms [3.833708891059351]
Large Language Models (LLMs) and Evolutionary Algorithms (EAs) offer a promising new approach to overcoming limitations and making optimization more automated.
LLMs act as dynamic agents that can generate, refine, and interpret optimization strategies.
EAs efficiently explore complex solution spaces through evolutionary operators.
arXiv Detail & Related papers (2024-10-28T09:04:49Z) - Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically find new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Unleashing the Potential of Large Language Models as Prompt Optimizers: Analogical Analysis with Gradient-based Model Optimizers [108.72225067368592]
We propose a novel perspective to investigate the design of LLM-based prompt optimizers.
We identify two pivotal factors in model parameter learning: the update direction and the update method.
We develop a capable gradient-inspired LLM-based prompt optimizer called GPO.
arXiv Detail & Related papers (2024-02-27T15:05:32Z) - End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
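The OWA objective named above is compact enough to state concretely: sort the objective values in decreasing order and take a weighted sum, so that larger weights on earlier positions emphasize the worst-off components; the sort is exactly what makes the criterion nondifferentiable. A minimal NumPy sketch (the weights are illustrative):

```python
import numpy as np

def owa(costs, weights):
    """Ordered Weighted Averaging of a vector of objective values.

    Values are sorted in decreasing order and dotted with the weights;
    decreasing weights put the most mass on the worst objectives, which
    is what gives OWA its fairness interpretation. The sort makes the
    function piecewise-linear and nondifferentiable at ties.
    """
    costs = np.asarray(costs, dtype=float)
    return float(np.sort(costs)[::-1] @ np.asarray(weights, dtype=float))

# Example: the largest objective gets weight 0.5 regardless of which
# component it is: 0.5*3 + 0.3*2 + 0.2*1 = 2.3.
owa([3.0, 1.0, 2.0], [0.5, 0.3, 0.2])
```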
arXiv Detail & Related papers (2024-02-12T16:33:35Z) - An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
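ZO-signGD is concrete enough to sketch directly: estimate the gradient of a black-box objective with two-point finite differences along random directions, then step against the sign of the estimate. In the sketch below, the toy quadratic stands in for a Guacamol molecular objective, and all hyperparameters are illustrative defaults.

```python
import numpy as np

def zo_sign_gd(f, x0, step=0.01, mu=1e-3, n_dirs=20, iters=200, seed=0):
    """Zeroth-order sign-based gradient descent: average two-point
    finite-difference estimates along random Gaussian directions,
    then move against the sign of the estimated gradient."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float).copy()
    for _ in range(iters):
        g = np.zeros_like(x)
        for _ in range(n_dirs):
            u = rng.standard_normal(x.shape)
            g += (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
        g /= n_dirs
        x -= step * np.sign(g)   # sign step: only the direction is used
    return x

# Example: a black-box quadratic standing in for a molecular objective.
target = np.array([1.0, -2.0, 0.5])
x_opt = zo_sign_gd(lambda x: float(np.sum((x - target) ** 2)),
                   x0=np.zeros(3))
```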
arXiv Detail & Related papers (2022-10-27T01:58:10Z) - Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z)