Behaviour Space Analysis of LLM-driven Meta-heuristic Discovery
- URL: http://arxiv.org/abs/2507.03605v1
- Date: Fri, 04 Jul 2025 14:19:39 GMT
- Title: Behaviour Space Analysis of LLM-driven Meta-heuristic Discovery
- Authors: Niki van Stein, Haoran Yin, Anna V. Kononova, Thomas Bäck, Gabriela Ochoa
- Abstract summary: We investigate meta-heuristic optimisation algorithms automatically generated by Large Language Model driven algorithm discovery methods. We iteratively evolve black-box optimisation heuristics, evaluated on 10 functions from the BBOB benchmark suite. We log behavioural metrics, including exploration, exploitation, convergence, and stagnation measures, for each run, and analyse these via visual projections and network-based representations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We investigate the behaviour space of meta-heuristic optimisation algorithms automatically generated by Large Language Model driven algorithm discovery methods. Using the Large Language Evolutionary Algorithm (LLaMEA) framework with a GPT o4-mini LLM, we iteratively evolve black-box optimisation heuristics, evaluated on 10 functions from the BBOB benchmark suite. Six LLaMEA variants, featuring different mutation prompt strategies, are compared and analysed. We log dynamic behavioural metrics, including exploration, exploitation, convergence, and stagnation measures, for each run, and analyse these via visual projections and network-based representations. Our analysis combines behaviour-based projections, Code Evolution Graphs built from static code features, performance convergence curves, and behaviour-based Search Trajectory Networks. The results reveal clear differences in search dynamics and algorithm structures across LLaMEA configurations. Notably, the variant that employs both a code simplification prompt and a random perturbation prompt in a 1+1 elitist evolution strategy achieved the best performance, with the highest Area Over the Convergence Curve (AOCC). Behaviour-space visualisations show that higher-performing algorithms exhibit more intensive exploitation behaviour and faster convergence with less stagnation. Our findings demonstrate how behaviour-space analysis can explain why certain LLM-designed heuristics outperform others and how LLM-driven algorithm discovery navigates the open-ended and complex search space of algorithms. These findings provide insights to guide the future design of adaptive LLM-driven algorithm generators.
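The abstract's core loop, a 1+1 elitist evolution strategy scored by Area Over the Convergence Curve, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LLM mutation prompts and BBOB evaluation are replaced by numeric stand-ins (`mutate`, `evaluate` are hypothetical placeholders), and the log-error bounds in `aocc` are assumed BBOB-style conventions.

```python
import math
import random

LOG_LOWER, LOG_UPPER = -8.0, 2.0  # assumed log10-error bounds for AOCC normalisation


def aocc(best_so_far_errors):
    """Area Over the Convergence Curve: averages (1 - normalised log error)
    over the evaluation budget; 1.0 means instant convergence, higher is better."""
    total = 0.0
    for err in best_so_far_errors:
        log_err = math.log10(max(err, 10 ** LOG_LOWER))
        norm = (log_err - LOG_LOWER) / (LOG_UPPER - LOG_LOWER)
        total += 1.0 - min(max(norm, 0.0), 1.0)
    return total / len(best_so_far_errors)


def mutate(candidate, rng):
    # Stand-in for an LLM mutation prompt (e.g. "simplify the code" or
    # "apply a random perturbation"); here we just perturb a numeric vector.
    return [x + rng.gauss(0.0, 0.1) for x in candidate]


def evaluate(candidate):
    # Stand-in fitness: LLaMEA would run the generated heuristic on BBOB
    # functions and score its convergence curve; we fake a flat error curve.
    err = max(sum(x * x for x in candidate), 1e-9)
    return aocc([err] * 10)


def one_plus_one_elitist(budget=50, seed=0):
    """1+1 elitist ES: one parent, one child per iteration, keep the better."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1.0, 1.0) for _ in range(5)]
    parent_fit = evaluate(parent)
    for _ in range(budget):
        child = mutate(parent, rng)
        child_fit = evaluate(child)
        if child_fit >= parent_fit:  # elitism: replace only if no worse
            parent, parent_fit = child, child_fit
    return parent, parent_fit
```

Because of elitism, the best-so-far fitness is monotonically non-decreasing over the run, which is what the paper's convergence curves track per LLaMEA variant.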
Related papers
- Automated Algorithmic Discovery for Gravitational-Wave Detection Guided by LLM-Informed Evolutionary Monte Carlo Tree Search [10.617016967920863]
Evo-MCTS is a framework that combines tree-structured search with evolutionary optimization and large language models to create interpretable algorithmic solutions. Our framework achieves a 20.2% improvement over state-of-the-art gravitational-wave detection algorithms on the MLGWSC-1 benchmark dataset.
arXiv Detail & Related papers (2025-08-05T17:18:20Z) - Agentic Reinforced Policy Optimization [66.96989268893932]
Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. Current RL algorithms inadequately balance the models' intrinsic long-horizon reasoning capabilities and their proficiency in multi-turn tool interactions. We propose Agentic Reinforced Policy Optimization (ARPO), a novel agentic RL algorithm tailored for training multi-turn LLM-based agents.
arXiv Detail & Related papers (2025-07-26T07:53:11Z) - Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization [5.481047026874548]
This paper presents a novel graph neural network (GNN) based DRL approach to configure multi-objective evolutionary algorithms. We model dynamic algorithm configuration as a Markov decision process, representing the convergence of solutions in the objective space by a graph. Experiments on diverse MOCO challenges indicate that our method outperforms traditional and DRL-based algorithm configuration methods in terms of efficacy and adaptability.
arXiv Detail & Related papers (2025-05-22T09:53:54Z) - Fitness Landscape of Large Language Model-Assisted Automated Algorithm Search [15.767411435705752]
We present and analyze the fitness landscape of Large Language Model-assisted Algorithm Search (LAS). Our findings reveal that LAS landscapes are highly multimodal and rugged. We also demonstrate how population size influences exploration-exploitation trade-offs and the evolving trajectory of elite algorithms.
arXiv Detail & Related papers (2025-04-28T09:52:41Z) - Algorithm Discovery With LLMs: Evolutionary Search Meets Reinforcement Learning [12.037588566211348]
We propose to augment evolutionary search by continuously refining the search operator through reinforcement learning (RL) fine-tuning. Our experiments demonstrate that integrating RL with evolutionary search accelerates the discovery of superior algorithms.
arXiv Detail & Related papers (2025-04-07T14:14:15Z) - Optimizing Photonic Structures with Large Language Model Driven Algorithm Discovery [2.2485774453793037]
We introduce structured prompt engineering tailored to multilayer photonic problems such as Bragg mirrors, ellipsometry inverse analysis, and solar-cell antireflection coatings. We explore multiple evolutionary strategies, including (1+1), (1+5), and (2+10), to balance exploration and exploitation. Our experiments show that LLM-generated algorithms, developed on small-scale problem instances, can match or surpass established methods.
arXiv Detail & Related papers (2025-03-25T15:05:25Z) - EVOLvE: Evaluating and Optimizing LLMs For Exploration [76.66831821738927]
Large language models (LLMs) remain under-studied in scenarios requiring optimal decision-making under uncertainty.
We measure LLMs' (in)ability to make optimal decisions in bandits, a stateless reinforcement learning setting relevant to many applications.
Motivated by the existence of optimal exploration algorithms, we propose efficient ways to integrate this algorithmic knowledge into LLMs.
arXiv Detail & Related papers (2024-10-08T17:54:03Z) - On the Design and Analysis of LLM-Based Algorithms [74.7126776018275]
Large language models (LLMs) are used as sub-routines in algorithms.
LLMs have achieved remarkable empirical success.
Our proposed framework holds promise for advancing LLM-based algorithms.
arXiv Detail & Related papers (2024-07-20T07:39:07Z) - Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of a stochastic optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.