LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics
- URL: http://arxiv.org/abs/2405.20132v3
- Date: Tue, 20 Aug 2024 11:06:09 GMT
- Title: LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics
- Authors: Niki van Stein, Thomas Bäck
- Abstract summary: This paper introduces a novel Large Language Model Evolutionary Algorithm (LLaMEA) framework.
Given a set of criteria and a task definition (the search space), LLaMEA iteratively generates, mutates and selects algorithms.
We show how this framework can be used to generate novel black-box metaheuristic optimization algorithms automatically.
- Score: 0.023020018305241332
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) such as GPT-4 have demonstrated their ability to understand natural language and generate complex code snippets. This paper introduces a novel Large Language Model Evolutionary Algorithm (LLaMEA) framework, leveraging GPT models for the automated generation and refinement of algorithms. Given a set of criteria and a task definition (the search space), LLaMEA iteratively generates, mutates and selects algorithms based on performance metrics and feedback from runtime evaluations. This framework offers a unique approach to generating optimized algorithms without requiring extensive prior expertise. We show how this framework can be used to generate novel black-box metaheuristic optimization algorithms automatically. LLaMEA generates multiple algorithms that outperform state-of-the-art optimization algorithms (Covariance Matrix Adaptation Evolution Strategy and Differential Evolution) on the five dimensional black box optimization benchmark (BBOB). The algorithms also show competitive performance on the 10- and 20-dimensional instances of the test functions, although they have not seen such instances during the automated generation process. The results demonstrate the feasibility of the framework and identify future directions for automated generation and optimization of algorithms via LLMs.
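To make the described loop concrete, below is a minimal sketch of a (1+1)-style LLaMEA iteration. This is not the authors' implementation: the `llm` stub stands in for a GPT-style API call, and `evaluate` scores generated code on a stand-in sphere function rather than the BBOB suite.

```python
import random

def llm(prompt: str) -> str:
    # Placeholder for a GPT-style model call. A real run would send the prompt
    # (task definition, criteria, runtime feedback) and receive Python source
    # back; here we return a fixed random-search metaheuristic so the loop runs.
    return (
        "def algorithm(f, dim, budget):\n"
        "    best_x, best_y = None, float('inf')\n"
        "    for _ in range(budget):\n"
        "        x = [random.uniform(-5, 5) for _ in range(dim)]\n"
        "        y = f(x)\n"
        "        if y < best_y:\n"
        "            best_x, best_y = x, y\n"
        "    return best_x, best_y\n"
    )

def evaluate(code: str, dim: int = 5, budget: int = 1000) -> float:
    # Compile the generated code and run it on a stand-in objective (sphere);
    # the paper instead scores candidates on the BBOB benchmark functions.
    scope = {"random": random}
    exec(code, scope)
    sphere = lambda x: sum(v * v for v in x)
    _, best_y = scope["algorithm"](sphere, dim, budget)
    return best_y  # lower is better

def llamea_loop(task: str, generations: int = 10) -> tuple[str, float]:
    # (1+1)-style loop: generate, evaluate, mutate, keep the better candidate.
    parent = llm(f"Write a novel metaheuristic for: {task}")
    parent_score = evaluate(parent)
    for _ in range(generations):
        child = llm(f"Improve this algorithm (score {parent_score:.3g}):\n{parent}")
        child_score = evaluate(child)
        if child_score <= parent_score:  # elitist selection
            parent, parent_score = child, child_score
    return parent, parent_score

best_code, best_score = llamea_loop("5-dimensional black-box minimization")
print(best_score)
```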
Related papers
- Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach [4.442101733807905]
This study proposes a new framework that combines a large language model (LLM) with traditional evolutionary algorithms to enhance search capability and generalization performance.
We leverage an auxiliary evaluation function and automated prompt construction within the adaptive mechanism to flexibly adjust the utilization of the LLM.
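As an illustration of the adaptive idea, a cheap auxiliary score can gate each call to the expensive LLM operator. This is a hypothetical sketch; the function names and the diversity-based criterion are ours, not the paper's.

```python
import random

def auxiliary_score(population):
    # Cheap stand-in for an auxiliary evaluation function: measures how
    # stalled the search looks via the population's fitness spread.
    fits = [fitness for _, fitness in population]
    return max(fits) - min(fits)

def should_call_llm(population, threshold=0.1):
    # Spend a costly LLM call only when diversity has collapsed, i.e. the
    # classical variation operators appear to have stagnated.
    return auxiliary_score(population) < threshold

# Usage inside an otherwise classical evolutionary loop (toy population):
population = [([random.uniform(-1, 1)], random.random()) for _ in range(20)]
operator = "llm_mutation" if should_call_llm(population) else "polynomial_mutation"
print(operator)
```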
arXiv Detail & Related papers (2024-10-03T08:37:02Z)
- Designing Network Algorithms via Large Language Models [11.055072300500104]
We introduce NADA, the first framework to autonomously design network algorithms by leveraging the generative capabilities of large language models (LLMs).
We demonstrate that NADA produces novel ABR algorithms that consistently outperform the original algorithm in diverse network environments, including broadband, satellite, 4G, and 5G.
arXiv Detail & Related papers (2024-04-02T03:43:55Z)
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method that reinforcement-learns a black-box optimization (BBO) algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is augmenting the optimization histories with regret-to-go tokens, which represent an algorithm's performance as the cumulative regret over the future part of each history.
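Under our reading of the abstract (a sketch, not the authors' code), the regret-to-go token at step t is the regret accumulated over the remaining steps of the history, so low values mark trajectories whose future behaves well:

```python
def regret_to_go(history, f_star):
    # history: best-so-far objective values from one run of a behavior
    # algorithm; f_star: the known optimum used to define regret.
    # Token at step t = cumulative regret from step t to the end.
    regrets = [y - f_star for y in history]
    tokens, acc = [], 0.0
    for r in reversed(regrets):
        acc += r
        tokens.append(acc)
    return list(reversed(tokens))

print(regret_to_go([5.0, 3.0, 1.0, 0.5], f_star=0.0))  # [9.5, 4.5, 1.5, 0.5]
```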
arXiv Detail & Related papers (2024-02-27T11:32:14Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances using generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Algorithm Evolution Using Large Language Model [18.03090066194074]
We propose a novel approach called Algorithm Evolution using Large Language Model (AEL).
AEL performs algorithm-level evolution without any model training.
This significantly reduces human effort and the need for domain knowledge.
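The flavor of algorithm-level evolution can be sketched as variation operators that are prompts rather than parameter updates; the `llm` stub below is a placeholder, not the paper's interface.

```python
def llm(prompt: str) -> str:
    # Placeholder for an LLM API call; a real system would return a new
    # algorithm description or code generated from the prompt.
    return "child algorithm derived from both parents"

def crossover(parent_a: str, parent_b: str) -> str:
    # Variation acts on whole algorithm descriptions: the LLM is asked to
    # combine the ideas of two parents, so no model weights are trained.
    prompt = (
        "Combine the strengths of these two optimization algorithms "
        f"into a new one:\nA: {parent_a}\nB: {parent_b}"
    )
    return llm(prompt)

child = crossover("greedy local search", "simulated annealing")
print(child)
```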
arXiv Detail & Related papers (2023-11-26T09:38:44Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adapt the Monte Carlo method to tree search, equipping it with rejection sampling and equivalent-form detection.
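A toy sketch of the idea follows. Random rollouts stand in for the full Monte Carlo tree search, and the grammar and canonicalization are illustrative, not the paper's actual search space.

```python
import random

# Tiny stand-in for a super-tree of update-rule expressions: each node
# expands a grammar symbol; leaves are complete math expressions.
GRAMMAR = {
    "expr": ["( expr op expr )", "term"],
    "op": ["+", "*"],
    "term": ["g", "m", "lr"],  # gradient, momentum, learning rate
}

def rollout(symbol="expr", depth=0):
    # Monte Carlo descent: pick a random production and recurse on
    # nonterminals; cap the depth so expressions stay finite.
    prod = "term" if (symbol == "expr" and depth > 2) else random.choice(GRAMMAR[symbol])
    return " ".join(
        rollout(tok, depth + 1) if tok in GRAMMAR else tok
        for tok in prod.split()
    )

def sample_unique(n, max_tries=1000):
    # Rejection sampling with a crude equivalent-form check: sorting the
    # tokens makes "g + m" and "m + g" canonicalize identically.
    seen, out = set(), []
    for _ in range(max_tries):
        if len(out) == n:
            break
        expr = rollout()
        key = " ".join(sorted(expr.split()))  # toy canonical form
        if key not in seen:
            seen.add(key)
            out.append(expr)
    return out

for expr in sample_unique(5):
    print(expr)
```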
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
- Algorithm Selection on a Meta Level [58.720142291102135]
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
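A minimal sketch of the meta level, with illustrative names and a toy performance history (not the paper's framework): the meta-selector picks the selector whose past recommendations performed best, then defers to it.

```python
def meta_select(selectors, instance, history):
    # Meta algorithm selection: instead of picking an algorithm directly,
    # pick the *selector* with the best average observed performance,
    # then let it choose an algorithm for the given problem instance.
    def avg_perf(sel):
        scores = [perf for s, perf in history if s is sel]
        return sum(scores) / len(scores) if scores else float("-inf")
    best_selector = max(selectors, key=avg_perf)
    return best_selector(instance)

# Toy usage: two trivial selectors over an instance-feature dict.
sel_a = lambda inst: "CMA-ES" if inst["dim"] > 10 else "DE"
sel_b = lambda inst: "DE"
history = [(sel_a, 0.9), (sel_b, 0.6), (sel_a, 0.8)]
print(meta_select([sel_a, sel_b], {"dim": 20}, history))  # CMA-ES
```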
arXiv Detail & Related papers (2021-07-20T11:23:21Z)
- Meta Learning Black-Box Population-Based Optimizers [0.0]
We propose the use of meta-learning to infer population-based black-box optimizers that can generalize across tasks.
We show that the meta-loss function encourages a learned algorithm to alter its search behavior so that it can easily fit into a new context.
arXiv Detail & Related papers (2021-03-05T08:13:25Z)
- Evolving Reinforcement Learning Algorithms [186.62294652057062]
We propose a method for meta-learning reinforcement learning algorithms.
The learned algorithms are domain-agnostic and can generalize to new environments not seen during training.
We highlight two learned algorithms that obtain good generalization performance on classical control tasks, gridworld-type tasks, and Atari games.
arXiv Detail & Related papers (2021-01-08T18:55:07Z)
- Automatic Generation of Algorithms for Black-Box Robust Optimisation Problems [0.0]
We develop algorithms capable of tackling robust black-box optimisation problems, where the number of model runs is limited.
We employ an automatic algorithm-generation approach: Grammar-Guided Genetic Programming.
Our algorithmic building blocks combine elements of existing techniques and new features, resulting in the investigation of a novel solution space.
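A toy sketch of grammar-guided variation follows, with an illustrative grammar of heuristic building blocks rather than the paper's actual grammar. The key property is that mutation only substitutes blocks that are legal at the same grammatical position, so offspring are always valid algorithms.

```python
import random

# Illustrative grammar of algorithmic building blocks (not the paper's).
GRAMMAR = {
    "heuristic": [["sample", "accept"], ["sample", "perturb", "accept"]],
    "sample": ["uniform_sample", "gaussian_sample"],
    "perturb": ["small_step", "large_step"],
    "accept": ["accept_if_better", "accept_always"],
}

def derive(symbol):
    # Expand one grammar symbol into a list of terminal building blocks.
    choice = random.choice(GRAMMAR[symbol])
    if isinstance(choice, list):
        return [block for sym in choice for block in derive(sym)]
    return [choice]

def mutate(individual):
    # Grammar-guided mutation: swap one block for another that is legal
    # at the same grammatical position.
    i = random.randrange(len(individual))
    for symbol, options in GRAMMAR.items():
        if symbol != "heuristic" and individual[i] in options:
            individual = individual.copy()
            individual[i] = random.choice(options)
    return individual

parent = derive("heuristic")
print(parent, "->", mutate(parent))
```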
arXiv Detail & Related papers (2020-04-15T18:51:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.