Controlling the Mutation in Large Language Models for the Efficient Evolution of Algorithms
- URL: http://arxiv.org/abs/2412.03250v1
- Date: Wed, 04 Dec 2024 11:49:22 GMT
- Title: Controlling the Mutation in Large Language Models for the Efficient Evolution of Algorithms
- Authors: Haoran Yin, Anna V. Kononova, Thomas Bäck, Niki van Stein
- Abstract summary: This paper introduces a novel approach to mutation control within evolutionary frameworks, inspired by the theory of genetic algorithms.
We propose dynamic mutation prompts that adaptively regulate mutation rates, leveraging a heavy-tailed power-law distribution to balance exploration and exploitation.
Experiments show that the introduction of these dynamic rates can improve the convergence speed and adaptability of LLaMEA.
- Score: 2.2485774453793037
- Abstract: The integration of Large Language Models (LLMs) with evolutionary computation (EC) has introduced a promising paradigm for automating the design of metaheuristic algorithms. However, existing frameworks, such as the Large Language Model Evolutionary Algorithm (LLaMEA), often lack precise control over mutation mechanisms, leading to inefficiencies in solution space exploration and potentially suboptimal convergence. This paper introduces a novel approach to mutation control within LLM-driven evolutionary frameworks, inspired by the theory of genetic algorithms. Specifically, we propose dynamic mutation prompts that adaptively regulate mutation rates, leveraging a heavy-tailed power-law distribution to balance exploration and exploitation. Experiments using GPT-3.5-turbo and GPT-4o models demonstrate that GPT-3.5-turbo fails to adhere to the specific mutation instructions, while GPT-4o is able to adapt its mutation behaviour based on the engineered dynamic prompts. Further experiments show that the introduction of these dynamic rates can improve the convergence speed and adaptability of LLaMEA when using GPT-4o. This work sets the starting point for better-controlled LLM-based mutations in code optimization tasks, paving the way for further advancements in automated metaheuristic design.
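To illustrate the mechanism: a minimal sketch of heavy-tailed mutation-rate sampling, assuming (as in the fast-GA literature) a truncated power law over mutation strengths. The prompt wording and the `mutation_prompt` helper are hypothetical, not the paper's exact prompts.

```python
import random

def sample_mutation_rate(n: int, beta: float = 1.5) -> int:
    """Sample a mutation strength k from a truncated power law
    P(k) ~ k**(-beta) over k = 1 .. n // 2 (fast-GA style)."""
    ks = list(range(1, n // 2 + 1))
    weights = [k ** (-beta) for k in ks]
    return random.choices(ks, weights=weights, k=1)[0]

def mutation_prompt(code: str, pct: float) -> str:
    """Embed the sampled strength in a dynamic mutation prompt
    (illustrative wording only)."""
    return (f"Refine the algorithm below, changing roughly {pct:.0f}% of its "
            f"lines while keeping its overall structure:\n\n{code}")

n_lines = 40                                  # lines in the candidate algorithm
k = sample_mutation_rate(n_lines)
print(mutation_prompt("<candidate code>", 100 * k / n_lines))
```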
Related papers
- ControllableGPT: A Ground-Up Designed Controllable GPT for Molecule Optimization [6.900025190052277]
We introduce ControllableGPT, a controllable training framework for large language models.
It is inspired by the biological processes of growth and evolution, which involve the expansion, shrinking, and mutation of sequences.
It enables the precise management of specific locations and ranges within a sequence, while maintaining the integrity of any specified positions or subsequences.
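A minimal sketch of the position-preserving constraint described above, assuming a token-level sequence; this illustrates the constraint only, not ControllableGPT's training framework.

```python
import random

def mutate_with_frozen(seq: list[str], vocab: list[str],
                       frozen: set[int], n_edits: int = 2) -> list[str]:
    """Substitute tokens at editable positions only, leaving the
    specified (frozen) indices untouched."""
    editable = [i for i in range(len(seq)) if i not in frozen]
    out = list(seq)
    for i in random.sample(editable, min(n_edits, len(editable))):
        out[i] = random.choice(vocab)
    return out

seq = list("CCOCCN")                       # toy molecule-like token sequence
print(mutate_with_frozen(seq, list("CNOS"), frozen={0, 5}))
```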
arXiv Detail & Related papers (2025-02-15T01:49:35Z)
- A Simple yet Effective DDG Predictor is An Unsupervised Antibody Optimizer and Explainer [53.85265022754878]
We propose a lightweight DDG predictor (Light-DDG) for fast mutation screening.
We also release a large-scale dataset containing millions of mutation data for pre-training Light-DDG.
For the target antibody, we propose a novel Mutation Explainer to learn mutation preferences.
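A sketch of the single-point mutation screening loop such a predictor enables; `fake_ddg` is a stand-in scorer, not Light-DDG itself.

```python
import random

def screen_mutations(wild_type: str, predict_ddg, top_k: int = 5,
                     alphabet: str = "ACDEFGHIKLMNPQRSTVWY"):
    """Enumerate single-point mutants and rank them by predicted DDG
    (lower = more stabilizing under the usual sign convention)."""
    scored = []
    for pos, wt in enumerate(wild_type):
        for aa in alphabet:
            if aa != wt:
                mutant = wild_type[:pos] + aa + wild_type[pos + 1:]
                scored.append((predict_ddg(wild_type, mutant), f"{wt}{pos + 1}{aa}"))
    return sorted(scored)[:top_k]

# Stand-in scorer for illustration; Light-DDG would be called here instead.
fake_ddg = lambda wt, mut: random.uniform(-2.0, 2.0)
print(screen_mutations("ACDK", fake_ddg))
```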
arXiv Detail & Related papers (2025-02-10T09:26:57Z)
- Preparing Spin Squeezed States via Adaptive Genetic Algorithm [9.168152138847445]
We introduce a novel strategy employing an adaptive genetic algorithm (GA) for iterative optimization of control sequences to generate quantum nonclassical states.
Inspired by Darwinian evolution, the algorithm iteratively refines control sequences using crossover, mutation, and elimination strategies.
Our approach, compared to constant control schemes, yields a variety of control sequences capable of maintaining squeezing for the collective spin model.
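A toy sketch of such an adaptive GA loop (crossover, mutation, elimination) with a stagnation-driven mutation rate; the adaptation rule here is illustrative, not the paper's exact scheme.

```python
import random

def adaptive_ga(fitness, n_genes=16, pop=30, gens=100):
    """Toy adaptive GA: the mutation rate grows when the best
    fitness stagnates and resets on progress."""
    P = [[random.random() for _ in range(n_genes)] for _ in range(pop)]
    rate, best = 1.0 / n_genes, float("-inf")
    for _ in range(gens):
        P.sort(key=fitness, reverse=True)
        P = P[:pop // 2]                          # elimination
        if fitness(P[0]) > best:
            best, rate = fitness(P[0]), 1.0 / n_genes   # progress: reset rate
        else:
            rate = min(0.5, rate * 2)             # stagnation: explore more
        while len(P) < pop:
            a, b = random.sample(P[:10], 2)
            cut = random.randrange(1, n_genes)    # one-point crossover
            child = a[:cut] + b[cut:]
            child = [g + random.gauss(0, 0.1) if random.random() < rate else g
                     for g in child]              # per-gene mutation
            P.append(child)
    return max(P, key=fitness)

print(adaptive_ga(lambda x: -sum((g - 0.5) ** 2 for g in x))[:4])
```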
arXiv Detail & Related papers (2024-10-20T12:15:11Z)
- AffineQuant: Affine Transformation Quantization for Large Language Models [58.45460102764]
Post-Training Quantization (PTQ) has emerged as a subject of considerable interest due to its compression efficiency and cost-effectiveness in the context of training.
Existing PTQ methods for Large-scale Language Models (LLMs) limit the optimization scope to scaling transformations between pre- and post-quantization weights.
In this paper, we advocate for direct optimization using equivalent affine transformations in PTQ (AffineQuant).
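The core idea can be sketched with a random invertible matrix standing in for the optimized affine transform; AffineQuant optimizes `A` to minimize quantization error, which this toy example does not do.

```python
import numpy as np

def quantize(w, bits=4):
    """Uniform symmetric per-tensor quantization (round-to-nearest)."""
    s = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / s) * s

rng = np.random.default_rng(0)
W, x = rng.normal(size=(8, 8)), rng.normal(size=8)

# Equivalent affine transform: y = (W @ A) @ (inv(A) @ x), so the
# full-precision output is unchanged while the quantized matrix differs.
A = np.eye(8) + 0.1 * rng.normal(size=(8, 8))   # illustrative, not optimized

err_plain = np.linalg.norm(W @ x - quantize(W) @ x)
err_affine = np.linalg.norm(W @ x - quantize(W @ A) @ (np.linalg.inv(A) @ x))
print(err_plain, err_affine)
```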
arXiv Detail & Related papers (2024-03-19T08:40:21Z)
- LLM Guided Evolution - The Automation of Models Advancing Models [0.0]
"Guided Evolution" (GE) is a novel framework that diverges from traditional machine learning approaches.
"Evolution of Thought" (EoT) enhances GE by enabling LLMs to reflect on and learn from the outcomes of previous mutations.
Our application of GE in evolving the ExquisiteNetV2 model demonstrates its efficacy.
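A sketch of an EoT-style reflective prompt, assuming mutation outcomes are logged as (change, fitness) pairs; the wording is hypothetical.

```python
def eot_prompt(parent_code: str, history: list[dict]) -> str:
    """Build an 'Evolution of Thought' style prompt that asks the LLM to
    reflect on earlier mutation outcomes before proposing a new one."""
    feedback = "\n".join(
        f"- change: {h['change']!r} -> fitness {h['fitness']:.3f}"
        for h in history[-5:]                     # last few outcomes only
    )
    return (
        "Previous mutations and their measured fitness:\n"
        f"{feedback}\n\n"
        "Reflect on which changes helped, then propose one new mutation "
        "of the code below:\n\n" + parent_code
    )

print(eot_prompt("def model(): ...",
                 [{"change": "wider conv", "fitness": 0.81},
                  {"change": "extra skip connection", "fitness": 0.84}]))
```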
arXiv Detail & Related papers (2024-03-18T03:44:55Z)
- Benchmarking GPT-4 on Algorithmic Problems: A Systematic Evaluation of Prompting Strategies [47.129504708849446]
Large Language Models (LLMs) have revolutionized the field of Natural Language Processing.
LLMs lack systematic generalization, i.e., the ability to extrapolate learned statistical regularities outside the training distribution.
In this work, we offer a systematic benchmarking of GPT-4, one of the most advanced LLMs available.
arXiv Detail & Related papers (2024-02-27T10:44:52Z)
- Genetic Algorithm enhanced by Deep Reinforcement Learning in parent selection mechanism and mutation: Minimizing makespan in permutation flow shop scheduling problems [0.18846515534317265]
The proposed RL+GA method was specifically tested on the flow shop scheduling problem (FSP).
The hybrid algorithm incorporates neural networks (NN) and uses the off-policy Q-learning method.
Results of the study highlight the effectiveness of the RL+GA approach in improving the performance of the primitive GA.
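A toy sketch of the off-policy Q-learning controller, assuming a binary "improved last generation" state and a two-level mutation action; the state, action, and reward definitions are illustrative.

```python
import random

# Q-table over (state, action): state 1 = last generation improved,
# action = mutation level applied in the next generation.
Q = {(s, a): 0.0 for s in (0, 1) for a in ("low", "high")}
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose(state):
    """Epsilon-greedy action selection."""
    if random.random() < eps:
        return random.choice(["low", "high"])
    return max(("low", "high"), key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    """Off-policy Q-learning update."""
    target = reward + gamma * max(Q[(next_state, a)] for a in ("low", "high"))
    Q[(state, action)] += alpha * (target - Q[(state, action)])

# Simulated GA generations; in RL+GA the reward would be the
# makespan improvement achieved by the chosen operator setting.
state = 0
for _ in range(100):
    a = choose(state)
    reward = random.gauss(1.0 if a == "high" and state == 0 else 0.5, 0.1)
    nxt = random.randint(0, 1)
    update(state, a, reward, nxt)
    state = nxt
print(Q)
```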
arXiv Detail & Related papers (2023-11-10T08:51:42Z)
- Towards Self-adaptive Mutation in Evolutionary Multi-Objective Algorithms [10.609857097723266]
We study how self-adaptation influences multi-objective evolutionary algorithms.
We show that adapting the mutation rate based on single-objective optimization and hypervolume can speed up the convergence of GSEMO.
We propose a GSEMO with self-adaptive mutation, which considers optimizing for single objectives and adjusts the mutation rate for each solution individually.
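A sketch of per-solution self-adaptive mutation on bitstrings, assuming the common double-or-halve rate perturbation; the details differ from the paper's exact GSEMO variant.

```python
import random

def self_adaptive_mutate(bits: list[int], rate: float, n: int):
    """Mutate a bitstring with its own rate; the rate itself is
    inherited and perturbed (doubled or halved), as in self-adaptive EAs."""
    new_rate = min(0.5, max(1.0 / n, rate * random.choice((0.5, 2.0))))
    child = [b ^ (random.random() < new_rate) for b in bits]
    return child, new_rate

n = 20
x, r = [random.randint(0, 1) for _ in range(n)], 1.0 / n
child, r2 = self_adaptive_mutate(x, r, n)
print(sum(c != p for c, p in zip(child, x)), r2)   # flips made, inherited rate
```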
arXiv Detail & Related papers (2023-03-08T14:26:46Z)
- Manifold Gaussian Variational Bayes on the Precision Matrix [70.44024861252554]
We propose an optimization algorithm for Variational Inference (VI) in complex models.
We develop an efficient algorithm for Gaussian Variational Inference whose updates satisfy the positive definite constraint on the variational covariance matrix.
Due to its black-box nature, MGVBP stands as a ready-to-use solution for VI in complex models.
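One standard way to satisfy the positive-definite constraint is to parameterize the precision matrix through a Cholesky factor with a positive diagonal, as sketched below; this illustrates the constraint handling only, not MGVBP's manifold updates.

```python
import numpy as np

def pd_precision(theta: np.ndarray, d: int) -> np.ndarray:
    """Map an unconstrained vector to a positive definite precision
    matrix via a Cholesky factor with softplus-positive diagonal."""
    L = np.zeros((d, d))
    L[np.tril_indices(d)] = theta                 # fill lower triangle
    L[np.diag_indices(d)] = np.log1p(np.exp(np.diag(L)))   # softplus > 0
    return L @ L.T                                # guaranteed PD

d = 3
theta = np.random.default_rng(1).normal(size=d * (d + 1) // 2)
P = pd_precision(theta, d)
print(np.linalg.eigvalsh(P) > 0)                  # all True: PD holds
```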
arXiv Detail & Related papers (2022-10-26T10:12:31Z)
- Effective Mutation Rate Adaptation through Group Elite Selection [50.88204196504888]
This paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm.
GESMR co-evolves a population of solutions and a population of mutation rates (MRs), such that each MR is assigned to a group of solutions.
With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches.
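A toy sketch of the group structure: each mutation rate owns a group of solutions, and rates are re-seeded from the elite rate by group improvement; the selection details are simplified relative to GESMR.

```python
import random

def gesmr_step(pop, rates, fitness, group_size):
    """One GESMR-style step: rates are scored by the best fitness
    *improvement* in their group, and the elite rate seeds the next rates."""
    groups = [pop[i:i + group_size] for i in range(0, len(pop), group_size)]
    gains, new_pop = [], []
    for rate, group in zip(rates, groups):
        children = [[g + random.gauss(0, rate) for g in x] for x in group]
        gain = max(fitness(c) - fitness(x) for c, x in zip(children, group))
        gains.append(gain)
        new_pop += [max((c, x), key=fitness) for c, x in zip(children, group)]
    best = rates[gains.index(max(gains))]         # elite MR by group gain
    rates = [best * random.choice((0.5, 1.0, 2.0)) for _ in rates]
    return new_pop, rates

f = lambda x: -sum(g * g for g in x)              # toy sphere objective
pop = [[random.uniform(-3, 3) for _ in range(5)] for _ in range(12)]
rates = [0.5] * 4                                 # 4 groups of 3 solutions
for _ in range(30):
    pop, rates = gesmr_step(pop, rates, f, group_size=3)
print(round(f(max(pop, key=f)), 3))
```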
arXiv Detail & Related papers (2022-04-11T01:08:26Z)
- AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-direct, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state-of-the-art approaches in a variety of biologically motivated sequence design challenges.
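A toy sketch of an AdaLead-style step, assuming point-mutation rollouts from a near-optimal frontier; the threshold rule and the toy fitness are illustrative.

```python
import random

def adalead_step(seqs, fitness, alphabet="ACGT", threshold=0.05):
    """One AdaLead-style step: greedily expand from sequences within a
    threshold of the best score, then keep the top mutants."""
    best = max(fitness(s) for s in seqs)
    frontier = [s for s in seqs if fitness(s) >= best * (1 - threshold)]
    children = set()
    for s in frontier:
        for _ in range(8):                     # rollout: random point mutations
            i = random.randrange(len(s))
            children.add(s[:i] + random.choice(alphabet) + s[i + 1:])
    return sorted(children, key=fitness, reverse=True)[:len(seqs)]

f = lambda s: s.count("A")                     # toy fitness: maximize 'A's
pop = ["".join(random.choice("ACGT") for _ in range(10)) for _ in range(6)]
for _ in range(10):
    pop = adalead_step(pop, f)
print(pop[0], f(pop[0]))
```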
arXiv Detail & Related papers (2020-10-05T16:40:38Z)