Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach
- URL: http://arxiv.org/abs/2410.02301v1
- Date: Thu, 3 Oct 2024 08:37:02 GMT
- Title: Large Language Model Aided Multi-objective Evolutionary Algorithm: a Low-cost Adaptive Approach
- Authors: Wanyi Liu, Long Chen, Zhenzhou Tang
- Abstract summary: This study proposes a new framework that combines a large language model (LLM) with traditional evolutionary algorithms to enhance the algorithm's search capability and generalization performance.
We leverage an auxiliary evaluation function and automated prompt construction within the adaptive mechanism to flexibly adjust the utilization of the LLM.
- Score: 4.442101733807905
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-objective optimization is a common problem in practical applications, and the multi-objective evolutionary algorithm (MOEA) is considered one of the effective methods for solving such problems. However, the randomness of these algorithms sometimes prevents them from converging rapidly to the global optimum, and the design of their genetic operators often requires complicated manual tuning. To overcome this challenge, this study proposes a new framework that combines a large language model (LLM) with traditional evolutionary algorithms to enhance the algorithm's search capability and generalization performance. In our framework, we employ adaptive and hybrid mechanisms to integrate the LLM with the MOEA, thereby accelerating algorithmic convergence. Specifically, we leverage an auxiliary evaluation function and automated prompt construction within the adaptive mechanism to flexibly adjust the utilization of the LLM, generating high-quality solutions that are further refined and optimized through genetic operators. Concurrently, the hybrid mechanism aims to keep the interaction costs with the LLM as low as possible.
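To make the adaptive and hybrid mechanisms concrete, here is a minimal Python sketch of one possible LLM-in-the-loop MOEA generation. It is an illustration under stated assumptions, not the authors' implementation: the toy bi-objective function, the stall-based auxiliary trigger, and the `query_llm` stub (which merely perturbs elite solutions where a real system would parse candidates from an LLM reply) are all placeholders.

```python
# Minimal sketch of an adaptive LLM-assisted MOEA generation (illustrative only).
# Assumptions, not from the paper: a toy bi-objective minimization problem,
# blend crossover + Gaussian mutation as the genetic operators, and a stubbed
# query_llm() standing in for a real LLM call.
import random

DIM, POP, GENS = 10, 40, 50

def evaluate(x):
    # Toy bi-objective problem (ZDT1-like); replace with the real objectives.
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (DIM - 1)
    return (f1, g * (1.0 - (f1 / g) ** 0.5))

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def nondominated(pop, fits):
    return [i for i in range(len(pop))
            if not any(dominates(fits[j], fits[i]) for j in range(len(pop)) if j != i)]

def build_prompt(elites):
    # Automated prompt construction: serialize the current elite solutions.
    return "Current non-dominated solutions:\n" + "\n".join(str(e) for e in elites)

def query_llm(prompt, k, elites):
    # Stand-in for the LLM call: here we merely perturb elites. In a real
    # system this would send `prompt` to an LLM and parse k candidate vectors.
    return [[min(1.0, max(0.0, xi + random.gauss(0, 0.05)))
             for xi in random.choice(elites)] for _ in range(k)]

def crossover_mutate(p1, p2):
    child = [(a + b) / 2.0 for a, b in zip(p1, p2)]
    return [min(1.0, max(0.0, c + random.gauss(0, 0.02))) for c in child]

pop = [[random.random() for _ in range(DIM)] for _ in range(POP)]
stall, best_front_size = 0, 0
for _ in range(GENS):
    fits = [evaluate(x) for x in pop]
    front = nondominated(pop, fits)
    # Auxiliary evaluation (assumed): consult the LLM only when the front stops
    # growing, which keeps interaction costs low (the "hybrid" part).
    stall = stall + 1 if len(front) <= best_front_size else 0
    best_front_size = max(best_front_size, len(front))
    elites = [pop[i] for i in front]
    offspring = []
    if stall >= 3:
        offspring += query_llm(build_prompt(elites), k=5, elites=elites)
        stall = 0
    while len(offspring) < POP:
        offspring.append(crossover_mutate(random.choice(elites), random.choice(pop)))
    # LLM-proposed and GA-produced candidates compete together for survival.
    merged = pop + offspring
    mfits = [evaluate(x) for x in merged]
    keep = nondominated(merged, mfits)[:POP]
    pop = [merged[i] for i in keep] + random.sample(merged, POP - len(keep))
```

Here the stall counter plays the role of the auxiliary evaluation function: the LLM is consulted only when the genetic operators alone stop enlarging the non-dominated front, which is one simple way of limiting the number of LLM interactions.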
Related papers
- Deep Insights into Automated Optimization with Large Language Models and Evolutionary Algorithms [3.833708891059351]
Large Language Models (LLMs) and Evolutionary Algorithms (EAs) offer a promising new approach to overcoming the limitations of traditional optimization techniques and making optimization more automated.
LLMs act as dynamic agents that can generate, refine, and interpret optimization strategies.
EAs efficiently explore complex solution spaces through evolutionary operators.
arXiv Detail & Related papers (2024-10-28T09:04:49Z)
- Modified CMA-ES Algorithm for Multi-Modal Optimization: Incorporating Niching Strategies and Dynamic Adaptation Mechanism [0.03495246564946555]
This study modifies the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) algorithm for multi-modal optimization problems.
The enhancements focus on addressing the challenges of multiple global minima, improving the algorithm's ability to maintain diversity and explore complex fitness landscapes.
We incorporate niching strategies and dynamic adaptation mechanisms to refine the algorithm's performance in identifying and optimizing multiple global optima.
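As a rough illustration of the niching idea (not the paper's modified algorithm), the sketch below runs several independent instances of the off-the-shelf `cma` package and restarts the worse of any two niches whose means drift within an exclusion radius; the radius and the restart rule are simplified placeholders for the paper's niching and dynamic-adaptation mechanisms.

```python
# Illustrative niching wrapper around CMA-ES (not the paper's modified
# algorithm). Assumes the third-party `cma` package (pip install cma); the
# exclusion RADIUS and the restart rule below are simplified placeholders for
# the paper's niching and dynamic-adaptation mechanisms.
import numpy as np
import cma

def f(x):
    # Multi-modal toy objective (Rastrigin): many local minima, one global.
    x = np.asarray(x)
    return 10 * len(x) + float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x)))

DIM, N_NICHES, RADIUS = 5, 4, 1.0
rng = np.random.default_rng(0)

def new_niche():
    return cma.CMAEvolutionStrategy(rng.uniform(-5, 5, DIM), 0.8, {'verbose': -9})

niches = [new_niche() for _ in range(N_NICHES)]   # one CMA-ES per niche

for _ in range(100):
    for es in niches:
        if not es.stop():
            xs = es.ask()
            es.tell(xs, [f(x) for x in xs])
    # Niching: if two niche means come closer than RADIUS, restart the worse
    # one elsewhere so the niches keep covering distinct basins.
    for i in range(N_NICHES):
        for j in range(i + 1, N_NICHES):
            mi, mj = np.asarray(niches[i].mean), np.asarray(niches[j].mean)
            if np.linalg.norm(mi - mj) < RADIUS:
                worse = i if f(mi) > f(mj) else j
                niches[worse] = new_niche()

for i, es in enumerate(niches):
    print("niche", i, "best objective:", es.result.fbest)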
arXiv Detail & Related papers (2024-07-01T03:41:39Z)
- Large Language Model-Aided Evolutionary Search for Constrained Multiobjective Optimization [15.476478159958416]
We employ a large language model (LLM) to enhance evolutionary search for solving constrained multi-objective optimization problems.
Our aim is to speed up the convergence of the evolutionary population.
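A common ingredient for ranking candidates in constrained multi-objective evolutionary search, whether they come from genetic operators or from an LLM, is the constrained-dominance (feasibility-first) rule. The sketch below shows that standard rule in isolation and is not claimed to be the paper's exact mechanism.

```python
# A common building block for constrained multi-objective evolutionary search:
# the constrained-dominance rule (feasible beats infeasible, smaller violation
# beats larger, Pareto dominance breaks ties among feasible solutions). This is
# a standard technique, not necessarily the exact mechanism of the paper.
from typing import Sequence

def total_violation(constraints: Sequence[float]) -> float:
    # Constraints are assumed in the form g_i(x) <= 0; positive values violate.
    return sum(max(0.0, g) for g in constraints)

def pareto_dominates(f_a: Sequence[float], f_b: Sequence[float]) -> bool:
    return all(a <= b for a, b in zip(f_a, f_b)) and any(a < b for a, b in zip(f_a, f_b))

def constrained_dominates(f_a, g_a, f_b, g_b) -> bool:
    """True if solution A constrained-dominates solution B."""
    va, vb = total_violation(g_a), total_violation(g_b)
    if va == 0.0 and vb > 0.0:      # feasible beats infeasible
        return True
    if va > 0.0 and vb == 0.0:
        return False
    if va > 0.0 and vb > 0.0:       # both infeasible: smaller violation wins
        return va < vb
    return pareto_dominates(f_a, f_b)  # both feasible: ordinary dominance

# Usage: rank a mixed pool of GA-produced and LLM-proposed candidates.
pool = [((0.2, 0.8), (-0.1, -0.3)),   # (objectives, constraint values)
        ((0.3, 0.4), (0.2, -0.1)),    # violates the first constraint
        ((0.1, 0.9), (-0.5, -0.2))]
best = [i for i, (f, g) in enumerate(pool)
        if not any(constrained_dominates(f2, g2, f, g)
                   for j, (f2, g2) in enumerate(pool) if j != i)]
print("non-dominated under constrained dominance:", best)
```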
arXiv Detail & Related papers (2024-05-09T13:44:04Z)
- Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z)
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm achieves sublinear regret comparable to existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z)
- Federated Multi-Level Optimization over Decentralized Networks [55.776919718214224]
We study the problem of distributed multi-level optimization over a network, where agents can only communicate with their immediate neighbors.
We propose a novel gossip-based distributed multi-level optimization algorithm that enables networked agents to solve optimization problems at different levels in a single timescale.
Our algorithm achieves optimal sample complexity, scaling linearly with the network size, and demonstrates state-of-the-art performance on various applications.
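The gossip primitive underlying such methods is easy to illustrate. The sketch below runs single-level decentralized gradient descent on a ring network with a doubly stochastic mixing matrix; the ring topology, the quadratic local objectives, and the constant step size are assumptions, and the paper's multi-level, single-timescale scheme builds on top of this primitive rather than being shown here.

```python
# Minimal sketch of the gossip primitive behind decentralized optimization:
# each agent averages with its ring neighbours via a doubly stochastic mixing
# matrix, then takes a local gradient step. This is single-level decentralized
# gradient descent, not the paper's full multi-level, single-timescale scheme.
import numpy as np

N, DIM, STEPS, LR = 8, 3, 200, 0.1
rng = np.random.default_rng(1)

# Agent i holds the local objective f_i(x) = 0.5 * ||x - b_i||^2;
# the network-wide optimum is the average of the b_i.
b = rng.normal(size=(N, DIM))

# Ring topology: each agent mixes with itself and its two immediate neighbours.
W = np.zeros((N, N))
for i in range(N):
    W[i, i] = 0.5
    W[i, (i - 1) % N] = 0.25
    W[i, (i + 1) % N] = 0.25

x = rng.normal(size=(N, DIM))        # one local iterate per agent
for _ in range(STEPS):
    grads = x - b                    # gradient of each agent's local objective
    x = W @ x - LR * grads           # gossip averaging + local gradient step

print("consensus error:", np.linalg.norm(x - x.mean(axis=0)))
print("distance to optimum:", np.linalg.norm(x.mean(axis=0) - b.mean(axis=0)))
```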
arXiv Detail & Related papers (2023-10-10T00:21:10Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Multi-surrogate Assisted Efficient Global Optimization for Discrete Problems [0.9127162004615265]
This paper investigates the possible benefit of concurrently using multiple simulation-based surrogate models to solve discrete problems.
Our findings indicate that SAMA-DiEGO can rapidly converge to better solutions on a majority of the test problems.
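A rough sketch of the concurrent-surrogate idea: fit several cheap models on the evaluated points, keep whichever has the best cross-validated error, and let it pick the next discrete candidate. The model choices (random forest and k-nearest neighbours via scikit-learn), the binary toy problem, and the greedy acquisition are assumptions for illustration, not SAMA-DiEGO itself.

```python
# Illustrative multi-surrogate loop for a discrete problem: fit several cheap
# surrogates, keep the one with the best cross-validated error, and let it pick
# the next point to evaluate. Model choices and the greedy acquisition are
# assumptions for this sketch, not the SAMA-DiEGO algorithm.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
DIM = 12

def expensive_f(x):                      # binary toy problem: noisy one-max
    return -(x.sum()) + 0.1 * rng.normal()

X = rng.integers(0, 2, size=(15, DIM))   # initial design
y = np.array([expensive_f(x) for x in X])

for _ in range(20):
    surrogates = [RandomForestRegressor(n_estimators=50, random_state=0),
                  KNeighborsRegressor(n_neighbors=3)]
    # Concurrent surrogates: score each by cross-validation, keep the best.
    scores = [cross_val_score(m, X, y, cv=3, scoring="neg_mean_squared_error").mean()
              for m in surrogates]
    model = surrogates[int(np.argmax(scores))]
    model.fit(X, y)
    # Acquisition (simplified): rank a random pool of discrete candidates by
    # predicted value and evaluate the most promising one.
    pool = rng.integers(0, 2, size=(200, DIM))
    x_next = pool[int(np.argmin(model.predict(pool)))]
    X = np.vstack([X, x_next])
    y = np.append(y, expensive_f(x_next))

print("best value found:", y.min())
```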
arXiv Detail & Related papers (2022-12-13T09:10:08Z)
- Enhanced Innovized Repair Operator for Evolutionary Multi- and Many-objective Optimization [5.885238773559015]
"Innovization" is a task of learning common relationships among some or all of the Pareto-optimal (PO) solutions in optimisation problems.
Recent studies have shown that a chronological sequence of non-dominated solutions also possess salient patterns that can be used to learn problem features.
We propose a machine-learning- (ML-) assisted modelling approach that learns the modifications in design variables needed to advance population members towards the Pareto-optimal set.
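A hedged sketch of that idea: pair solutions from an earlier archive with their nearest neighbours in a later archive, fit a regressor on those pairs, and use its prediction to nudge (repair) new offspring toward the Pareto set. The nearest-neighbour pairing, the random-forest model, and the blend factor `alpha` are assumptions, not the paper's exact operator.

```python
# Sketch of an ML-assisted repair idea in the spirit of innovization: learn a
# mapping from earlier non-dominated solutions to later ones, then use the
# learned map to nudge ("repair") new offspring toward the Pareto set. The
# nearest-neighbour pairing and the blend factor are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
DIM = 6

# Stand-ins for two archives from different generations of an EMO run:
# later solutions are (by construction here) closer to a known optimum at 0.5.
X_early = rng.uniform(0, 1, size=(60, DIM))
X_late = 0.5 + 0.4 * (X_early - 0.5) + 0.02 * rng.normal(size=X_early.shape)

# Pair each early solution with its nearest later solution and learn the shift.
dists = np.linalg.norm(X_early[:, None, :] - X_late[None, :, :], axis=2)
targets = X_late[dists.argmin(axis=1)]
repair_model = RandomForestRegressor(n_estimators=100, random_state=0)
repair_model.fit(X_early, targets)

def innovized_repair(offspring, alpha=0.5):
    """Move offspring part-way toward the model's predicted 'advanced' variables."""
    predicted = repair_model.predict(np.atleast_2d(offspring))[0]
    return (1 - alpha) * np.asarray(offspring) + alpha * predicted

child = rng.uniform(0, 1, size=DIM)
print("before:", np.round(child, 3))
print("after :", np.round(innovized_repair(child), 3))
```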
arXiv Detail & Related papers (2020-11-21T10:29:15Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems with real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
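The self-adaptation ingredient can be sketched with a jDE-style Differential Evolution loop in which each individual carries its own F and CR that survive only when the trial vector wins; the sketch is single-population and unconstrained, unlike EOS's parallel, multi-population design, and the resampling probabilities are assumptions.

```python
# Sketch of self-adaptive Differential Evolution (jDE-style): each individual
# carries its own F and CR, occasionally resampled, kept only if the trial
# vector replaces the parent. Single-population and unconstrained for brevity,
# unlike EOS's parallel multi-population design.
import random

DIM, POP, GENS = 10, 30, 200

def sphere(x):                      # toy objective; replace with the real one
    return sum(xi * xi for xi in x)

pop = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(POP)]
F = [0.5] * POP
CR = [0.9] * POP
fit = [sphere(x) for x in pop]

for _ in range(GENS):
    for i in range(POP):
        # Self-adaptation: with small probability, resample this member's F/CR.
        Fi = random.uniform(0.1, 1.0) if random.random() < 0.1 else F[i]
        CRi = random.random() if random.random() < 0.1 else CR[i]
        a, b, c = random.sample([j for j in range(POP) if j != i], 3)
        jrand = random.randrange(DIM)
        trial = [pop[a][j] + Fi * (pop[b][j] - pop[c][j])
                 if random.random() < CRi or j == jrand else pop[i][j]
                 for j in range(DIM)]
        f_trial = sphere(trial)
        if f_trial <= fit[i]:       # greedy selection keeps the winning F/CR too
            pop[i], fit[i], F[i], CR[i] = trial, f_trial, Fi, CRi

print("best fitness:", min(fit))
```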
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
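The deep-unfolding recipe itself is compact: unroll a fixed number of iterations of a classic algorithm as network layers and make per-layer parameters learnable. The sketch below unrolls plain gradient descent on a least-squares problem as a stand-in for the WMMSE iterations of the paper, with one trainable step size per layer; the problem, layer count, and training setup are assumptions.

```python
# Generic deep-unfolding sketch: unroll a fixed number of iterations of a
# classic algorithm as network layers with learnable per-layer parameters, then
# train end-to-end on sampled problem instances. Plain gradient descent on a
# least-squares problem stands in here for the paper's WMMSE iterations.
import torch
import torch.nn as nn

class UnfoldedGD(nn.Module):
    def __init__(self, n_layers=5):
        super().__init__()
        # One trainable step size per unrolled iteration.
        self.steps = nn.Parameter(0.05 * torch.ones(n_layers))

    def forward(self, A, y):
        x = torch.zeros(A.shape[0], A.shape[2], 1)
        for t in range(self.steps.shape[0]):
            grad = A.transpose(1, 2) @ (A @ x - y)   # one classic GD iteration
            x = x - self.steps[t] * grad
        return x

torch.manual_seed(0)
net = UnfoldedGD(n_layers=5)
opt = torch.optim.Adam(net.parameters(), lr=1e-2)

for _ in range(300):
    A = torch.randn(32, 8, 4)               # batch of random problem instances
    x_true = torch.randn(32, 4, 1)
    y = A @ x_true
    loss = ((net(A, y) - x_true) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training loss:", float(loss))
```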
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
This list is automatically generated from the titles and abstracts of the papers on this site.