Towards Next Era of Multi-objective Optimization: Large Language Models as Architects of Evolutionary Operators
- URL: http://arxiv.org/abs/2406.08987v1
- Date: Thu, 13 Jun 2024 10:35:16 GMT
- Title: Towards Next Era of Multi-objective Optimization: Large Language Models as Architects of Evolutionary Operators
- Authors: Yuxiao Huang, Shenghao Wu, Wenjie Zhang, Jibin Wu, Liang Feng, Kay Chen Tan
- Abstract summary: Multi-objective optimization problems (MOPs) are prevalent in various real-world applications.
Large Language Models (LLMs) have revolutionized software engineering by enabling the autonomous development and refinement of programs.
We propose a new LLM-based framework for evolving EA operators, designed to address a wide array of MOPs.
- Score: 28.14607885386587
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-objective optimization problems (MOPs) are prevalent in various real-world applications, necessitating sophisticated solutions that balance conflicting objectives. Traditional evolutionary algorithms (EAs), while effective, often rely on domain-specific expert knowledge and iterative tuning, which can impede innovation when encountering novel MOPs. Very recently, the emergence of Large Language Models (LLMs) has revolutionized software engineering by enabling the autonomous development and refinement of programs. Capitalizing on this advancement, we propose a new LLM-based framework for evolving EA operators, designed to address a wide array of MOPs. This framework facilitates the production of EA operators without the extensive demands for expert intervention, thereby streamlining the design process. To validate the efficacy of our approach, we have conducted extensive empirical studies across various categories of MOPs. The results demonstrate the robustness and superior performance of our LLM-evolved operators.
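To make the idea above concrete, here is a minimal, hypothetical sketch of the kind of loop the abstract describes: an LLM proposes a variation operator as code, the operator drives a toy evolutionary run, and the resulting population is scored with a hypervolume indicator. Every name here (`ask_llm`, the canned crossover, the ZDT1-style benchmark, the steady-state loop) is an illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

def ask_llm(prompt: str) -> str:
    # Hypothetical stand-in for a chat-completion call. It returns a canned
    # candidate so this sketch runs end to end; in the paper's setting the
    # LLM would propose and iteratively refine such operator code.
    return (
        "def crossover(p1, p2, rng):\n"
        "    alpha = rng.uniform(0.0, 1.0, size=p1.shape)\n"
        "    return alpha * p1 + (1.0 - alpha) * p2\n"
    )

def compile_operator(src: str):
    # Turn LLM-emitted source into a callable. Real systems must sandbox this.
    scope = {}
    exec(src, {"np": np}, scope)
    return scope["crossover"]

def evaluate(x):
    # ZDT1-style toy bi-objective benchmark (both objectives minimized).
    f1 = x[0]
    g = 1.0 + 9.0 * np.mean(x[1:])
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

def hypervolume_2d(front, ref=(1.1, 12.0)):
    # Exact 2-D hypervolume: sweep points by f1, add dominated rectangles.
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in sorted(front, key=lambda p: p[0]):
        if f2 < prev_f2:
            hv += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return hv

def score_operator(crossover, rng, pop_size=40, dim=10, gens=500):
    # Crude steady-state loop: the candidate operator breeds offspring that
    # overwrite random individuals; a real MOEA would use proper selection.
    pop = rng.uniform(0.0, 1.0, size=(pop_size, dim))
    for _ in range(gens):
        i, j = rng.integers(pop_size, size=2)
        child = np.clip(crossover(pop[i], pop[j], rng), 0.0, 1.0)
        pop[rng.integers(pop_size)] = child
    return hypervolume_2d([evaluate(x) for x in pop])

rng = np.random.default_rng(0)
op = compile_operator(ask_llm("Write a numpy crossover crossover(p1, p2, rng)."))
print("hypervolume of final population:", score_operator(op, rng))
```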
Related papers
- Large Language Model Agent as a Mechanical Designer [7.136205674624813]
In this study, we present a novel approach that integrates pre-trained LLMs with a FEM module.
The FEM module evaluates each design and provides essential feedback, guiding the LLMs to continuously learn, plan, generate, and optimize designs without the need for domain-specific training.
Our results reveal that these LLM-based agents can successfully generate truss designs that comply with natural language specifications with a success rate of up to 90%, which varies according to the applied constraints.
arXiv Detail & Related papers (2024-04-26T16:41:24Z) - Large Multimodal Agents: A Survey [78.81459893884737]
Large language models (LLMs) have achieved superior performance in powering text-based AI agents.
There is an emerging research trend focused on extending these LLM-powered AI agents into the multimodal domain.
This review aims to provide valuable insights and guidelines for future research in this rapidly evolving field.
arXiv Detail & Related papers (2024-02-23T06:04:23Z) - Model Composition for Multimodal Large Language Models [73.70317850267149]
We propose a new paradigm, model composition of existing MLLMs, that creates a new model retaining the modal understanding capabilities of each original model.
Our basic implementation, NaiveMC, demonstrates the effectiveness of this paradigm by reusing modality encoders and merging LLM parameters.
arXiv Detail & Related papers (2024-02-20T06:38:10Z) - Are Large Language Models Good Prompt Optimizers? [65.48910201816223]
We conduct a study to uncover the actual mechanism of LLM-based Prompt Optimization.
Our findings reveal that the LLMs struggle to identify the true causes of errors during reflection, tending to be biased by their own prior knowledge.
We introduce a new "Automatic Behavior Optimization" paradigm, which directly optimizes the target model's behavior in a more controllable manner.
arXiv Detail & Related papers (2024-02-03T09:48:54Z) - Evolutionary Multi-Objective Optimization of Large Language Model Prompts for Balancing Sentiments [0.0]
We propose an evolutionary multi-objective (EMO) approach specifically tailored for prompt optimization, called EMO-Prompts.
Our results demonstrate that EMO-Prompts effectively generates prompts capable of guiding the LLM to produce texts embodying two conflicting emotions simultaneously.
arXiv Detail & Related papers (2024-01-18T10:21:15Z) - Machine Learning Insides OptVerse AI Solver: Design Principles and Applications [74.67495900436728]
We present a comprehensive study on the integration of machine learning (ML) techniques into Huawei Cloud's OptVerse AI solver.
We showcase our methods for generating complex SAT and MILP instances utilizing generative models that mirror the multifaceted structures of real-world problems.
We detail the incorporation of state-of-the-art parameter tuning algorithms which markedly elevate solver performance.
arXiv Detail & Related papers (2024-01-11T15:02:15Z) - Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts [74.40198929049959]
Large multi-modal models (LMMs) exhibit remarkable performance across numerous tasks.
However, generalist LMMs often suffer from performance degradation when tuned over a large collection of tasks.
We propose Omni-SMoLA, an architecture that uses the Soft MoE approach to mix many multimodal low-rank experts (a toy sketch of this soft mixing appears after this list).
arXiv Detail & Related papers (2023-12-01T23:04:27Z) - Large Language Model for Multi-objective Evolutionary Optimization [26.44390674048544]
Multiobjective evolutionary algorithms (MOEAs) are major methods for solving multiobjective optimization problems (MOPs).
Recent attempts have been made to replace the manually designed operators in MOEAs with learning-based operators.
This work investigates a novel approach that leverages a powerful large language model (LLM) to design MOEA operators (a hypothetical prompt sketch appears after this list).
arXiv Detail & Related papers (2023-10-19T07:46:54Z) - Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
arXiv Detail & Related papers (2023-09-30T07:11:39Z) - A Survey on Learnable Evolutionary Algorithms for Scalable Multiobjective Optimization [0.0]
Multiobjective evolutionary algorithms (MOEAs) have been adopted to solve various multiobjective optimization problems (MOPs).
However, these progressively improved MOEAs have not necessarily been equipped with scalable, learnable problem-solving strategies.
Different scenarios demand divergent thinking to design new, powerful MOEAs that solve them effectively.
Research into learnable MOEAs that arm themselves with machine learning techniques for scaling up to large MOPs has received extensive attention in the field of evolutionary computation.
arXiv Detail & Related papers (2022-06-23T08:16:01Z) - Decomposition Multi-Objective Evolutionary Optimization: From State-of-the-Art to Future Opportunities [5.760976250387322]
We present a survey of the development of MOEA/D from its origin to the current state-of-the-art approaches.
Selected major developments of MOEA/D are reviewed according to its core design components.
We shed some light on emerging directions for future development (the Tchebycheff scalarization underlying MOEA/D is sketched after this list).
arXiv Detail & Related papers (2021-08-21T22:21:44Z)
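A side note on the Omni-SMoLA entry above: reading its abstract alone, the "soft mixture of low-rank experts" plausibly amounts to a router that softly blends several LoRA-style low-rank deltas on top of a frozen base weight. The sketch below is that reading, with hypothetical names and shapes; it is not the paper's actual architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def smola_layer(x, W, experts, router):
    # x: input (d_in,); W: frozen base weight (d_out, d_in);
    # experts: list of LoRA pairs (A: (r, d_in), B: (d_out, r));
    # router: (n_experts, d_in), producing soft (not top-k) mixing weights.
    gate = softmax(router @ x)
    delta = sum(g * (B @ (A @ x)) for g, (A, B) in zip(gate, experts))
    return W @ x + delta

# Tiny smoke test with random shapes.
rng = np.random.default_rng(0)
d_in, d_out, r, n = 8, 4, 2, 3
x = rng.normal(size=d_in)
W = rng.normal(size=(d_out, d_in))
experts = [(rng.normal(size=(r, d_in)), rng.normal(size=(d_out, r))) for _ in range(n)]
router = rng.normal(size=(n, d_in))
print(smola_layer(x, W, experts, router))
```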
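For the "Large Language Model for Multi-objective Evolutionary Optimization" entry, one way to read "LLM-designed operators" is to let the LLM itself act as a variation operator via prompting. A hypothetical prompt builder (format and wording are my own, and parsing or validating the reply is elided) might look like:

```python
def llm_variation_prompt(parents):
    # parents: list of (solution_vector, scalar_fitness) pairs, e.g. the
    # scalarized fitness of one decomposition subproblem (lower is better).
    lines = [
        f"solution: {[round(v, 3) for v in sol]}, fitness: {fit:.4f}"
        for sol, fit in parents
    ]
    return (
        "You are optimizing a minimization problem. "
        "Here are some candidate solutions and their fitness values:\n"
        + "\n".join(lines)
        + "\nPropose one new solution (a list of numbers in [0, 1]) likely "
          "to achieve lower fitness. Reply with the list only."
    )

print(llm_variation_prompt([([0.1, 0.9, 0.5], 0.82), ([0.3, 0.2, 0.4], 0.47)]))
```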
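Finally, the MOEA/D survey entry rests on decomposition: an MOP is split into many scalar subproblems, classically via the Tchebycheff scalarization g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|. That formula is standard; the helper below simply restates it.

```python
import numpy as np

def tchebycheff(f, w, z_star):
    # g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|
    # f: objective vector of x; w: weight vector of one subproblem;
    # z_star: ideal point. MOEA/D minimizes one such g per weight vector,
    # letting neighboring subproblems share offspring during the search.
    return float(np.max(np.asarray(w) * np.abs(np.asarray(f) - np.asarray(z_star))))

print(tchebycheff([0.6, 0.3], [0.5, 0.5], [0.0, 0.0]))  # -> 0.3
```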