ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
- URL: http://arxiv.org/abs/2402.01145v3
- Date: Mon, 14 Oct 2024 13:50:46 GMT
- Title: ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
- Authors: Haoran Ye, Jiarui Wang, Zhiguang Cao, Federico Berto, Chuanbo Hua, Haeyeon Kim, Jinkyoo Park, Guojie Song
- Abstract summary: This paper introduces Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics, featuring minimal manual intervention and open-ended heuristic spaces.
To empower LHHs, we present Reflective Evolution (ReEvo), a novel integration of evolutionary search for efficiently exploring the heuristic space, and LLM reflections to provide verbal gradients within the space.
- Score: 35.39046514910755
- License:
- Abstract: The omnipresence of NP-hard combinatorial optimization problems (COPs) compels domain experts to engage in trial-and-error heuristic design. The long-standing endeavor of design automation has gained new momentum with the rise of large language models (LLMs). This paper introduces Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics that leverages LLMs for heuristic generation, featuring minimal manual intervention and open-ended heuristic spaces. To empower LHHs, we present Reflective Evolution (ReEvo), a novel integration of evolutionary search for efficiently exploring the heuristic space, and LLM reflections to provide verbal gradients within the space. Across five heterogeneous algorithmic types, six different COPs, and both white-box and black-box views of COPs, ReEvo yields state-of-the-art and competitive meta-heuristics, evolutionary algorithms, heuristics, and neural solvers, while being more sample-efficient than prior LHHs.
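To make the abstract's idea of combining evolutionary search with "verbal gradients" concrete, the sketch below outlines one possible reflective-evolution loop in this spirit: an LLM compares two parent heuristics (short-term reflection), accumulates design insights across generations (long-term reflection), and uses them to guide crossover and elitist mutation. The helpers `query_llm` and `evaluate`, the prompt wording, and the selection scheme are illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of a reflective-evolution loop in the spirit of ReEvo.
# `query_llm`, `evaluate`, and all prompt wording are hypothetical stand-ins.
import random

def reflective_evolution(query_llm, evaluate, seed_heuristics, generations=10):
    """Evolve heuristic code strings with LLM crossover/mutation guided by reflections.

    query_llm(prompt: str) -> str   # hypothetical LLM call
    evaluate(code: str) -> float    # lower objective value is better
    Requires at least two seed heuristics.
    """
    population = [(h, evaluate(h)) for h in seed_heuristics]
    long_term_notes = ""  # accumulated design insights ("verbal gradients")

    for _ in range(generations):
        # Select two parents and rank them by fitness (better parent first).
        better, worse = sorted(random.sample(population, 2), key=lambda x: x[1])

        # Short-term reflection: ask the LLM why the better heuristic wins.
        short_reflection = query_llm(
            "Compare these two heuristics and explain why the first performs better.\n"
            f"Better:\n{better[0]}\nWorse:\n{worse[0]}"
        )

        # Crossover guided by the short-term reflection.
        child = query_llm(
            f"Using this insight:\n{short_reflection}\n"
            "Write an improved heuristic combining the two parents above. Return code only."
        )

        # Long-term reflection: fold the new insight into accumulated notes,
        # then apply an elitist mutation to the best heuristic found so far.
        long_term_notes = query_llm(
            f"Previous notes:\n{long_term_notes}\nNew insight:\n{short_reflection}\n"
            "Summarize the key design principles so far."
        )
        elite = min(population, key=lambda x: x[1])[0]
        mutant = query_llm(
            f"Guided by these principles:\n{long_term_notes}\n"
            f"Mutate this heuristic to improve it. Return code only.\n{elite}"
        )

        # Evaluate offspring and keep the population at a fixed size.
        population += [(child, evaluate(child)), (mutant, evaluate(mutant))]
        population = sorted(population, key=lambda x: x[1])[:len(seed_heuristics)]

    return min(population, key=lambda x: x[1])
```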
Related papers
- Unified Generative and Discriminative Training for Multi-modal Large Language Models [88.84491005030316]
Generative training has enabled Vision-Language Models (VLMs) to tackle various complex tasks.
Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval.
This paper proposes a unified approach that integrates the strengths of both paradigms.
arXiv Detail & Related papers (2024-11-01T01:51:31Z)
- Multi-objective Evolution of Heuristic Using Large Language Model [29.337470185034555]
Heuristics are commonly used to tackle diverse search and optimization problems.
Recent works have incorporated large language models (LLMs) into automatic heuristic search, leveraging their powerful language and coding capacity.
We propose to model search as a multi-objective optimization problem and consider introducing other practical criteria beyond optimal performance.
arXiv Detail & Related papers (2024-09-25T12:32:41Z)
- Understanding the Importance of Evolutionary Search in Automated Heuristic Design with Large Language Models [24.447539327343563]
Automated heuristic design (AHD) has gained considerable attention for its potential to automate the development of effective heuristics.
The recent advent of large language models (LLMs) has paved a new avenue for AHD, with initial efforts focusing on framing AHD as an evolutionary program search problem.
arXiv Detail & Related papers (2024-07-15T16:21:20Z)
- Generation Meets Verification: Accelerating Large Language Model Inference with Smart Parallel Auto-Correct Decoding [11.832919020149891]
This research aims to accelerate the inference speed of large language models (LLMs) with billions of parameters.
We propose Smart Parallel Auto-Correct Decoding (SPACE).
arXiv Detail & Related papers (2024-02-19T03:39:10Z)
- When large language models meet evolutionary algorithms [48.213640761641926]
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text.
Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
Motivated by the shared collective and directional nature of text generation and evolution, this paper illustrates the parallels between LLMs and EAs.
arXiv Detail & Related papers (2024-01-19T05:58:30Z)
- Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model [22.64392837434924]
EoH represents the ideas of heuristics in natural language, termed thoughts.
These thoughts are then translated into executable code by Large Language Models (LLMs).
EoH significantly outperforms widely-used human hand-crafted baseline algorithms for the online bin packing problem.
arXiv Detail & Related papers (2024-01-04T04:11:59Z)
- Making LLaMA SEE and Draw with SEED Tokenizer [69.1083058794092]
We introduce SEED, an elaborate image tokenizer that empowers Large Language Models with the ability to SEE and Draw.
With SEED tokens, the LLM is able to perform scalable multimodal autoregression under its original training recipe.
SEED-LLaMA has exhibited compositional emergent abilities such as multi-turn in-context multimodal generation.
arXiv Detail & Related papers (2023-10-02T14:03:02Z)
- Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration [83.4031923134958]
Corex is a suite of novel general-purpose strategies that transform Large Language Models into autonomous agents.
Inspired by human behaviors, Corex is constituted by diverse collaboration paradigms including Debate, Review, and Retrieve modes.
We demonstrate that orchestrating multiple LLMs to work in concert yields substantially better performance compared to existing methods.
arXiv Detail & Related papers (2023-09-30T07:11:39Z)
- Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers [70.18534453485849]
EvoPrompt is a framework for discrete prompt optimization.
It borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence.
It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z)