ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
- URL: http://arxiv.org/abs/2402.01145v3
- Date: Mon, 14 Oct 2024 13:50:46 GMT
- Title: ReEvo: Large Language Models as Hyper-Heuristics with Reflective Evolution
- Authors: Haoran Ye, Jiarui Wang, Zhiguang Cao, Federico Berto, Chuanbo Hua, Haeyeon Kim, Jinkyoo Park, Guojie Song,
- Abstract summary: This paper introduces Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics, featuring minimal manual intervention and open-ended heuristic spaces.
To empower LHHs, we present Reflective Evolution (ReEvo), a novel integration of evolutionary search for efficiently exploring the heuristic space, and reflections to provide verbal gradients within the space.
- Score: 35.39046514910755
- Abstract: The omnipresence of NP-hard combinatorial optimization problems (COPs) compels domain experts to engage in trial-and-error heuristic design. The long-standing endeavor of design automation has gained new momentum with the rise of large language models (LLMs). This paper introduces Language Hyper-Heuristics (LHHs), an emerging variant of Hyper-Heuristics that leverages LLMs for heuristic generation, featuring minimal manual intervention and open-ended heuristic spaces. To empower LHHs, we present Reflective Evolution (ReEvo), a novel integration of evolutionary search for efficiently exploring the heuristic space, and LLM reflections to provide verbal gradients within the space. Across five heterogeneous algorithmic types, six different COPs, and both white-box and black-box views of COPs, ReEvo yields state-of-the-art and competitive meta-heuristics, evolutionary algorithms, heuristics, and neural solvers, while being more sample-efficient than prior LHHs.
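The abstract's core loop, evolutionary search plus LLM "reflections" acting as verbal gradients, can be caricatured in a few lines. The sketch below is a hypothetical illustration, not the paper's actual interface: the function names, the stubbed reflection/crossover/mutation operators, and the toy one-dimensional fitness landscape are all assumptions standing in for real LLM calls and real heuristic code.

```python
import random

def reevo(evaluate, seed_heuristics, llm_crossover, llm_mutate, reflect,
          pop_size=4, generations=3, rng=None):
    """Minimal reflective-evolution sketch (hypothetical API).

    evaluate: heuristic -> fitness (lower is better).
    llm_crossover / llm_mutate: stand-ins for LLM calls that produce new
    heuristics, conditioned on a verbal reflection.
    reflect: compares two evaluated parents and returns a textual hint
    (a "verbal gradient") about why one outperformed the other.
    """
    rng = rng or random.Random(0)
    pop = [(h, evaluate(h)) for h in seed_heuristics]
    for _ in range(generations):
        offspring = []
        while len(offspring) < pop_size:
            p1, p2 = rng.sample(pop, 2)
            hint = reflect(p1, p2)                  # short-term reflection
            child = llm_crossover(p1[0], p2[0], hint)
            child = llm_mutate(child, hint)         # refine with the same hint
            offspring.append((child, evaluate(child)))
        # elitist survival: keep the best pop_size of parents + offspring
        pop = sorted(pop + offspring, key=lambda x: x[1])[:pop_size]
    return pop[0]  # best (heuristic, fitness) pair

# Toy instantiation: "heuristics" are floats, fitness is distance to 3.
evaluate = lambda x: (x - 3.0) ** 2

def llm_crossover(a, b, hint):      # stub: blend the two parents
    return (a + b) / 2

def llm_mutate(h, hint):            # stub: step in the hinted direction
    return h + (0.5 if "increase" in hint else -0.5)

def reflect(p1, p2):                # stub: one-word verbal gradient
    better, worse = (p1, p2) if p1[1] < p2[1] else (p2, p1)
    return "increase" if better[0] > worse[0] else "decrease"

best, fit = reevo(evaluate, [0.0, 1.0, 5.0, 6.0],
                  llm_crossover, llm_mutate, reflect)
```

Because survival is elitist, the best fitness in the population is non-increasing across generations, so the returned heuristic is at least as good as the best seed.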
Related papers
- Think More, Hallucinate Less: Mitigating Hallucinations via Dual Process of Fast and Slow Thinking [124.69672273754144]
HaluSearch is a novel framework that incorporates tree search-based algorithms.
It frames text generation as a step-by-step reasoning process.
We introduce a hierarchical thinking system switch mechanism inspired by the dual process theory in cognitive science.
arXiv Detail & Related papers (2025-01-02T15:36:50Z) - HSEvo: Elevating Automatic Heuristic Design with Diversity-Driven Harmony Search and Genetic Algorithm Using LLMs [7.04316974339151]
Heuristic Design is an active research area due to its utility in solving complex search and NP-hard optimization problems.
We introduce HSEvo, an adaptive LLM-EPS framework that maintains a balance between diversity and convergence with a harmony search.
arXiv Detail & Related papers (2024-12-19T16:07:00Z) - Unified Generative and Discriminative Training for Multi-modal Large Language Models [88.84491005030316]
Generative training has enabled Vision-Language Models (VLMs) to tackle various complex tasks.
Discriminative training, exemplified by models like CLIP, excels in zero-shot image-text classification and retrieval.
This paper proposes a unified approach that integrates the strengths of both paradigms.
arXiv Detail & Related papers (2024-11-01T01:51:31Z) - Multi-objective Evolution of Heuristic Using Large Language Model [29.337470185034555]
We model the search as a multi-objective optimization problem and consider introducing additional practical criteria beyond optimal performance.
We propose the first multi-objective search framework, Multi-objective Evolution of Heuristic (MEoH)
arXiv Detail & Related papers (2024-09-25T12:32:41Z) - Understanding the Importance of Evolutionary Search in Automated Heuristic Design with Large Language Models [24.447539327343563]
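The blurb above frames heuristic search as multi-objective optimization. As a generic illustration of what selection means in that setting (not MEoH's actual mechanism), a Pareto-dominance filter keeps every candidate that no other candidate beats on all objectives at once:

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective.

    a dominates b iff a is no worse on every objective and strictly
    better on at least one.
    """
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. (runtime, memory) pairs for four candidate heuristics
front = pareto_front([(1, 5), (2, 2), (5, 1), (3, 3)])
```

Here (3, 3) is dominated by (2, 2) and drops out, while the other three trade off the two objectives and all survive.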
Automated heuristic design (AHD) has gained considerable attention for its potential to automate the development of effective heuristics.
The recent advent of large language models (LLMs) has paved a new avenue for AHD, with initial efforts focusing on framing AHD as an evolutionary program search problem.
arXiv Detail & Related papers (2024-07-15T16:21:20Z) - When large language models meet evolutionary algorithms [48.213640761641926]
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text.
Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
Motivated by the shared collective nature and directionality of text generation and evolution, this paper illustrates the parallels between LLMs and EAs.
arXiv Detail & Related papers (2024-01-19T05:58:30Z) - Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model [22.64392837434924]
EoH represents heuristic ideas in natural language, termed thoughts.
They are translated into executable codes by Large Language Models (LLMs)
EoH significantly outperforms widely-used human hand-crafted baseline algorithms for the online bin packing problem.
arXiv Detail & Related papers (2024-01-04T04:11:59Z) - Making LLaMA SEE and Draw with SEED Tokenizer [69.1083058794092]
We introduce SEED, an elaborate image tokenizer that empowers Large Language Models with the ability to SEE and Draw.
With SEED tokens, an LLM is able to perform scalable multimodal autoregression under its original training recipe.
SEED-LLaMA has exhibited compositional emergent abilities such as multi-turn in-context multimodal generation.
arXiv Detail & Related papers (2023-10-02T14:03:02Z) - Connecting Large Language Models with Evolutionary Algorithms Yields Powerful Prompt Optimizers [70.18534453485849]
EvoPrompt is a framework for discrete prompt optimization.
It borrows the idea of evolutionary algorithms (EAs) as they exhibit good performance and fast convergence.
It significantly outperforms human-engineered prompts and existing methods for automatic prompt generation.
arXiv Detail & Related papers (2023-09-15T16:50:09Z)
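The EvoPrompt blurb above describes an EA loop over discrete prompts. The toy, LLM-free caricature below shows the shape of such a loop; the `llm_evolve` operator here is a simple word-merge stub that merely stands in for the LLM-driven crossover-and-mutation step EvoPrompt actually uses, and the keyword-counting score is an assumed placeholder for real task accuracy.

```python
import random

def evolve_prompts(score, seeds, llm_evolve, iters=10, rng=None):
    """Toy sketch of EA-style discrete prompt search (stubbed LLM)."""
    rng = rng or random.Random(0)
    pop = sorted(seeds, key=score, reverse=True)
    for _ in range(iters):
        p1, p2 = rng.sample(pop, 2)
        child = llm_evolve(p1, p2)   # LLM plays crossover + mutation
        # elitist truncation back to the original population size
        pop = sorted(pop + [child], key=score, reverse=True)[:len(seeds)]
    return pop[0]

# Placeholder fitness: count desirable keywords in the prompt.
score = lambda p: sum(w in p for w in ("step", "carefully"))

# Stub operator: merge the two parents' vocabularies.
llm_evolve = lambda a, b: " ".join(sorted(set(a.split()) | set(b.split())))

best = evolve_prompts(score, ["answer the question",
                              "think step by step",
                              "respond carefully"],
                      llm_evolve)
```

Truncation selection keeps the best prompt seen so far, so the returned prompt scores at least as well as the best seed.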
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.