QUBE: Enhancing Automatic Heuristic Design via Quality-Uncertainty Balanced Evolution
- URL: http://arxiv.org/abs/2412.20694v4
- Date: Fri, 21 Feb 2025 03:28:49 GMT
- Title: QUBE: Enhancing Automatic Heuristic Design via Quality-Uncertainty Balanced Evolution
- Authors: Zijie Chen, Zhanchao Zhou, Yu Lu, Renjun Xu, Lili Pan, Zhenzhong Lan
- Abstract summary: Quality-Uncertainty Balanced Evolution (QUBE) is a novel approach that enhances LLM+EA methods by redefining the priority criterion within the FunSearch framework. QUBE employs the Quality-Uncertainty Trade-off Criterion (QUTC), based on our proposed Uncertainty-Inclusive Quality metric. Through extensive experiments on challenging NP-complete problems, QUBE demonstrates significant performance improvements over FunSearch and baseline methods.
- Score: 14.131178103518907
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Solving NP-hard problems traditionally relies on heuristics, yet manually designing effective heuristics for complex problems remains a significant challenge. While recent advancements like FunSearch have shown that large language models (LLMs) can be integrated into evolutionary algorithms (EAs) for heuristic design, their potential is hindered by limitations in balancing exploitation and exploration. We introduce Quality-Uncertainty Balanced Evolution (QUBE), a novel approach that enhances LLM+EA methods by redefining the priority criterion within the FunSearch framework. QUBE employs the Quality-Uncertainty Trade-off Criterion (QUTC), based on our proposed Uncertainty-Inclusive Quality metric, to evaluate and guide the evolutionary process. Through extensive experiments on challenging NP-complete problems, QUBE demonstrates significant performance improvements over FunSearch and baseline methods. Our code is available at https://github.com/zzjchen/QUBE_code.
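The abstract does not give the QUTC formula itself, so the following is only a minimal sketch of the general idea: parent selection that ranks candidate heuristics by quality plus an uncertainty bonus, rather than by raw quality alone. The UCB-style bonus term, the `beta` weight, and the `visits` bookkeeping are all illustrative assumptions, not the paper's actual metric.

```python
import math

def uiq_score(quality, visits, total_visits, beta=1.0):
    # Hypothetical quality-uncertainty trade-off: a heuristic's raw score
    # plus an exploration bonus that shrinks the more often it has been
    # selected as a parent (in the spirit of UCB, not the paper's formula).
    uncertainty = math.sqrt(math.log(total_visits + 1) / (visits + 1))
    return quality + beta * uncertainty

def select_parent(population):
    # Pick the candidate with the best combined score to seed the next
    # LLM mutation, instead of always exploiting the top-quality program.
    total = sum(p["visits"] for p in population)
    return max(population,
               key=lambda p: uiq_score(p["quality"], p["visits"], total))

population = [
    {"name": "h1", "quality": 0.90, "visits": 50},  # strong but heavily exploited
    {"name": "h2", "quality": 0.85, "visits": 2},   # slightly weaker, barely explored
]
parent = select_parent(population)  # the under-explored candidate wins here
```

With raw-quality ranking, `h1` would be selected every time; the uncertainty bonus lets the rarely sampled `h2` take priority, which is the exploitation-exploration balance the paper targets.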
Related papers
- Vector Quantized-Elites: Unsupervised and Problem-Agnostic Quality-Diversity Optimization [0.0]
We introduce Vector Quantized-Elites (VQ-Elites), a novel Quality-Diversity algorithm that autonomously constructs a structured behavioral space grid.
At the core of VQ-Elites is the integration of Vector Quantized Variational Autoencoders, which enables the dynamic learning of behavioral descriptors.
We validate VQ-Elites on robotic arm pose-reaching and mobile robot space-covering tasks.
arXiv Detail & Related papers (2025-04-10T18:23:19Z) - Thinking Longer, Not Larger: Enhancing Software Engineering Agents via Scaling Test-Time Compute [61.00662702026523]
We propose a unified Test-Time Compute scaling framework that leverages increased inference-time instead of larger models.
Our framework incorporates two complementary strategies: internal TTC and external TTC.
We demonstrate that our 32B model achieves a 46% issue resolution rate, surpassing significantly larger models such as DeepSeek R1 671B and OpenAI o1.
arXiv Detail & Related papers (2025-03-31T07:31:32Z) - Offline Model-Based Optimization: Comprehensive Review [61.91350077539443]
Offline optimization is a fundamental challenge in science and engineering, where the goal is to optimize black-box functions using only offline datasets.
Recent advances in model-based optimization have harnessed the generalization capabilities of deep neural networks to develop offline-specific surrogate and generative models.
Despite its growing impact in accelerating scientific discovery, the field lacks a comprehensive review.
arXiv Detail & Related papers (2025-03-21T16:35:02Z) - Leveraging Large Language Models to Develop Heuristics for Emerging Optimization Problems [0.0]
Combinatorial optimization problems often rely on heuristic algorithms to generate efficient solutions.
Recent advances in artificial intelligence have demonstrated the potential to automate heuristic generation through evolutionary frameworks.
We propose the Contextual Evolution of Heuristics framework, which incorporates problem-specific descriptions to enhance in-context learning.
arXiv Detail & Related papers (2025-03-05T10:22:49Z) - HSEvo: Elevating Automatic Heuristic Design with Diversity-Driven Harmony Search and Genetic Algorithm Using LLMs [7.04316974339151]
Heuristic Design is an active research area due to its utility in solving complex search and NP-hard optimization problems.
We introduce HSEvo, an adaptive LLM-EPS framework that maintains a balance between diversity and convergence with a harmony search.
arXiv Detail & Related papers (2024-12-19T16:07:00Z) - Lingma SWE-GPT: An Open Development-Process-Centric Language Model for Automated Software Improvement [62.94719119451089]
The Lingma SWE-GPT series learns from and simulates real-world code submission activities.
Lingma SWE-GPT 72B resolves 30.20% of GitHub issues, marking a significant improvement in automatic issue resolution.
arXiv Detail & Related papers (2024-11-01T14:27:16Z) - Multi-objective Evolution of Heuristic Using Large Language Model [29.337470185034555]
Heuristics are commonly used to tackle diverse search and optimization problems.
Recent works have incorporated large language models (LLMs) into automatic search leveraging their powerful language and coding capacity.
We propose to model search as a multi-objective optimization problem and consider introducing other practical criteria beyond optimal performance.
arXiv Detail & Related papers (2024-09-25T12:32:41Z) - Tuning-Free Accountable Intervention for LLM Deployment -- A Metacognitive Approach [55.613461060997004]
Large Language Models (LLMs) have catalyzed transformative advances across a spectrum of natural language processing tasks.
We propose an innovative metacognitive approach, dubbed CLEAR, to equip LLMs with capabilities for self-aware error identification and correction.
arXiv Detail & Related papers (2024-03-08T19:18:53Z) - When large language models meet evolutionary algorithms [48.213640761641926]
Pre-trained large language models (LLMs) have powerful capabilities for generating creative natural text.
Evolutionary algorithms (EAs) can discover diverse solutions to complex real-world problems.
Motivated by the shared collective and directional nature of text generation and evolution, this paper illustrates the parallels between LLMs and EAs.
arXiv Detail & Related papers (2024-01-19T05:58:30Z) - Evolution of Heuristics: Towards Efficient Automatic Algorithm Design Using Large Language Model [22.64392837434924]
EoH represents the ideas of heuristics in natural language, termed thoughts.
These thoughts are then translated into executable codes by large language models (LLMs).
EoH significantly outperforms widely-used human hand-crafted baseline algorithms for the online bin packing problem.
arXiv Detail & Related papers (2024-01-04T04:11:59Z) - Large Language Models as Evolutionary Optimizers [37.92671242584431]
We present the first study on large language models (LLMs) as evolutionary optimizers.
The main advantage is that it requires minimal domain knowledge and human efforts, as well as no additional training of the model.
We also study the effectiveness of the self-adaptation mechanism in evolutionary search.
arXiv Detail & Related papers (2023-10-29T15:44:52Z) - SEGO: Sequential Subgoal Optimization for Mathematical Problem-Solving [64.38649623473626]
Large Language Models (LLMs) have driven substantial progress in artificial intelligence.
We propose a novel framework called SEquential subGoal Optimization (SEGO) to enhance LLMs' ability to solve mathematical problems.
arXiv Detail & Related papers (2023-10-19T17:56:40Z) - Multiobjective Evolutionary Component Effect on Algorithm behavior [0.04588028371034406]
It is unknown which components are the most influential in driving performance improvements.
We apply this methodology to a tuned Multiobjective Evolutionary Algorithm based on Decomposition (MOEA/D) designed by the iterated racing (irace) configuration package.
We compare the impact of the algorithm components in terms of their Search Trajectory Networks (STNs), the diversity of the population, and the anytime hypervolume values.
arXiv Detail & Related papers (2023-07-31T16:02:56Z) - OpenAGI: When LLM Meets Domain Experts [51.86179657467822]
Human Intelligence (HI) excels at combining basic skills to solve complex tasks.
This capability is vital for Artificial Intelligence (AI) and should be embedded in comprehensive AI Agents.
We introduce OpenAGI, an open-source platform designed for solving multi-step, real-world tasks.
arXiv Detail & Related papers (2023-04-10T03:55:35Z) - Simulation-guided Beam Search for Neural Combinatorial Optimization [13.072343634530883]
We propose simulation-guided beam search (SGBS) for neural combinatorial optimization problems.
We hybridize SGBS with efficient active search (EAS), where SGBS enhances the quality of solutions backpropagated in EAS.
We evaluate our methods on well-known CO benchmarks and show that SGBS significantly improves the quality of the solutions found under reasonable assumptions.
arXiv Detail & Related papers (2022-07-13T13:34:35Z) - AutoBERT-Zero: Evolving BERT Backbone from Scratch [94.89102524181986]
We propose an Operation-Priority Neural Architecture Search (OP-NAS) algorithm to automatically search for promising hybrid backbone architectures.
We optimize both the search algorithm and evaluation of candidate models to boost the efficiency of our proposed OP-NAS.
Experiments show that the searched architecture (named AutoBERT-Zero) significantly outperforms BERT and its variants of different model capacities in various downstream tasks.
arXiv Detail & Related papers (2021-07-15T16:46:01Z) - AutoSpace: Neural Architecture Search with Less Human Interference [84.42680793945007]
Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction.
We propose a novel differentiable evolutionary framework named AutoSpace, which evolves the search space to an optimal one.
With the learned search space, the performance of recent NAS algorithms can be improved significantly compared with using previously manually designed spaces.
arXiv Detail & Related papers (2021-03-22T13:28:56Z) - Offline Model-Based Optimization via Normalized Maximum Likelihood Estimation [101.22379613810881]
We consider data-driven optimization problems where one must maximize a function given only queries at a fixed set of points.
This problem setting emerges in many domains where function evaluation is a complex and expensive process.
We propose a tractable approximation that allows us to scale our method to high-capacity neural network models.
arXiv Detail & Related papers (2021-02-16T06:04:27Z) - Reinforcement Learning for Flexibility Design Problems [77.37213643948108]
We develop a reinforcement learning framework for flexibility design problems.
Empirical results show that the RL-based method consistently finds better solutions than classical methods.
arXiv Detail & Related papers (2021-01-02T02:44:39Z) - AdaLead: A simple and robust adaptive greedy search algorithm for sequence design [55.41644538483948]
We develop an easy-to-implement, scalable, and robust evolutionary greedy algorithm (AdaLead).
AdaLead is a remarkably strong benchmark that out-competes more complex state-of-the-art approaches in a variety of biologically motivated sequence design challenges.
arXiv Detail & Related papers (2020-10-05T16:40:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.