G-LNS: Generative Large Neighborhood Search for LLM-Based Automatic Heuristic Design
- URL: http://arxiv.org/abs/2602.08253v1
- Date: Mon, 09 Feb 2026 04:13:35 GMT
- Title: G-LNS: Generative Large Neighborhood Search for LLM-Based Automatic Heuristic Design
- Authors: Baoyun Zhao, He Wang, Liang Zeng,
- Abstract summary: We propose a generative evolutionary framework that extends LLM-based Automatic Heuristic Design (AHD) to the design of Large Neighborhood Search (LNS) operators. G-LNS leverages Large Language Models (LLMs) to co-evolve tightly coupled pairs of destroy and repair operators. Experiments show that G-LNS significantly outperforms LLM-based AHD methods as well as strong classical solvers.
- Score: 7.681872137995253
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While Large Language Models (LLMs) have recently shown promise in Automated Heuristic Design (AHD), existing approaches typically formulate AHD around constructive priority rules or parameterized local search guidance, thereby restricting the search space to fixed heuristic forms. Such designs offer limited capacity for structural exploration, making it difficult to escape deep local optima in complex Combinatorial Optimization Problems (COPs). In this work, we propose G-LNS, a generative evolutionary framework that extends LLM-based AHD to the automated design of Large Neighborhood Search (LNS) operators. Unlike prior methods that evolve heuristics in isolation, G-LNS leverages LLMs to co-evolve tightly coupled pairs of destroy and repair operators. A cooperative evaluation mechanism explicitly captures their interaction, enabling the discovery of complementary operator logic that jointly performs effective structural disruption and reconstruction. Extensive experiments on challenging COP benchmarks, such as Traveling Salesman Problems (TSP) and Capacitated Vehicle Routing Problems (CVRP), demonstrate that G-LNS significantly outperforms LLM-based AHD methods as well as strong classical solvers. The discovered heuristics not only achieve near-optimal solutions with reduced computational budgets but also exhibit robust generalization across diverse and unseen instance distributions.
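The destroy-and-repair loop that G-LNS operators plug into can be sketched as follows. The operator pair shown (random removal plus cheapest-insertion repair; all names hypothetical) is a simple hand-written stand-in for the LLM-evolved pairs the paper describes, not the paper's actual operators:

```python
import random

def large_neighborhood_search(tour, cost, destroy, repair, iters=200, seed=0):
    """Minimal LNS loop: repeatedly destroy part of the incumbent
    solution and repair it, keeping strict improvements."""
    rng = random.Random(seed)
    best, best_cost = list(tour), cost(tour)
    for _ in range(iters):
        partial, removed = destroy(best, rng)        # structural disruption
        candidate = repair(partial, removed, cost)   # reconstruction
        c = cost(candidate)
        if c < best_cost:
            best, best_cost = candidate, c
    return best, best_cost

def random_destroy(tour, rng, k=2):
    """Hypothetical destroy operator: remove k random cities."""
    removed = rng.sample(tour, k)
    return [city for city in tour if city not in removed], removed

def greedy_repair(partial, removed, cost):
    """Hypothetical repair operator: cheapest-insertion of removed cities."""
    tour = list(partial)
    for city in removed:
        best_pos = min(range(len(tour) + 1),
                       key=lambda i: cost(tour[:i] + [city] + tour[i:]))
        tour.insert(best_pos, city)
    return tour
```

The paper's cooperative evaluation corresponds, in this sketch, to scoring each (destroy, repair) pair by the cost improvement it produces inside the loop, rather than scoring either operator in isolation.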
Related papers
- Closed-Loop LLM Discovery of Non-Standard Channel Priors in Vision Models [48.83701310501069]
Large Language Models (LLMs) offer a transformative approach to Neural Architecture Search (NAS). We formulate the search as a sequence of conditional code generation tasks, where an LLM refines architectural specifications based on performance telemetry. We generate a vast corpus of valid, shape-consistent architectures via Abstract Syntax Tree (AST) mutations. Experimental results on CIFAR-100 validate the efficacy of this approach, demonstrating that the model yields statistically significant improvements in accuracy.
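The AST-mutation idea above can be illustrated with a toy transformer; the `Conv2d`/`out_channels` names are illustrative assumptions, not the paper's actual mutation operators:

```python
import ast

class WidenLayer(ast.NodeTransformer):
    """Toy AST mutation: double an integer `out_channels=` keyword
    argument, leaving the code parseable and shape-consistent."""
    def visit_Call(self, node):
        self.generic_visit(node)
        for kw in node.keywords:
            if (kw.arg == "out_channels"
                    and isinstance(kw.value, ast.Constant)
                    and isinstance(kw.value.value, int)):
                kw.value = ast.Constant(kw.value.value * 2)
        return node

src = "layer = Conv2d(in_channels=3, out_channels=16, kernel_size=3)"
mutated = ast.unparse(WidenLayer().visit(ast.parse(src)))
```

Because the mutation edits the tree rather than raw text, every output is syntactically valid by construction, which is what makes large mutated corpora cheap to generate.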
arXiv Detail & Related papers (2026-01-13T13:00:30Z) - Refining Hybrid Genetic Search for CVRP via Reinforcement Learning-Finetuned LLM [32.938753667649074]
Large language models (LLMs) are increasingly used as automated designers for vehicle routing problems (VRPs). This work challenges that paradigm by demonstrating that a smaller, specialized LLM, when meticulously fine-tuned, can generate components that surpass expert-crafted ones within advanced solvers. We propose RFTHGS, a novel Reinforcement learning framework for Fine-Tuning a small LLM to generate high-performance crossover operators.
arXiv Detail & Related papers (2025-10-13T08:08:58Z) - Dynamic Generation of Multi-LLM Agents Communication Topologies with Graph Diffusion Models [99.85131798240808]
We introduce a novel generative framework called Guided Topology Diffusion (GTD). Inspired by conditional discrete graph diffusion models, GTD formulates topology synthesis as an iterative construction process. At each step, the generation is steered by a lightweight proxy model that predicts multi-objective rewards. Experiments show that GTD can generate highly task-adaptive, sparse, and efficient communication topologies.
arXiv Detail & Related papers (2025-10-09T05:28:28Z) - Experience-Guided Reflective Co-Evolution of Prompts and Heuristics for Automatic Algorithm Design [124.54166764570972]
Combinatorial optimization problems are traditionally tackled with handcrafted algorithms. Recent progress has highlighted the potential of automatic algorithm design powered by large language models. We propose Experience-Guided Reflective Co-Evolution of Prompts and Heuristics (EvoPH) for automatic algorithm design.
arXiv Detail & Related papers (2025-09-29T09:24:09Z) - Re-evaluating LLM-based Heuristic Search: A Case Study on the 3D Packing Problem [3.473102563471572]
Large Language Models can generate code for search heuristics, but their application has largely been confined to adjusting simple functions within human-crafted frameworks. We tasked an LLM with building a complete solver for the constrained 3D Packing Problem. Our findings highlight two major barriers to automated design with current LLMs.
arXiv Detail & Related papers (2025-09-02T13:18:47Z) - LLM4CMO: Large Language Model-aided Algorithm Design for Constrained Multiobjective Optimization [54.35609820607923]
Large language models (LLMs) offer new opportunities for assisting with algorithm design. We propose LLM4CMO, a novel CMOEA based on a dual-population, two-stage framework. LLMs can serve as efficient co-designers in the development of complex evolutionary optimization algorithms.
arXiv Detail & Related papers (2025-08-16T02:00:57Z) - Agentic Reinforced Policy Optimization [66.96989268893932]
Large-scale reinforcement learning with verifiable rewards (RLVR) has demonstrated its effectiveness in harnessing the potential of large language models (LLMs) for single-turn reasoning tasks. Current RL algorithms inadequately balance the models' intrinsic long-horizon reasoning capabilities and their proficiency in multi-turn tool interactions. We propose Agentic Reinforced Policy Optimization (ARPO), a novel agentic RL algorithm tailored for training multi-turn LLM-based agents.
arXiv Detail & Related papers (2025-07-26T07:53:11Z) - HeurAgenix: Leveraging LLMs for Solving Complex Combinatorial Optimization Challenges [10.088078143772563]
Heuristic algorithms play a vital role in solving combinatorial optimization (CO) problems. HeurAgenix is a two-stage hyper-heuristic framework powered by large language models (LLMs).
arXiv Detail & Related papers (2025-06-18T07:20:01Z) - RedAHD: Reduction-Based End-to-End Automatic Heuristic Design with Large Language Models [14.544461392180668]
We propose a novel end-to-end framework, named RedAHD, that enables these LLM-based design methods to operate without human involvement. More specifically, RedAHD employs LLMs to automate the process of reduction, i.e., transforming the COP at hand into similar COPs that are better understood. Our experimental results, evaluated on six COPs, show that RedAHD is capable of designing heuristics with comparable or improved results over the state-of-the-art methods with minimal human involvement.
arXiv Detail & Related papers (2025-05-26T17:21:16Z) - Multi-Agent Collaboration via Evolving Orchestration [55.574417128944226]
Large language models (LLMs) have achieved remarkable results across diverse downstream tasks, but their monolithic nature restricts scalability and efficiency in complex problem-solving. We propose a puppeteer-style paradigm for LLM-based multi-agent collaboration, where a centralized orchestrator ("puppeteer") dynamically directs agents ("puppets") in response to evolving task states. Experiments on closed- and open-domain scenarios show that this method achieves superior performance with reduced computational costs.
arXiv Detail & Related papers (2025-05-26T07:02:17Z) - R1-Searcher: Incentivizing the Search Capability in LLMs via Reinforcement Learning [87.30285670315334]
R1-Searcher is a novel two-stage outcome-based RL approach designed to enhance the search capabilities of Large Language Models. Our framework relies exclusively on RL, without requiring process rewards or distillation for a cold start. Our experiments demonstrate that our method significantly outperforms previous strong RAG methods, even when compared to the closed-source GPT-4o-mini.
arXiv Detail & Related papers (2025-03-07T17:14:44Z) - Can Large Language Models Be Trusted as Evolutionary Optimizers for Network-Structured Combinatorial Problems? [8.431866560904753]
Large Language Models (LLMs) have shown strong capabilities in language understanding and reasoning across diverse domains. In this work, we propose a systematic framework to evaluate the capability of LLMs to engage with problem structures. We adopt the commonly used evolutionary optimization (EVO) paradigm and propose a comprehensive evaluation framework that rigorously assesses the output fidelity of LLM-based operators.
arXiv Detail & Related papers (2025-01-25T05:19:19Z) - Monte Carlo Tree Search for Comprehensive Exploration in LLM-Based Automatic Heuristic Design [33.58608225370497]
Large Language Model (LLM)-based automatic heuristic design (AHD) methods have shown promise in generating high-quality heuristics without manual intervention. This paper proposes to use Monte Carlo Tree Search (MCTS) for heuristic evolution.
arXiv Detail & Related papers (2025-01-15T06:00:50Z)
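The MCTS entry above hinges on a selection rule that trades exploitation of good branches against exploration of under-visited ones. A standard choice for that rule (UCB1, shown here as a generic sketch rather than the paper's exact formulation) looks like this:

```python
import math

def ucb1(value_sum, visits, parent_visits, c=1.4):
    """UCB1 score: mean reward plus an exploration bonus that shrinks
    as a node is visited more often; unvisited nodes are tried first."""
    if visits == 0:
        return float("inf")
    return value_sum / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, c=1.4):
    """children: list of (value_sum, visits) pairs; returns the index
    of the child an MCTS selection step would descend into."""
    parent = max(sum(v for _, v in children), 1)
    scores = [ucb1(val, vis, parent, c) for val, vis in children]
    return scores.index(max(scores))
```

In an AHD setting, each child would hold a candidate heuristic and `value_sum` would accumulate its evaluated solution quality; the bonus term is what drives the "comprehensive exploration" the title refers to.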
This list is automatically generated from the titles and abstracts of the papers in this site.