LLM-Driven Instance-Specific Heuristic Generation and Selection
- URL: http://arxiv.org/abs/2506.00490v2
- Date: Tue, 03 Jun 2025 03:22:26 GMT
- Title: LLM-Driven Instance-Specific Heuristic Generation and Selection
- Authors: Shaofeng Zhang, Shengcai Liu, Ning Lu, Jiahao Wu, Ji Liu, Yew-Soon Ong, Ke Tang
- Abstract summary: In this paper, we propose InstSpecHH, a novel framework that introduces the concept of instance-specific heuristic generation. InstSpecHH partitions the overall problem class into sub-classes based on instance features and performs differentiated, automated heuristic design for each problem subclass.
- Score: 44.643538268800796
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Combinatorial optimization problems are widely encountered in real-world applications. Designing high-quality heuristic algorithms that efficiently approximate optimal solutions within reasonable time is a critical research challenge. In recent years, many works have explored integrating Large Language Models (LLMs) with Evolutionary Algorithms to automate heuristic algorithm design through prompt engineering. However, these approaches generally adopt a problem-specific paradigm, applying a single algorithm across all problem instances, failing to account for the heterogeneity across instances. In this paper, we propose InstSpecHH, a novel framework that introduces the concept of instance-specific heuristic generation. InstSpecHH partitions the overall problem class into sub-classes based on instance features and performs differentiated, automated heuristic design for each problem subclass. By tailoring heuristics to the unique features of different sub-classes, InstSpecHH achieves better performance at the problem class level while avoiding redundant heuristic generation for similar instances, thus reducing computational overhead. This approach effectively balances the trade-off between the cost of automatic heuristic design and the quality of the obtained solutions. To evaluate the performance of InstSpecHH, we conduct experiments on 4,500 subclasses of the Online Bin Packing Problem (OBPP) and 365 subclasses of the Capacitated Vehicle Routing Problem (CVRP). Experimental results show that InstSpecHH demonstrates strong intra-subclass and inter-subclass generalization capabilities. Compared to previous problem-specific methods, InstSpecHH reduces the average optimality gap by more than 5.6% for OBPP and 0.9% for CVRP. These results highlight the potential of instance-aware automatic heuristic design to further enhance solution quality.
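To make the instance-specific idea concrete, here is a minimal sketch in Python, assuming hand-picked instance features, a coarse feature grid, and two classic packing rules standing in for LLM-generated heuristics; none of these components are the ones actually used by InstSpecHH.

```python
# Minimal sketch of instance-specific heuristic selection for Online Bin Packing.
# The features, subclass grid, and per-subclass heuristics below are illustrative
# placeholders, not the actual components of InstSpecHH.
from statistics import mean, pstdev

def instance_features(item_sizes, capacity=1.0):
    """Map an OBPP instance to a small feature vector (illustrative)."""
    sizes = [s / capacity for s in item_sizes]
    return (round(mean(sizes), 1), round(pstdev(sizes), 1))

def best_fit(item, bins, capacity=1.0):
    """Classic best-fit: place the item in the feasible bin with the least leftover space."""
    candidates = [b for b in bins if b + item <= capacity]
    if not candidates:
        bins.append(item)
        return
    i = bins.index(min(candidates, key=lambda b: capacity - (b + item)))
    bins[i] += item

def first_fit(item, bins, capacity=1.0):
    """Classic first-fit: place the item in the first bin that still has room."""
    for i, b in enumerate(bins):
        if b + item <= capacity:
            bins[i] += item
            return
    bins.append(item)

# Hypothetical table mapping feature-based subclasses to subclass-specific heuristics
# (in InstSpecHH these would be LLM-generated; here they are classic rules).
SUBCLASS_HEURISTICS = {
    (0.5, 0.2): best_fit,   # e.g. medium items, moderate spread
    (0.2, 0.1): first_fit,  # e.g. many small, similar items
}

def solve_instance(item_sizes, capacity=1.0):
    """Route the instance to the heuristic designed for its subclass."""
    heuristic = SUBCLASS_HEURISTICS.get(instance_features(item_sizes, capacity), best_fit)
    bins = []
    for item in item_sizes:       # items arrive online, one at a time
        heuristic(item, bins, capacity)
    return len(bins)

print(solve_instance([0.4, 0.6, 0.5, 0.3, 0.7, 0.5]))  # number of bins used
```

The point of the sketch is only the dispatch step: instances with similar features share one tailored heuristic, so new heuristics are not generated per instance.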
Related papers
- METAFOR: A Hybrid Metaheuristics Software Framework for Single-Objective Continuous Optimization Problems [0.1053373860696675]
We propose a modular metaheuristic software framework, called METAFOR, that can be coupled with an automatic algorithm configuration tool to automatically design hybrid metaheuristics. We use the configuration tool irace to automatically generate 17 different metaheuristic implementations and evaluate their performance on a diverse set of continuous optimization problems.
arXiv Detail & Related papers (2025-02-16T18:24:44Z)
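To illustrate what configuring a modular metaheuristic means in practice, the sketch below assembles a toy search procedure from a configuration dictionary and selects among randomly sampled configurations; the component catalogue is hypothetical and random sampling merely stands in for a configurator such as irace.

```python
# Illustrative sketch of assembling a metaheuristic from a configuration.
# The component names and search space are hypothetical, and random sampling
# stands in for a real configurator such as irace.
import random

def sphere(x):
    return sum(v * v for v in x)

MUTATIONS = {
    "gaussian": lambda x, s: [v + random.gauss(0.0, s) for v in x],
    "uniform":  lambda x, s: [v + random.uniform(-s, s) for v in x],
}

def run_metaheuristic(config, dim=5, budget=2000):
    """A (1+1)-style search whose operator and step size come from `config`."""
    mutate = MUTATIONS[config["mutation"]]
    x = [random.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(budget):
        y = mutate(x, config["step_size"])
        fy = sphere(y)
        if fy < fx:
            x, fx = y, fy
    return fx

def sample_config():
    return {"mutation": random.choice(list(MUTATIONS)),
            "step_size": random.choice([0.01, 0.1, 1.0])}

# Pick the configuration with the best result on the toy benchmark.
best = min((sample_config() for _ in range(10)), key=run_metaheuristic)
print("selected configuration:", best)
```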
- A Benchmarking Environment for Worker Flexibility in Flexible Job Shop Scheduling Problems [0.0]
In Production Scheduling, the Flexible Job Shop Scheduling Problem (FJSSP) aims to optimize a sequence of operations and assign each to an eligible machine with varying processing times. When workers must also be assigned to operations, the resulting problem is called the Flexible Job Shop Scheduling Problem with Worker Flexibility (FJSSP-W). This paper presents a collection of 402 commonly accepted FJSSP instances and proposes an approach to extend these with worker flexibility.
arXiv Detail & Related papers (2025-01-27T15:56:12Z)
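As a rough illustration of what such an instance encodes, the sketch below defines a tiny FJSSP-W-style instance in which each operation lists eligible machines with processing times and the workers qualified to run it; this encoding is an assumption for illustration, not the benchmark's actual format.

```python
# Illustrative encoding of a tiny FJSSP-W-style instance: each operation of each
# job lists (machine -> processing time) alternatives and the workers qualified
# to run it. This layout is an assumption, not the benchmark's file format.
from dataclasses import dataclass

@dataclass
class Operation:
    job: int
    index: int                       # position within the job's operation sequence
    machine_options: dict[str, int]  # eligible machine -> processing time
    eligible_workers: set[str]       # workers allowed to run this operation

instance = [
    Operation(job=0, index=0, machine_options={"M1": 3, "M2": 5}, eligible_workers={"W1", "W2"}),
    Operation(job=0, index=1, machine_options={"M2": 2},          eligible_workers={"W2"}),
    Operation(job=1, index=0, machine_options={"M1": 4, "M3": 4}, eligible_workers={"W1"}),
]

# A candidate decision assigns each operation both a machine and a qualified worker.
def is_feasible(op: Operation, machine: str, worker: str) -> bool:
    return machine in op.machine_options and worker in op.eligible_workers

print(is_feasible(instance[0], "M1", "W2"))  # True
```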
- Evolving Generalizable Parallel Algorithm Portfolios via Domain-Agnostic Instance Generation [27.138566940625385]
Generalization is the core objective when training from data. Co-evolutionary approaches address this challenge by simultaneously evolving a parallel algorithm portfolio (PAP) and an instance population. This work proposes a general-purpose, off-the-shelf PAP construction approach, named domain-agnostic co-evolution of parameterized search (DACE).
arXiv Detail & Related papers (2025-01-06T10:29:48Z)
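The co-evolution idea can be sketched as a loop that alternately improves the portfolio against the current instance population and regenerates instances the portfolio handles poorly; the toy problem and update rules below are generic stand-ins, not DACE's actual operators.

```python
# Schematic co-evolution of an algorithm portfolio and an instance population.
# Portfolio members are step sizes of a toy (1+1) search on shifted sphere
# functions; the update rules are generic stand-ins, not those of DACE.
import random

def solve(step, shift, budget=200):
    """Best value found by a (1+1) search with the given step size."""
    x = [0.0] * len(shift)
    best = sum((a - b) ** 2 for a, b in zip(x, shift))
    for _ in range(budget):
        y = [a + random.gauss(0.0, step) for a in x]
        fy = sum((a - b) ** 2 for a, b in zip(y, shift))
        if fy < best:
            x, best = y, fy
    return best

def portfolio_score(portfolio, instance):
    # A parallel portfolio runs all members and keeps the best result.
    return min(solve(step, instance) for step in portfolio)

portfolio = [random.uniform(0.01, 1.0) for _ in range(3)]
instances = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(8)]

for _ in range(5):  # a few co-evolution rounds
    # 1) Improve the portfolio: mutate one member, keep it if average performance improves.
    cand = portfolio[:]
    cand[random.randrange(len(cand))] *= random.uniform(0.5, 2.0)
    if (sum(portfolio_score(cand, i) for i in instances)
            < sum(portfolio_score(portfolio, i) for i in instances)):
        portfolio = cand
    # 2) Evolve the instances: keep the ones the portfolio finds hardest, add fresh ones.
    instances.sort(key=lambda i: -portfolio_score(portfolio, i))
    instances = instances[:4] + [[random.uniform(-3, 3) for _ in range(4)] for _ in range(4)]

print("final portfolio step sizes:", [round(s, 3) for s in portfolio])
```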
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides sublinear regret comparable to existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z)
- Class-Specific Variational Auto-Encoder for Content-Based Image Retrieval [95.42181254494287]
We propose a regularized loss for Variational Auto-Encoders (VAEs) that forces the model to focus on a given class of interest.
As a result, the model learns to discriminate the data belonging to the class of interest from any other possibility.
Experimental results show that the proposed method outperforms its competitors in both in-domain and out-of-domain retrieval problems.
arXiv Detail & Related papers (2023-04-23T19:51:25Z)
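For orientation only, the sketch below writes out a standard VAE objective and adds a purely illustrative class-focus term that up-weights reconstruction of the class of interest; this term is an assumption and is not the regularizer proposed in the paper.

```python
# Standard VAE loss (reconstruction + KL) plus an illustrative class-focus term.
# The class-focus weighting is an assumption for illustration only; it is not
# the regularizer proposed in the paper.
import torch
import torch.nn.functional as F

def vae_loss(x, x_recon, mu, logvar, is_class_of_interest, focus_weight=4.0):
    """x, x_recon: (batch, features) in [0, 1]; is_class_of_interest: (batch,) bool."""
    # Per-sample reconstruction error (binary cross-entropy summed over features).
    recon = F.binary_cross_entropy(x_recon, x, reduction="none").sum(dim=1)
    # Illustrative class-focus term: samples from the class of interest count more.
    weights = torch.where(is_class_of_interest,
                          torch.full_like(recon, focus_weight),
                          torch.ones_like(recon))
    recon = (weights * recon).mean()
    # KL divergence between q(z|x) = N(mu, sigma^2) and the standard normal prior.
    kl = (-0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)).mean()
    return recon + kl

# Toy shapes only; a real encoder/decoder would produce x_recon, mu, logvar.
x = torch.rand(8, 32)
x_recon = torch.rand(8, 32)
mu, logvar = torch.zeros(8, 4), torch.zeros(8, 4)
labels = torch.tensor([True, False] * 4)
print(vae_loss(x, x_recon, mu, logvar, labels))
```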
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning through generating synthetic samples for the minority class.
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
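As a reference point for the over-sampling mechanism being automated, here is a minimal SMOTE-style sketch that synthesizes minority samples by interpolating between a minority point and one of its minority-class neighbors; it shows plain SMOTE, not AutoSMOTE's hierarchical reinforcement learning search.

```python
# Minimal SMOTE-style over-sampling: create synthetic minority samples by
# interpolating between a minority point and one of its k nearest minority
# neighbors. This shows plain SMOTE, not AutoSMOTE's learned policy.
import math
import random

def smote(minority, n_synthetic, k=3):
    synthetic = []
    for _ in range(n_synthetic):
        base = random.choice(minority)
        # k nearest minority neighbors of the chosen base point (excluding itself)
        neighbors = sorted((p for p in minority if p is not base),
                           key=lambda p: math.dist(base, p))[:k]
        neighbor = random.choice(neighbors)
        lam = random.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, neighbor)))
    return synthetic

minority_points = [(0.1, 0.2), (0.15, 0.25), (0.2, 0.1), (0.05, 0.3)]
print(smote(minority_points, n_synthetic=3))
```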
- Reinforcement Learning for Branch-and-Bound Optimisation using Retrospective Trajectories [72.15369769265398]
Machine learning has emerged as a promising paradigm for branching.
We propose retro branching, a simple yet effective approach to RL for branching.
We outperform the current state-of-the-art RL branching algorithm by 3-5x and come within 20% of the best IL method's performance on MILPs with 500 constraints and 1000 variables.
arXiv Detail & Related papers (2022-05-28T06:08:07Z)
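For context on what a branching policy decides, the sketch below implements the classic most-fractional rule, the kind of hand-crafted baseline that learned branching policies are measured against; it is not the paper's retro branching method.

```python
# Classic "most fractional" branching rule: among integer variables whose LP
# relaxation values are fractional, branch on the one furthest from an integer.
# A hand-crafted baseline for comparison, not the retro branching RL policy.
import math

def most_fractional_variable(lp_values, integer_vars, tol=1e-6):
    best_var, best_score = None, tol
    for var in integer_vars:
        frac = lp_values[var] - math.floor(lp_values[var])
        score = min(frac, 1.0 - frac)   # distance to the nearest integer
        if score > best_score:
            best_var, best_score = var, score
    return best_var                     # None: the LP solution is already integral

lp_solution = {"x1": 0.5, "x2": 2.0, "x3": 1.8}
print(most_fractional_variable(lp_solution, integer_vars=["x1", "x2", "x3"]))  # x1
```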
- A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z)
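Since multi-objective HPO methods are usually compared via their Pareto fronts, the small sketch below extracts the non-dominated configurations under minimization of all objectives; the objective values are made up for illustration.

```python
# Extract the Pareto front (non-dominated set) from hyperparameter configurations
# evaluated on several objectives, all to be minimized. The objective values
# below are made up for illustration.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(evaluations):
    """evaluations: dict mapping configuration name -> tuple of objective values."""
    return {name: objs for name, objs in evaluations.items()
            if not any(dominates(other, objs)
                       for other_name, other in evaluations.items() if other_name != name)}

evals = {
    "cfg_a": (0.12, 35.0),   # (validation error, training time in minutes)
    "cfg_b": (0.10, 60.0),
    "cfg_c": (0.15, 40.0),   # dominated by cfg_a
}
print(pareto_front(evals))   # keeps cfg_a and cfg_b
```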
- Linear Matrix Factorization Embeddings for Single-objective Optimization Landscapes [0.755972004983746]
In black-box optimization, features have to be derived from a set of $(x,f(x))$ samples.
We show that a linear dimensionality reduction via matrix factorization significantly contributes towards a better detection of correlation between different problem instances.
arXiv Detail & Related papers (2020-09-30T08:46:54Z)
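As a concrete, simplified instance of such a linear reduction, the sketch below stacks per-instance feature vectors into a matrix and applies a truncated SVD to obtain low-dimensional embeddings; the feature matrix and target dimension are placeholders, not the paper's setup.

```python
# Linear dimensionality reduction of instance feature vectors via truncated SVD.
# The feature matrix here is random; in practice each row would hold landscape
# features derived from (x, f(x)) samples of one problem instance.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(12, 8))          # 12 instances x 8 raw features

def linear_embedding(X, k=2):
    """Project rows of X onto the top-k principal directions (right singular vectors)."""
    Xc = X - X.mean(axis=0)                  # center the features
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                     # k-dimensional embedding per instance

emb = linear_embedding(features, k=2)
print(emb.shape)                             # (12, 2)
# Correlation between instances can now be measured in the reduced space, e.g.:
print(np.corrcoef(emb)[0, 1])                # similarity of instance 0 and instance 1
```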
- Anytime Behavior of Inexact TSP Solvers and Perspectives for Automated Algorithm Selection [0.0]
The Traveling Salesperson Problem (TSP) is arguably one of the best-known NP-hard optimization problems.
We extend existing benchmarking studies by addressing the anytime behaviour of inexact TSP solvers.
It turns out that the performance ranking of solvers is highly dependent on the approximation quality being targeted.
arXiv Detail & Related papers (2020-05-27T11:36:53Z)
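One common way to summarize anytime behaviour, sketched below, is to integrate the incumbent tour quality (gap to a reference length) over the time budget, where a smaller area means better anytime performance; the solver trace is made up and this is only one of several possible anytime measures.

```python
# Summarize a solver's anytime behaviour by integrating the incumbent solution
# quality (gap to a reference tour length) over the time budget. The trace
# below is made up; a smaller area means better anytime performance.
def anytime_area(trace, reference, time_budget):
    """trace: list of (time, incumbent_tour_length), sorted by time.

    The span before the first incumbent is ignored in this simplified sketch.
    """
    area, prev_t, prev_gap = 0.0, 0.0, None
    for t, length in trace:
        gap = (length - reference) / reference
        if prev_gap is not None:
            area += prev_gap * (t - prev_t)      # the incumbent holds until improved
        prev_t, prev_gap = t, gap
    area += prev_gap * (time_budget - prev_t)    # last incumbent holds until the budget ends
    return area

trace = [(1.0, 1200.0), (5.0, 1100.0), (20.0, 1050.0)]   # (seconds, tour length)
print(anytime_area(trace, reference=1000.0, time_budget=60.0))
```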