The Art of Being Difficult: Combining Human and AI Strengths to Find Adversarial Instances for Heuristics
- URL: http://arxiv.org/abs/2601.16849v1
- Date: Fri, 23 Jan 2026 15:56:31 GMT
- Title: The Art of Being Difficult: Combining Human and AI Strengths to Find Adversarial Instances for Heuristics
- Authors: Henri Nikoleit, Ankit Anand, Anurag Murty Naredla, Heiko Röglin,
- Abstract summary: We demonstrate the power of human-LLM collaboration in tackling open problems in theoretical computer science. By iterating on FunSearch's outputs, we identify improved constructions for hierarchical $k$-median clustering, bin packing, the knapsack problem, and a generalization of Lovász's gasoline problem.
- Score: 3.2263263383265017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We demonstrate the power of human-LLM collaboration in tackling open problems in theoretical computer science. Focusing on combinatorial optimization, we refine outputs from the FunSearch algorithm [Romera-Paredes et al., Nature 2023] to derive state-of-the-art lower bounds for standard heuristics. Specifically, we target the generation of adversarial instances where these heuristics perform poorly. By iterating on FunSearch's outputs, we identify improved constructions for hierarchical $k$-median clustering, bin packing, the knapsack problem, and a generalization of Lovász's gasoline problem - some of these have not seen much improvement for over a decade, despite intermittent attention. These results illustrate how expert oversight can effectively extrapolate algorithmic insights from LLM-based evolutionary methods to break long-standing barriers. Our findings demonstrate that while LLMs provide critical initial patterns, human expertise is essential for transforming these patterns into mathematically rigorous and insightful constructions. This work highlights that LLMs are a strong collaborative tool in mathematics and computer science research.
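The abstract describes searching for adversarial instances on which standard heuristics perform poorly. As a minimal illustration of that idea (not the paper's FunSearch pipeline), the sketch below implements first-fit bin packing and a simple random local search that mutates an instance to push first-fit's bin count as far as possible above a trivial lower bound on the optimum; all function names and parameters here are hypothetical.

```python
import math
import random

def first_fit(items, capacity=1.0):
    """Pack items with the first-fit heuristic; return the number of bins used."""
    bins = []
    for size in items:
        for i, load in enumerate(bins):
            if load + size <= capacity + 1e-12:  # small tolerance for float sums
                bins[i] += size
                break
        else:
            bins.append(size)  # no existing bin fits; open a new one
    return len(bins)

def optimal_lower_bound(items, capacity=1.0):
    """Trivial lower bound on the optimal bin count: total size, rounded up."""
    return max(1, math.ceil(sum(items) / capacity - 1e-12))

def adversarial_search(n_items=30, iters=2000, seed=0):
    """Random local search over instances to maximize first-fit's ratio
    against the trivial lower bound (a toy stand-in for evolutionary search)."""
    rng = random.Random(seed)
    best = [rng.uniform(0.01, 1.0) for _ in range(n_items)]

    def ratio(inst):
        return first_fit(inst) / optimal_lower_bound(inst)

    best_r = ratio(best)
    for _ in range(iters):
        cand = list(best)
        j = rng.randrange(n_items)
        cand[j] = min(1.0, max(0.01, cand[j] + rng.gauss(0, 0.05)))
        r = ratio(cand)
        if r > best_r:
            best, best_r = cand, r
    return best, best_r
```

In the paper's setting, the search operates over instance-generating programs rather than raw item lists, and human experts then extrapolate the discovered patterns into rigorous constructions; this sketch only conveys the objective being maximized.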
Related papers
- Algorithmic Prompt-Augmentation for Efficient LLM-Based Heuristic Design for A* Search [0.0]
Heuristic functions are essential to the performance of tree search algorithms such as A*, where their accuracy and efficiency directly impact search outcomes. Recent advances in large language models (LLMs) and evolutionary frameworks have opened the door to automated heuristic design. We introduce a novel domain-agnostic prompt augmentation strategy that includes the A* code in the prompt to leverage in-context learning.
arXiv Detail & Related papers (2026-01-27T13:55:58Z) - SelfAI: Building a Self-Training AI System with LLM Agents [79.10991818561907]
SelfAI is a general multi-agent platform that combines a User Agent, which translates high-level research objectives into standardized experimental configurations, with an Experiment Manager that orchestrates parallel, fault-tolerant training across heterogeneous hardware while maintaining a structured knowledge base for continuous feedback. Across regression, computer vision, scientific computing, medical imaging, and drug discovery benchmarks, SelfAI consistently achieves strong performance and reduces redundant trials.
arXiv Detail & Related papers (2025-11-29T09:18:39Z) - Experience-Guided Reflective Co-Evolution of Prompts and Heuristics for Automatic Algorithm Design [124.54166764570972]
Combinatorial optimization problems are traditionally tackled with handcrafted algorithms. Recent progress has highlighted the potential of automatic heuristic design powered by large language models. We propose Experience-Guided Reflective Co-Evolution of Prompts and Heuristics (EvoPH) for automatic algorithm design.
arXiv Detail & Related papers (2025-09-29T09:24:09Z) - LLM-Based Instance-Driven Heuristic Bias In the Context of a Biased Random Key Genetic Algorithm [0.9214658764451348]
We introduce a novel framework that integrates Large Language Models (LLMs) with a Biased Random-Key Genetic Algorithm (BRKGA) to solve the NP-hard Longest Run Subsequence problem. Our approach extends the instance-driven bias paradigm by introducing a human-LLM collaborative process to co-design and implement a set of computationally efficient metrics. Results show that our top-performing hybrid, BRKGA+Llama-4-Maverick, achieves significant improvements over the baseline.
arXiv Detail & Related papers (2025-09-05T21:46:41Z) - Re-evaluating LLM-based Heuristic Search: A Case Study on the 3D Packing Problem [3.473102563471572]
Large Language Models can generate code for search heuristics, but their application has largely been confined to adjusting simple functions within human-crafted frameworks. We tasked an LLM with building a complete solver for the constrained 3D Packing Problem. Our findings highlight two major barriers to automated heuristic design with current LLMs.
arXiv Detail & Related papers (2025-09-02T13:18:47Z) - Efficient Heuristics Generation for Solving Combinatorial Optimization Problems Using Large Language Models [52.538586230181814]
Recent studies exploited Large Language Models (LLMs) to autonomously generate heuristics for solving Combinatorial Optimization Problems (COPs). The absence of task-specific knowledge in prompts often leads LLMs to provide unspecific search directions, obstructing the derivation of well-performing heuristics. We propose the Hercules algorithm, which leverages our designed Core Abstraction Prompting (CAP) method to abstract the core components from elite HGs and incorporate them as prior knowledge in prompts.
arXiv Detail & Related papers (2025-05-19T02:20:46Z) - Evolutionary thoughts: integration of large language models and evolutionary algorithms [2.3633885460047765]
Large Language Models (LLMs) have unveiled remarkable capabilities in understanding and generating both natural language and code. We propose an enhanced evolutionary search strategy that enables a more focused exploration of expansive solution spaces.
arXiv Detail & Related papers (2025-05-09T03:32:18Z) - Leveraging Large Language Models to Develop Heuristics for Emerging Optimization Problems [0.0]
Combinatorial optimization problems often rely on heuristic algorithms to generate efficient solutions. Recent advances in artificial intelligence have demonstrated the potential to automate heuristic generation through evolutionary frameworks. We propose the Contextual Evolution of Heuristics framework, which incorporates problem-specific descriptions to enhance in-context learning.
arXiv Detail & Related papers (2025-03-05T10:22:49Z) - Bridging Visualization and Optimization: Multimodal Large Language Models on Graph-Structured Combinatorial Optimization [56.17811386955609]
Graph-structured challenges are inherently difficult due to their nonlinear and intricate nature. In this study, we propose transforming graphs into images to preserve their higher-order structural features accurately. By combining the innovative paradigm powered by multimodal large language models with simple search techniques, we aim to develop a novel and effective framework.
arXiv Detail & Related papers (2025-01-21T08:28:10Z) - Evaluating LLMs' Mathematical and Coding Competency through Ontology-guided Interventions [47.83142414018448]
We focus on two popular reasoning tasks: arithmetic reasoning and code generation.
We introduce (i) a general ontology of perturbations for math and coding questions, (ii) a semi-automatic method to apply these perturbations, and (iii) two datasets.
We show a significant performance drop across all the models against perturbed questions.
arXiv Detail & Related papers (2024-01-17T18:13:07Z) - Unveiling the Limits of Learned Local Search Heuristics: Are You the Mightiest of the Meek? [14.195843311387591]
We show that a simple learned heuristic based on Tabu Search surpasses state-of-the-art learned heuristics in terms of performance and generalizability.
Our findings challenge prevailing assumptions and open up exciting avenues for future research.
arXiv Detail & Related papers (2023-10-30T20:16:42Z) - A black-box adversarial attack for poisoning clustering [78.19784577498031]
We propose a black-box adversarial attack for crafting adversarial samples to test the robustness of clustering algorithms.
We show that our attacks are transferable even against supervised algorithms such as SVMs, random forests, and neural networks.
arXiv Detail & Related papers (2020-09-09T18:19:31Z) - Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning [96.78504087416654]
Motivated by the prevailing paradigm of using unsupervised learning for efficient exploration in reinforcement learning (RL) problems, we investigate when this paradigm is provably efficient.
We present a general algorithmic framework that is built upon two components: an unsupervised learning algorithm and a no-regret tabular RL algorithm.
arXiv Detail & Related papers (2020-03-15T19:23:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.