Automated Algorithm Design for Auto-Tuning Optimizers
- URL: http://arxiv.org/abs/2510.17899v1
- Date: Sun, 19 Oct 2025 09:38:15 GMT
- Title: Automated Algorithm Design for Auto-Tuning Optimizers
- Authors: Floris-Jan Willemsen, Niki van Stein, Ben van Werkhoven,
- Abstract summary: We introduce a new paradigm: using large language models to automatically generate optimization algorithms tailored to auto-tuning problems. We evaluate these algorithms on four real-world auto-tuning applications across six hardware platforms. Our best-performing generated optimization algorithms achieve, on average, a 72.4% improvement over state-of-the-art optimizers for auto-tuning.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic performance tuning (auto-tuning) is essential for optimizing high-performance applications, where vast and irregular parameter spaces make manual exploration infeasible. Traditionally, auto-tuning relies on well-established optimization algorithms such as evolutionary algorithms, annealing methods, or surrogate model-based optimizers to efficiently find near-optimal configurations. However, designing effective optimizers remains challenging, as no single method performs best across all tuning tasks. In this work, we explore a new paradigm: using large language models (LLMs) to automatically generate optimization algorithms tailored to auto-tuning problems. We introduce a framework that prompts LLMs with problem descriptions and search-space characteristics to produce specialized optimization strategies, which are iteratively examined and improved. These generated algorithms are evaluated on four real-world auto-tuning applications across six hardware platforms and compared against the state-of-the-art optimization algorithms of two contemporary auto-tuning frameworks. The evaluation demonstrates that providing additional application- and search-space-specific information in the generation stage results in average performance improvements of 30.7% and 14.6%, respectively. In addition, our results show that LLM-generated optimizers can rival, and in various cases outperform, existing human-designed algorithms, with our best-performing generated optimization algorithms achieving, on average, a 72.4% improvement over state-of-the-art optimizers for auto-tuning.
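The generate–evaluate–refine loop described in the abstract can be sketched as follows. All names here are illustrative, and `generate_candidate` is a stand-in for the actual LLM call (the abstract does not specify prompts or APIs); the candidate shown is a plain random-search baseline on a toy search space.

```python
import random

def evaluate_optimizer(optimizer, search_space, objective, budget=50):
    """Score a candidate optimizer on a tuning problem (lower is better)."""
    return min(objective(cfg) for cfg in optimizer(search_space, objective, budget))

def random_search(search_space, objective, budget):
    """Baseline candidate: sample configurations uniformly at random."""
    for _ in range(budget):
        yield {k: random.choice(v) for k, v in search_space.items()}

def generate_candidate(prompt):
    """Stand-in for the LLM call: a real implementation would send `prompt`
    (problem description plus search-space characteristics) to an LLM and
    parse the returned optimizer code."""
    return random_search

def design_loop(search_space, objective, rounds=3):
    """Iteratively generate and evaluate candidates; keep the best optimizer."""
    prompt = f"Design an optimizer for a search space with keys {sorted(search_space)}"
    best_opt, best_score = None, float("inf")
    for _ in range(rounds):
        candidate = generate_candidate(prompt)
        score = evaluate_optimizer(candidate, search_space, objective)
        if score < best_score:
            best_opt, best_score = candidate, score
        # Feed results back so the next generation round can improve on them.
        prompt += f"\nPrevious candidate scored {score:.3f}; improve on it."
    return best_opt, best_score

# Toy stand-in for a GPU-kernel tuning problem: a synthetic runtime model.
space = {"block_size": [32, 64, 128, 256], "tile": [1, 2, 4]}
runtime = lambda cfg: abs(cfg["block_size"] - 128) / 128 + abs(cfg["tile"] - 2)
best, score = design_loop(space, runtime)
print(score)
```

In the real framework the evaluation would run actual kernel benchmarks on hardware, and the feedback prompt is what enables the iterative examination and improvement the abstract mentions.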
Related papers
- Optimizing Optimizers for Fast Gradient-Based Learning [53.81268610971847]
We lay the theoretical foundation for automating design in gradient-based learning. By treating gradient loss signals as a function that translates to parameter motions, the problem reduces to a family of convex optimization problems.
arXiv Detail & Related papers (2025-12-06T09:50:41Z)
- Automated Design Optimization via Strategic Search with Large Language Models [0.0]
AUTO is a framework that treats design optimization as a gradient-free search problem guided by strategic LLM reasoning. It completes optimizations in approximately 8 hours at an estimated cost of up to $159 per run, compared to an estimated cost of up to $480 with median-wage software developers.
arXiv Detail & Related papers (2025-11-27T17:42:05Z)
- Tuning the Tuner: Introducing Hyperparameter Optimization for Auto-Tuning [0.0]
We show that even limited hyperparameter tuning can improve auto-tuner performance by 94.8% on average. We establish that the hyperparameters themselves can be optimized efficiently with meta-strategies.
arXiv Detail & Related papers (2025-09-30T14:14:01Z)
- Evolution of Optimization Algorithms for Global Placement via Large Language Models [18.373855320220887]
This paper presents an automated framework to evolve optimization algorithms for global placement. We first generate diverse candidate algorithms using large language models (LLMs) through carefully crafted prompts. The discovered optimization algorithms exhibit substantial performance improvements across many benchmarks.
arXiv Detail & Related papers (2025-04-18T09:57:14Z)
- From Understanding to Excelling: Template-Free Algorithm Design through Structural-Functional Co-Evolution [39.42526347710991]
Large language models (LLMs) have greatly accelerated the automation of algorithm generation and optimization. We introduce an end-to-end algorithm generation and optimization framework based on LLMs. Our approach utilizes the deep semantic understanding of LLMs to convert natural language requirements or human-authored papers into code solutions.
arXiv Detail & Related papers (2025-03-13T08:26:18Z)
- Self-Steering Optimization: Autonomous Preference Optimization for Large Language Models [79.84205827056907]
We present Self-Steering Optimization ($SSO$), an algorithm that autonomously generates high-quality preference data. $SSO$ employs a specialized optimization objective to build a data generator from the policy model itself, which is used to produce accurate and on-policy data. Our evaluation shows that $SSO$ consistently outperforms baselines in human preference alignment and reward optimization.
arXiv Detail & Related papers (2024-10-22T16:04:03Z)
- A Problem-Oriented Perspective and Anchor Verification for Code Optimization [43.28045750932116]
Large language models (LLMs) have shown remarkable capabilities in solving various programming tasks. This paper investigates the capabilities of LLMs in optimizing code for minimal execution time.
arXiv Detail & Related papers (2024-06-17T16:10:10Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses.
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
- Localized Zeroth-Order Prompt Optimization [54.964765668688806]
We propose a novel algorithm, namely localized zeroth-order prompt optimization (ZOPO).
ZOPO incorporates a Neural Tangent Kernel-derived Gaussian process into standard zeroth-order optimization for an efficient search of well-performing local optima in prompt optimization.
Remarkably, ZOPO outperforms existing baselines in terms of both the optimization performance and the query efficiency.
arXiv Detail & Related papers (2024-03-05T14:18:15Z)
- An Empirical Evaluation of Zeroth-Order Optimization Methods on AI-driven Molecule Optimization [78.36413169647408]
We study the effectiveness of various ZO optimization methods for optimizing molecular objectives.
We show the advantages of ZO sign-based gradient descent (ZO-signGD).
We demonstrate the potential effectiveness of ZO optimization methods on widely used benchmark tasks from the Guacamol suite.
arXiv Detail & Related papers (2022-10-27T01:58:10Z)
- Efficient Non-Parametric Optimizer Search for Diverse Tasks [93.64739408827604]
We present the first efficient, scalable, and general framework that can directly search on the tasks of interest.
Inspired by the innate tree structure of the underlying math expressions, we re-arrange the spaces into a super-tree.
We adopt an adaptation of the Monte Carlo method to tree search, equipped with rejection sampling and equivalent-form detection.
arXiv Detail & Related papers (2022-09-27T17:51:31Z)
- AutoOpt: A General Framework for Automatically Designing Metaheuristic Optimization Algorithms with Diverse Structures [22.624811044236516]
This paper proposes a general framework, AutoOpt, for automatically designing metaheuristic algorithms with diverse structures. It contributes a general algorithm prototype dedicated to covering the metaheuristic family as widely as possible, a directed acyclic graph algorithm representation that fits the proposed prototype, and a graph representation embedding method offering an alternative compact form of the graph to be manipulated.
arXiv Detail & Related papers (2022-04-03T05:31:56Z)
- Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z)
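As a concrete reference point for the long-term regret that the entry above optimizes, the following minimal sketch (not the paper's regret-optimal algorithm) runs plain gradient descent on a convex quadratic and accumulates the regret against the fixed optimum:

```python
def gradient_descent_regret(grad, loss, x, lr, steps, x_star):
    """Run gradient descent from x and track cumulative regret
    sum_t [loss(x_t) - loss(x_star)] against the fixed optimum x_star."""
    regret = 0.0
    for _ in range(steps):
        regret += loss(x) - loss(x_star)
        x = [xi - lr * gi for xi, gi in zip(x, grad(x))]
    return x, regret

# Convex quadratic loss(x) = ||x||^2 / 2, with optimum at the origin.
loss = lambda x: 0.5 * sum(xi * xi for xi in x)
grad = lambda x: list(x)

x_final, regret = gradient_descent_regret(grad, loss, [2.0, -1.0],
                                          lr=0.1, steps=100, x_star=[0.0, 0.0])
print(round(regret, 4))  # geometric contraction gives regret ≈ 13.1579
```

A regret-optimal algorithm, in the paper's sense, would choose its update dynamics to minimize this accumulated quantity directly rather than fixing a learning rate in advance.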
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.