AutoOpt: A General Framework for Automatically Designing Metaheuristic
Optimization Algorithms with Diverse Structures
- URL: http://arxiv.org/abs/2204.00998v6
- Date: Thu, 4 May 2023 03:30:55 GMT
- Authors: Qi Zhao, Bai Yan, Xianglong Chen, Taiwei Hu, Shi Cheng, Yuhui Shi
- Abstract summary: This paper proposes a general framework, AutoOpt, for automatically designing metaheuristic algorithms with diverse structures.
A general algorithm prototype dedicated to covering the metaheuristic family as widely as possible.
A directed acyclic graph algorithm representation to fit the proposed prototype.
A graph representation embedding method offering an alternative compact form of the graph to be manipulated.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Metaheuristics are widely recognized gradient-free solvers to hard problems
that do not meet the rigorous mathematical assumptions of conventional solvers.
The automated design of metaheuristic algorithms provides an attractive path to
relieve manual design effort and gain enhanced performance beyond human-made
algorithms. However, the specific algorithm prototype and linear algorithm
representation in the current automated design pipeline restrict the design
within a fixed algorithm structure, which hinders discovering novelties and
diversity across the metaheuristic family. To address this challenge, this
paper proposes a general framework, AutoOpt, for automatically designing
metaheuristic algorithms with diverse structures. AutoOpt contains three
innovations: (i) A general algorithm prototype dedicated to covering the
metaheuristic family as widely as possible. It promotes high-quality automated
design on different problems by fully discovering potentials and novelties
across the family. (ii) A directed acyclic graph algorithm representation to
fit the proposed prototype. Its flexibility and evolvability enable discovering
various algorithm structures in a single run of design, thus boosting the
possibility of finding high-performance algorithms. (iii) A graph
representation embedding method offering an alternative compact form of the
graph to be manipulated, which ensures AutoOpt's generality. Experiments on
numerical functions and real applications validate AutoOpt's efficiency and
practicability.
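The abstract's core idea can be sketched in code. The following is a minimal, illustrative sketch only, not the authors' implementation: a metaheuristic is represented as a directed acyclic graph whose nodes are search operators, and the graph has a compact flattened "embedding" that a design-level search could manipulate. All operator choices, class names, and the selection scheme here are assumptions made for illustration.

```python
# Illustrative sketch (not AutoOpt's actual code): a metaheuristic as a
# DAG of search operators, plus a flat embedding of that graph.
import random

def gaussian_mutation(pop, sigma=0.1):
    """Perturb every coordinate of every individual with Gaussian noise."""
    return [[x + random.gauss(0, sigma) for x in ind] for ind in pop]

def uniform_crossover(pop):
    """Mix each individual with its neighbor, coordinate by coordinate."""
    return [[x if random.random() < 0.5 else y for x, y in zip(a, b)]
            for a, b in zip(pop, pop[1:] + pop[:1])]

OPERATORS = [gaussian_mutation, uniform_crossover]

class AlgorithmDAG:
    """A metaheuristic as a DAG: nodes index into OPERATORS, and an edge
    (s, d) with s < d feeds node s's output population into node d, so
    the graph is acyclic by construction."""

    def __init__(self, nodes, edges):
        self.nodes = nodes  # e.g. [0, 1]
        self.edges = edges  # e.g. [(0, 1)]

    def run(self, pop, fitness, iters=30):
        n = len(pop)
        for _ in range(iters):
            outputs = {-1: pop}  # pseudo-node -1 holds the current population
            for i, op_idx in enumerate(self.nodes):
                preds = [s for s, d in self.edges if d == i] or [-1]
                merged = [ind for p in preds for ind in outputs[p]]
                outputs[i] = OPERATORS[op_idx](merged)
            # Elitist selection over the old population plus the DAG's output.
            candidates = pop + outputs[len(self.nodes) - 1]
            pop = sorted(candidates, key=fitness)[:n]
        return pop

    def embed(self):
        """Compact vector form of the graph (nodes, then flattened edges),
        the kind of representation a designer can mutate directly."""
        return list(self.nodes) + [v for e in self.edges for v in e]
```

For example, a two-node chain (mutation feeding crossover) minimizing a sphere function never worsens its best individual under the elitist selection, and `embed()` yields a flat vector that a design-level search could mutate in place of the graph itself.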
Related papers
- LLaMEA: A Large Language Model Evolutionary Algorithm for Automatically Generating Metaheuristics
This paper introduces a novel Large Language Model Evolutionary Algorithm (LLaMEA) framework.
Given a set of criteria and a task definition (the search space), LLaMEA iteratively generates, mutates and selects algorithms.
We show how this framework can be used to generate novel black-box metaheuristic optimization algorithms automatically.
arXiv Detail & Related papers (2024-05-30T15:10:59Z) - Automated Metaheuristic Algorithm Design with Autoregressive Learning
Current automated methods design algorithms within a fixed structure and operate from scratch.
This paper proposes an autoregressive learning-based designer for automated design of metaheuristic algorithms.
arXiv Detail & Related papers (2024-05-06T12:36:17Z) - Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution
We propose a deep reinforcement learning-based dynamic algorithm selection framework to accomplish this task.
We employ a sophisticated deep neural network model to infer the optimal action, ensuring informed algorithm selections.
As a proof-of-principle study, we apply this framework to a group of Differential Evolution algorithms.
arXiv Detail & Related papers (2024-03-04T15:40:28Z) - Automated Design of Metaheuristic Algorithms: A Survey
This paper presents a broad picture of automated design of metaheuristic algorithms.
With computing power to fully explore potential design choices, the automated design could reach and even surpass human-level design.
arXiv Detail & Related papers (2023-03-12T01:20:49Z) - Backpropagation of Unrolled Solvers with Folded Optimization
The integration of constrained optimization models as components in deep networks has led to promising advances on many specialized learning tasks.
One typical strategy is algorithm unrolling, which relies on automatic differentiation through the operations of an iterative solver.
This paper provides theoretical insights into the backward pass of unrolled optimization, leading to a system for generating efficiently solvable analytical models of backpropagation.
arXiv Detail & Related papers (2023-01-28T01:50:42Z) - A distributed, plug-n-play algorithm for multi-robot applications with a priori non-computable objective functions
In multi-robot applications, the user-defined objectives of the mission can be cast as a general optimization problem.
Standard gradient-descent-like algorithms are not applicable to these problems.
We introduce a new algorithm that carefully designs each robot's subcost function, the optimization of which can accomplish the overall team objective.
arXiv Detail & Related papers (2021-11-14T20:40:00Z) - Algorithm Selection on a Meta Level
We introduce the problem of meta algorithm selection, which essentially asks for the best way to combine a given set of algorithm selectors.
We present a general methodological framework for meta algorithm selection as well as several concrete learning methods as instantiations of this framework.
arXiv Detail & Related papers (2021-07-20T11:23:21Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms
We prove that the generalization error of an optimization algorithm can be bounded by the complexity of the fractal structure that underlies its generalization measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Towards Optimally Efficient Tree Search with Deep Learning
This paper investigates the classical integer least-squares problem, which estimates integer signals from linear models.
The problem is NP-hard and often arises in diverse applications such as signal processing, bioinformatics, communications and machine learning.
We propose a general hyper-accelerated tree search (HATS) algorithm that employs a deep neural network to estimate the optimal heuristic for the underlying simplified memory-bounded A* algorithm.
arXiv Detail & Related papers (2021-01-07T08:00:02Z) - Generalization in portfolio-based algorithm selection
We provide the first provable guarantees for portfolio-based algorithm selection.
We show that if the portfolio is large, overfitting is inevitable, even with an extremely simple algorithm selector.
arXiv Detail & Related papers (2020-12-24T16:33:17Z) - Extreme Algorithm Selection With Dyadic Feature Representation
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.