ClustOpt: A Clustering-based Approach for Representing and Visualizing the Search Dynamics of Numerical Metaheuristic Optimization Algorithms
- URL: http://arxiv.org/abs/2507.02337v1
- Date: Thu, 03 Jul 2025 06:01:02 GMT
- Title: ClustOpt: A Clustering-based Approach for Representing and Visualizing the Search Dynamics of Numerical Metaheuristic Optimization Algorithms
- Authors: Gjorgjina Cenikj, Gašper Petelin, Tome Eftimov
- Abstract summary: We propose a novel representation and visualization methodology that clusters solution candidates explored by the algorithm. We quantify the consistency of search trajectories across runs of an individual algorithm and the similarity between different algorithms. We apply this methodology to a set of ten numerical metaheuristic algorithms, revealing insights into their stability and comparative behaviors.
- Score: 4.740182373135037
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Understanding the behavior of numerical metaheuristic optimization algorithms is critical for advancing their development and application. Traditional visualization techniques, such as convergence plots, trajectory mapping, and fitness landscape analysis, often fall short in illustrating the structural dynamics of the search process, especially in high-dimensional or complex solution spaces. To address this, we propose a novel representation and visualization methodology that clusters solution candidates explored by the algorithm and tracks the evolution of cluster memberships across iterations, offering a dynamic and interpretable view of the search process. Additionally, we introduce two metrics, algorithm stability and algorithm similarity, to quantify the consistency of search trajectories across runs of an individual algorithm and the similarity between different algorithms, respectively. We apply this methodology to a set of ten numerical metaheuristic algorithms, revealing insights into their stability and comparative behaviors, thereby providing a deeper understanding of their search dynamics.
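The core idea of the abstract, pooling the candidate solutions an algorithm visits, clustering them once, and then tracking how each iteration's population distributes over the clusters, can be sketched in a few lines. The following is a minimal illustration only: the clustering method (a plain Lloyd's k-means), the cluster count, and all function names are illustrative assumptions, not the authors' implementation.

```python
import random

def sq_dist(a, b):
    # Squared Euclidean distance between two points.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def kmeans(points, k, iters=25, seed=0):
    """Plain Lloyd's k-means; returns the final centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign every point to its nearest centroid.
        groups = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sq_dist(p, centroids[c]))
            groups[j].append(p)
        # Recompute centroids; keep the old one if a cluster emptied.
        centroids = [
            tuple(sum(q[d] for q in g) / len(g) for d in range(len(g[0])))
            if g else centroids[j]
            for j, g in enumerate(groups)
        ]
    return centroids

def cluster_trajectory(history, k=3):
    """history: list of populations (lists of points), one per iteration.
    Pools all candidates, clusters them once, then returns the share of
    each iteration's population that falls into each cluster."""
    pooled = [p for pop in history for p in pop]
    centroids = kmeans(pooled, k)
    shares = []
    for pop in history:
        counts = [0] * k
        for p in pop:
            j = min(range(k), key=lambda c: sq_dist(p, centroids[c]))
            counts[j] += 1
        shares.append([c / len(pop) for c in counts])
    return shares

# Toy search history: the population drifts between three regions.
rng = random.Random(1)
history = [
    [(rng.gauss(cx, 0.2), rng.gauss(cx, 0.2)) for _ in range(15)]
    for cx in (0.0, 5.0, 10.0)
]
traj = cluster_trajectory(history, k=3)
for row in traj:
    print([round(s, 2) for s in row])
```

The resulting per-iteration membership matrix is one plausible substrate for the paper's two metrics: stability could be estimated by comparing such trajectories across independent runs of one algorithm, and similarity by comparing them across algorithms.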
Related papers
- Comparing Optimization Algorithms Through the Lens of Search Behavior Analysis [4.740182373135037]
We investigate the applicability of statistical tests for comparing algorithms based on their search behavior. We assess the solutions produced by 114 algorithms from the MEALPY library. These findings are incorporated into an empirical analysis aiming to identify algorithms with similar search behaviors.
arXiv Detail & Related papers (2025-07-02T12:51:27Z) - Graph-Supported Dynamic Algorithm Configuration for Multi-Objective Combinatorial Optimization [5.481047026874548]
This paper presents a novel graph neural network (GNN)-based DRL method to configure multi-objective evolutionary algorithms. We model the dynamic algorithm configuration as a Markov decision process, representing the convergence of solutions in the objective space by a graph. Experiments on diverse MOCO challenges indicate that our method outperforms traditional and DRL-based algorithm configuration methods in terms of efficacy and adaptability.
arXiv Detail & Related papers (2025-05-22T09:53:54Z) - A Survey of Meta-features Used for Automated Selection of Algorithms for Black-box Single-objective Continuous Optimization [4.173197621837912]
We conduct an overview of the key contributions to algorithm selection in the field of single-objective continuous black-box optimization.
We study machine learning models for automated algorithm selection, configuration, and performance prediction.
arXiv Detail & Related papers (2024-06-08T11:11:14Z) - Deep Reinforcement Learning for Dynamic Algorithm Selection: A Proof-of-Principle Study on Differential Evolution [27.607740475924448]
We propose a deep reinforcement learning-based dynamic algorithm selection framework to accomplish this task.
We employ a sophisticated deep neural network model to infer the optimal action, ensuring informed algorithm selections.
As a proof-of-principle study, we apply this framework to a group of Differential Evolution algorithms.
arXiv Detail & Related papers (2024-03-04T15:40:28Z) - The Algorithm Configuration Problem [0.8075866265341176]
This article focuses on optimizing parametrized algorithms for solving specific instances of decision/optimization problems.
We present a comprehensive framework that not only formalizes the Algorithm Configuration Problem, but also outlines different approaches for its resolution.
arXiv Detail & Related papers (2024-03-01T17:29:34Z) - Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z) - Neural Combinatorial Optimization: a New Player in the Field [69.23334811890919]
This paper presents a critical analysis on the incorporation of algorithms based on neural networks into the classical optimization framework.
A comprehensive study is carried out to analyse the fundamental aspects of such algorithms, including performance, transferability, computational cost, and generalization to larger-sized instances.
arXiv Detail & Related papers (2022-05-03T07:54:56Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - A survey on multi-objective hyperparameter optimization algorithms for Machine Learning [62.997667081978825]
This article presents a systematic survey of the literature published between 2014 and 2020 on multi-objective HPO algorithms.
We distinguish between metaheuristic-based algorithms, metamodel-based algorithms, and approaches using a mixture of both.
We also discuss the quality metrics used to compare multi-objective HPO procedures and present future research directions.
arXiv Detail & Related papers (2021-11-23T10:22:30Z) - Fractal Structure and Generalization Properties of Stochastic Optimization Algorithms [71.62575565990502]
We prove that the generalization error of an optimization algorithm can be bounded based on the 'complexity' of the fractal structure that underlies its invariant measure.
We further specialize our results to specific problems (e.g., linear/logistic regression, one-hidden-layer neural networks) and algorithms.
arXiv Detail & Related papers (2021-06-09T08:05:36Z) - Extreme Algorithm Selection With Dyadic Feature Representation [78.13985819417974]
We propose the setting of extreme algorithm selection (XAS) where we consider fixed sets of thousands of candidate algorithms.
We assess the applicability of state-of-the-art AS techniques to the XAS setting and propose approaches leveraging a dyadic feature representation.
arXiv Detail & Related papers (2020-01-29T09:40:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.