Online Cluster-Based Parameter Control for Metaheuristics
- URL: http://arxiv.org/abs/2504.05144v1
- Date: Mon, 07 Apr 2025 14:48:30 GMT
- Title: Online Cluster-Based Parameter Control for Metaheuristics
- Authors: Vasileios A. Tatsis, Dimos Ioannidis,
- Abstract summary: The present work proposes a general-purpose online parameter-tuning method called Cluster-Based Parameter Adaptation (CPA) for population-based metaheuristics. The main idea lies in identifying promising areas within the parameter search space and generating new parameters around these areas. The obtained results are statistically analyzed and compared with state-of-the-art algorithms, including advanced auto-tuning approaches.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Parameter setting is a crucial step in metaheuristics, since it can greatly affect their performance. It is also a complex and challenging task, as it requires a deep understanding of both the optimization algorithm and the optimization problem at hand. In recent years, the rise of autonomous decision systems has attracted sustained scientific interest in this direction, producing a considerable number of parameter-tuning methods. These methods fall into two types: offline and online. Online methods usually excel in complex real-world problems, as they can provide dynamic parameter control throughout the execution of the algorithm. The present work proposes a general-purpose online parameter-tuning method called Cluster-Based Parameter Adaptation (CPA) for population-based metaheuristics. The main idea lies in identifying promising areas within the parameter search space and generating new parameters around these areas. The method's validity has been demonstrated using the differential evolution algorithm and verified on established test suites of low- and high-dimensional problems. The obtained results are statistically analyzed and compared with state-of-the-art algorithms, including advanced auto-tuning approaches. The analysis reveals CPA's solid and promising performance, as well as its robustness across a variety of benchmark problems and dimensions.
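As a rough illustration of the core idea (cluster the parameter settings that recently produced improvements, then sample new settings around the cluster centers), the following Python sketch adapts differential evolution's F and CR online. Every detail here (k-means with two clusters, per-individual parameters, the Gaussian resampling width) is an illustrative assumption, not the authors' exact method:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def sphere(x):
    return float(np.sum(x**2))

dim, pop_size = 10, 30
pop = rng.uniform(-5, 5, (pop_size, dim))
fit = np.array([sphere(x) for x in pop])
# one (F, CR) pair per individual, adapted online
params = rng.uniform([0.1, 0.0], [1.0, 1.0], (pop_size, 2))

for gen in range(200):
    improved = np.zeros(pop_size, dtype=bool)
    for i in range(pop_size):
        F, CR = params[i]
        a, b, c = pop[rng.choice(pop_size, 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(dim) < CR
        cross[rng.integers(dim)] = True        # keep at least one mutant gene
        trial = np.where(cross, mutant, pop[i])
        f_trial = sphere(trial)
        if f_trial < fit[i]:
            pop[i], fit[i] = trial, f_trial
            improved[i] = True
    # cluster the parameter pairs that just succeeded, then resample the
    # unsuccessful individuals' parameters around the cluster centers
    good = params[improved]
    if len(good) >= 4:
        centers = KMeans(n_clusters=2, n_init=5).fit(good).cluster_centers_
        for i in np.where(~improved)[0]:
            center = centers[rng.integers(len(centers))]
            params[i] = np.clip(center + rng.normal(0, 0.1, 2),
                                [0.1, 0.0], [1.0, 1.0])

print("best fitness:", fit.min())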
Related papers
- Generalized Tensor-based Parameter-Efficient Fine-Tuning via Lie Group Transformations [50.010924231754856]
Adapting pre-trained foundation models for diverse downstream tasks is a core practice in artificial intelligence. To overcome the cost of full fine-tuning, parameter-efficient fine-tuning (PEFT) methods like LoRA have emerged and are becoming a growing research focus. We propose a generalization that extends matrix-based PEFT methods to higher-dimensional parameter spaces without compromising their structural properties.
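For context, LoRA's matrix-based update, which the paper generalizes to higher-order tensors, can be sketched in a few lines of numpy (shapes and scaling here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r = 64, 128, 4            # r << min(d_out, d_in)

W = rng.normal(size=(d_out, d_in))     # frozen pre-trained weight
B = np.zeros((d_out, r))               # trainable, initialised to zero
A = rng.normal(size=(r, d_in)) * 0.01  # trainable
alpha = 8.0                            # scaling hyperparameter

def lora_forward(x):
    # W is never updated; only the low-rank factors B and A are trained,
    # so the number of trainable parameters is r * (d_out + d_in)
    return x @ (W + (alpha / r) * (B @ A)).T

x = rng.normal(size=(2, d_in))
print(lora_forward(x).shape)  # (2, 64)
```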
arXiv Detail & Related papers (2025-04-01T14:36:45Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
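The OWA aggregation itself is straightforward; a minimal sketch follows (the paper's contribution, integrating such nondifferentiable objectives with parametric prediction, is the hard part and is not reproduced here):

```python
import numpy as np

def owa(values, weights):
    """Ordered Weighted Averaging: weights apply to the *sorted* values,
    so one weight vector can encode min, max, mean, or fairness-oriented
    aggregations in between."""
    v = np.sort(values)[::-1]          # descending order
    return float(np.dot(weights, v))

costs = np.array([3.0, 1.0, 4.0])
print(owa(costs, np.array([1.0, 0.0, 0.0])))  # 4.0 -> worst case (max)
print(owa(costs, np.ones(3) / 3))             # ~2.67 -> plain average
```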
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
- Ensemble Kalman Filtering Meets Gaussian Process SSM for Non-Mean-Field and Online Inference [47.460898983429374]
We introduce an ensemble Kalman filter (EnKF) into the non-mean-field (NMF) variational inference framework to approximate the posterior distribution of the latent states.
This novel marriage between EnKF and GPSSM not only eliminates the need for extensive parameterization in learning variational distributions, but also enables an interpretable, closed-form approximation of the evidence lower bound (ELBO).
We demonstrate that the resulting EnKF-aided online algorithm embodies a principled objective function by ensuring data-fitting accuracy while incorporating model regularizations to mitigate overfitting.
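For orientation, here is a generic stochastic-EnKF analysis step in numpy; the paper's contribution is embedding this inside the GPSSM variational framework, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(X, y, H, R):
    """One stochastic EnKF analysis step.
    X: (n, N) ensemble of N state members, y: (m,) observation,
    H: (m, n) observation operator, R: (m, m) observation noise cov."""
    n, N = X.shape
    A = X - X.mean(axis=1, keepdims=True)       # ensemble anomalies
    HA = H @ A
    Pxy = A @ HA.T / (N - 1)                    # cross-covariance
    Pyy = HA @ HA.T / (N - 1) + R               # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(
        np.zeros(len(y)), R, size=N).T          # perturbed observations
    return X + K @ (Y - H @ X)

X = rng.normal(size=(4, 50))                    # 4-dim state, 50 members
H = np.eye(2, 4)                                # observe first two coords
y = np.array([1.0, -1.0])
X_post = enkf_update(X, y, H, 0.1 * np.eye(2))
print(X_post.mean(axis=1))
```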
arXiv Detail & Related papers (2023-12-10T15:22:30Z)
- Hyperparameter Adaptive Search for Surrogate Optimization: A Self-Adjusting Approach [1.6317061277457001]
Surrogate optimization (SO) algorithms have shown promise for optimizing expensive black-box functions.
Our approach, Hyperparameter Adaptive Search for Surrogate Optimization (HASSO), identifies and modifies the most influential hyperparameters specific to each problem and SO algorithm.
Experimental results demonstrate the effectiveness of HASSO in enhancing the performance of various SO algorithms.
arXiv Detail & Related papers (2023-10-12T01:26:05Z)
- On the Effectiveness of Parameter-Efficient Fine-Tuning [79.6302606855302]
Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters shared across different tasks.
We show that all of the methods are actually sparse fine-tuned models and conduct a novel theoretical analysis of them.
Despite the effectiveness of sparsity grounded in our theory, how to choose the tunable parameters remains an open problem.
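A toy numpy illustration of the sparse-fine-tuning view: fix a mask of tunable parameters and zero every other update. The selection rule below (largest gradient magnitude) is just one heuristic, chosen for illustration; which rule to use is precisely the open choice the paper points to:

```python
import numpy as np

rng = np.random.default_rng(0)

w = rng.normal(size=1000)              # "pre-trained" parameters
grad = rng.normal(size=1000)           # gradient from the downstream task

k = 50                                 # tune only 5% of the parameters
mask = np.zeros_like(w, dtype=bool)
mask[np.argsort(np.abs(grad))[-k:]] = True   # heuristic: largest |grad|

lr = 0.1
w -= lr * np.where(mask, grad, 0.0)    # sparse update: the rest stay frozen
print("updated parameters:", mask.sum(), "of", w.size)
```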
arXiv Detail & Related papers (2022-11-28T17:41:48Z)
- Theory-inspired Parameter Control Benchmarks for Dynamic Algorithm Configuration [32.055812915031666]
We extend this benchmark by analyzing optimal control policies that can select the parameters only from a given portfolio of possible values.
We also show how to compute optimal parameter portfolios of a given size.
We demonstrate the usefulness of our benchmarks by analyzing the behavior of the DDQN reinforcement learning approach for dynamic algorithm configuration.
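On a toy reading of the portfolio question, assuming the per-state expected costs are known (for the actual benchmark these come from theory, not from random data as below), an optimal size-k portfolio can be found by exhaustive search:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

n_states, n_params, k = 20, 8, 3
# cost[s, p]: expected time spent in state s when using parameter p
cost = rng.uniform(1.0, 10.0, (n_states, n_params))

def portfolio_cost(portfolio):
    # the optimal control policy restricted to the portfolio picks, in
    # each state, the cheapest parameter the portfolio offers
    return cost[:, list(portfolio)].min(axis=1).sum()

best = min(itertools.combinations(range(n_params), k), key=portfolio_cost)
print("optimal portfolio:", best, "expected cost:", portfolio_cost(best))
```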
arXiv Detail & Related papers (2022-02-07T15:00:30Z)
- Parameter Tuning Strategies for Metaheuristic Methods Applied to Discrete Optimization of Structural Design [0.0]
This paper presents several strategies to tune the parameters of metaheuristic methods for (discrete) design optimization of reinforced concrete (RC) structures.
A novel utility metric is proposed, based on the area under the average performance curve.
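One plausible reading of such a metric, stated here as an assumption rather than the paper's exact definition: normalize the averaged best-so-far curve and integrate it, so configurations that improve early score higher than ones that only reach the same final value late:

```python
import numpy as np

def utility(curves):
    """curves: (runs, evals) best-so-far objective values (minimisation).
    Returns the area under the normalised average performance curve."""
    avg = curves.mean(axis=0)
    lo, hi = avg.min(), avg.max()
    norm = (hi - avg) / (hi - lo + 1e-12)      # 1 = best seen, 0 = worst
    x = np.linspace(0.0, 1.0, curves.shape[1])
    # trapezoidal integration over the normalised evaluation budget
    return float(np.sum(0.5 * (norm[1:] + norm[:-1]) * np.diff(x)))

rng = np.random.default_rng(0)
curves = np.minimum.accumulate(rng.uniform(0, 1, (10, 100)), axis=1)
print(utility(curves))
```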
arXiv Detail & Related papers (2021-10-12T17:34:39Z)
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem, where A is a black-box optimization algorithm.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
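Here is a generic averaged zeroth-order gradient estimator of the kind described above; the expensive A-based bilevel objective is abstracted into a cheap stand-in function:

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(lam):
    # stand-in for the expensive objective: in HOZOG this would mean
    # running the black-box algorithm A with hyperparameters lam and
    # returning the resulting validation loss
    return np.sum((lam - 1.5) ** 2)

def zo_grad(f, lam, mu=1e-2, n_dirs=20):
    """Average zeroth-order gradient estimate from random directions."""
    g = np.zeros_like(lam)
    for _ in range(n_dirs):
        u = rng.normal(size=lam.shape)
        g += (f(lam + mu * u) - f(lam)) / mu * u
    return g / n_dirs

lam = np.zeros(3)                      # hyperparameters
for _ in range(200):
    lam -= 0.05 * zo_grad(objective, lam)
print(lam)                             # approaches [1.5, 1.5, 1.5]
```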
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
- Online hyperparameter optimization by real-time recurrent learning [57.01871583756586]
Our framework takes advantage of the analogy between hyperparameter optimization and parameter learning in recurrent neural networks (RNNs).
It adapts a well-studied family of online learning algorithms for RNNs to tune hyperparameters and network parameters simultaneously.
This procedure yields systematically better generalization performance compared to standard methods, at a fraction of the wall-clock time.
arXiv Detail & Related papers (2021-02-15T19:36:18Z)
- Learning adaptive differential evolution algorithm from optimization experiences by policy gradient [24.2122434523704]
This paper proposes a novel adaptive parameter control approach based on learning from the optimization experiences over a set of problems.
A reinforcement learning algorithm, named policy gradient, is applied to learn an agent that can adaptively provide the control parameters of the proposed differential evolution algorithm.
The proposed algorithm performs competitively against nine well-known evolutionary algorithms on the CEC'13 and CEC'17 test suites.
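A minimal policy-gradient (REINFORCE) controller for DE's scale factor F, with the reward, policy form, and update rule all assumed for illustration rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_generation(pop, fit, F, CR=0.9, f=lambda x: np.sum(x**2, axis=1)):
    """One DE/rand/1/bin generation; returns pop, fit, success rate."""
    n, d = pop.shape
    idx = np.array([rng.choice(n, 3, replace=False) for _ in range(n)])
    mutants = pop[idx[:, 0]] + F * (pop[idx[:, 1]] - pop[idx[:, 2]])
    trials = np.where(rng.random((n, d)) < CR, mutants, pop)
    t_fit = f(trials)
    better = t_fit < fit
    pop[better], fit[better] = trials[better], t_fit[better]
    return pop, fit, better.mean()

# Gaussian policy over F; REINFORCE with the success rate as reward
mu, sigma, lr = 0.5, 0.1, 0.01
pop = rng.uniform(-5, 5, (30, 10))
fit = np.sum(pop**2, axis=1)
for _ in range(100):
    F = float(np.clip(rng.normal(mu, sigma), 0.05, 1.0))
    pop, fit, reward = de_generation(pop, fit, F)
    # grad of log N(F; mu, sigma) w.r.t. mu is (F - mu) / sigma**2
    mu = float(np.clip(mu + lr * reward * (F - mu) / sigma**2, 0.1, 1.0))
print("best:", fit.min(), "learned mu:", mu)
```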
arXiv Detail & Related papers (2021-02-06T12:01:20Z)
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum (NISQ) devices.
We propose a strategy for training the ansatze used in variational quantum algorithms, which we call "Parameter-Efficient Circuit Training" (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
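A generic sketch of the subset-by-subset idea, with a classical toy cost standing in for the measured circuit expectation value; nothing here is quantum-specific or taken verbatim from PECT:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n_params = 12
target = rng.uniform(-np.pi, np.pi, n_params)

def cost(theta):
    # stand-in for the expectation value of a parameterized circuit
    return float(np.sum(1 - np.cos(theta - target)))

theta = np.zeros(n_params)
block = 4                               # optimise 4 parameters at a time
for start in range(0, n_params, block):
    sel = slice(start, start + block)
    def sub_cost(x, sel=sel):
        t = theta.copy()
        t[sel] = x
        return cost(t)
    res = minimize(sub_cost, theta[sel], method="BFGS")
    theta[sel] = res.x                  # freeze the block, move on

print("final cost:", cost(theta))      # near 0
```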
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
- Online Parameter Estimation for Safety-Critical Systems with Gaussian Processes [6.122161391301866]
We present a Bayesian optimization framework based on Gaussian processes (GPs) for online parameter estimation.
It uses an efficient search strategy over a response surface in the parameter space for finding the global optima with minimal function evaluations.
We demonstrate our technique on an actuated planar pendulum and safety-critical quadrotor in simulation with changing parameters.
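A minimal GP-guided search for a single unknown parameter, using scikit-learn and a lower-confidence-bound rule; the paper's response surface, acquisition strategy, and safety-critical machinery are not reproduced here:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
true_mass = 1.7

def residual(m):
    # stand-in for comparing predicted vs. observed system trajectories
    return (m - true_mass) ** 2 + 0.01 * rng.normal()

X = np.linspace(0.5, 3.0, 5).reshape(-1, 1)          # initial design
y = np.array([residual(m) for m in X.ravel()])

cand = np.linspace(0.5, 3.0, 200).reshape(-1, 1)
for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-2).fit(X, y)
    mean, std = gp.predict(cand, return_std=True)
    nxt = cand[np.argmin(mean - std)]                # lower confidence bound
    X = np.vstack([X, [nxt]])
    y = np.append(y, residual(nxt[0]))

print("estimated mass:", X[np.argmin(y), 0])
```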
arXiv Detail & Related papers (2020-02-18T20:38:00Z)