Review of Parameter Tuning Methods for Nature-Inspired Algorithms
- URL: http://arxiv.org/abs/2308.15965v1
- Date: Wed, 30 Aug 2023 11:41:39 GMT
- Title: Review of Parameter Tuning Methods for Nature-Inspired Algorithms
- Authors: Geethu Joy, Christian Huyck, Xin-She Yang
- Abstract summary: This chapter reviews some of the main methods for parameter tuning and highlights the important issues concerning the latest development in parameter tuning.
A few open problems are also discussed with some recommendations for future research.
- Score: 1.4042211166197214
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Almost all optimization algorithms have algorithm-dependent parameters, and
the setting of such parameter values can largely influence the behaviour of the
algorithm under consideration. Thus, proper parameter tuning should be carried
out to ensure the algorithm used for optimization may perform well and can be
sufficiently robust for solving different types of optimization problems. This
chapter reviews some of the main methods for parameter tuning and then
highlights the important issues concerning the latest development in parameter
tuning. A few open problems are also discussed with some recommendations for
future research.
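To make the notion of parameter tuning concrete, here is a minimal sketch (not taken from the chapter) that tunes the mutation step size of a simple (1+1) evolution strategy on the sphere function by averaging a few runs per candidate value; the function names, candidate grid, and budget are illustrative assumptions.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def one_plus_one_es(sigma, dim=10, budget=2000, seed=0):
    # (1+1) evolution strategy with Gaussian mutation of fixed step size sigma.
    rng = random.Random(seed)
    x = [rng.uniform(-5, 5) for _ in range(dim)]
    fx = sphere(x)
    for _ in range(budget):
        y = [v + rng.gauss(0, sigma) for v in x]
        fy = sphere(y)
        if fy <= fx:                 # keep the candidate if it is at least as good
            x, fx = y, fy
    return fx

# Tuning loop: average each candidate step size over a few seeds and keep the best.
candidates = [0.01, 0.05, 0.1, 0.5, 1.0]
scores = {s: sum(one_plus_one_es(s, seed=k) for k in range(5)) / 5 for s in candidates}
best = min(scores, key=scores.get)
print("best sigma:", best, "mean objective:", scores[best])
```

Averaging over several seeds is the simplest hedge against the stochasticity of nature-inspired algorithms; the more elaborate tuners reviewed in the chapter replace this exhaustive loop with model-based or adaptive search.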
Related papers
- Learning Joint Models of Prediction and Optimization [56.04498536842065]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by joint predictive models.
arXiv Detail & Related papers (2024-09-07T19:52:14Z)
- End-to-End Learning for Fair Multiobjective Optimization Under Uncertainty [55.04219793298687]
The Predict-Then-Optimize (PtO) paradigm in machine learning aims to maximize downstream decision quality.
This paper extends the PtO methodology to optimization problems with nondifferentiable Ordered Weighted Averaging (OWA) objectives.
It shows how optimization of OWA functions can be effectively integrated with parametric prediction for fair and robust optimization under uncertainty.
arXiv Detail & Related papers (2024-02-12T16:33:35Z)
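For readers unfamiliar with OWA, here is a small worked example of the Ordered Weighted Averaging aggregation itself; the weight vector and the reading of the values as per-party costs are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def owa(values, weights):
    # Ordered Weighted Averaging: sort the values in decreasing order,
    # then take the weighted sum with a fixed, non-increasing weight vector.
    v = np.sort(np.asarray(values, dtype=float))[::-1]
    return float(np.dot(np.asarray(weights, dtype=float), v))

# Decreasing weights put more emphasis on the worst-off (largest) costs,
# which is a common fairness-oriented choice.
costs = [0.9, 0.4, 0.7]
weights = [0.5, 0.3, 0.2]            # sums to 1, non-increasing
print(owa(costs, weights))           # 0.5*0.9 + 0.3*0.7 + 0.2*0.4 = 0.74
```

Because the weights are applied to the sorted values, OWA is piecewise linear but not differentiable everywhere, which is why integrating it with learned predictions requires the special treatment the paper describes.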
- Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization [59.386153202037086]
The Predict-Then-Optimize framework uses machine learning models to predict unknown parameters of an optimization problem from features before solving.
This approach can be inefficient and requires handcrafted, problem-specific rules for backpropagation through the optimization step.
This paper proposes an alternative method, in which optimal solutions are learned directly from the observable features by predictive models.
arXiv Detail & Related papers (2023-11-22T01:32:06Z)
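A toy sketch of the "learn optimal solutions directly from features" idea: the quadratic toy problem and the linear least-squares model below are stand-ins for the paper's learned proxies, chosen only so that the mapping from features to solutions is known in closed form.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parametric problem: for a feature z the task is min_x (x - 2*z)^2,
# so the optimal solution is x*(z) = 2*z.  We pretend the solver is expensive,
# solve it offline for training features, and learn the map z -> x* directly.
z_train = rng.uniform(-1, 1, size=(100, 1))
x_star = 2.0 * z_train                               # "solver" outputs, computed offline

# Joint predictive model: plain linear least squares from features to solutions.
Z = np.hstack([z_train, np.ones_like(z_train)])      # add a bias column
coef, *_ = np.linalg.lstsq(Z, x_star, rcond=None)

# At deployment the solution is read off the model; no solver and no
# backpropagation through an optimization step are needed.
z_new = np.array([[0.3], [-0.7]])
x_pred = np.hstack([z_new, np.ones_like(z_new)]) @ coef
print(x_pred.ravel())                                # approximately [0.6, -1.4]
```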
- Optimizing Large-Scale Hyperparameters via Automated Learning Algorithm [97.66038345864095]
We propose a new hyperparameter optimization method with zeroth-order hyper-gradients (HOZOG).
Specifically, we first formulate hyperparameter optimization as an A-based constrained optimization problem, where A denotes a black-box optimization algorithm.
Then, we use the average zeroth-order hyper-gradients to update the hyperparameters.
arXiv Detail & Related papers (2021-02-17T21:03:05Z)
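A minimal sketch of a zeroth-order (gradient-free) hyper-gradient update in the spirit of the summary above; the two-point difference estimator, the toy validation-loss surrogate, and the step size are assumptions, not the HOZOG algorithm itself.

```python
import numpy as np

def validation_loss(lam):
    # Toy stand-in for "train with hyperparameter lam, then measure validation loss";
    # the tuner only ever sees function evaluations, never an analytic gradient.
    return (lam - 0.1) ** 2 + 0.5

def zeroth_order_grad(f, lam, mu=1e-3, rng=None):
    # Two-point zeroth-order estimate of df/dlam from a random +/- perturbation.
    rng = rng or np.random.default_rng(0)
    u = rng.choice([-1.0, 1.0])
    return (f(lam + mu * u) - f(lam - mu * u)) / (2.0 * mu) * u

lam, step = 1.0, 0.05
rng = np.random.default_rng(1)
for _ in range(200):
    g = zeroth_order_grad(validation_loss, lam, rng=rng)
    lam -= step * g                    # gradient-free update of the hyperparameter
print(lam)                             # drifts towards the minimiser at 0.1
```

The appeal of zeroth-order hyper-gradients is exactly this: the inner training run can stay a black box, yet the outer loop still follows an approximate gradient rather than searching blindly.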
- Particle Swarm Optimization: Fundamental Study and its Application to Optimization and to Jetty Scheduling Problems [0.0]
The advantages of evolutionary algorithms with respect to traditional methods have been widely discussed in the literature.
While particle swarms share such advantages, they outperform evolutionary algorithms in that they have a lower computational cost and are easier to implement.
This paper does not intend to study their tuning; general-purpose settings are taken from previous studies, and virtually the same algorithm is used to optimize a variety of notably different problems.
arXiv Detail & Related papers (2021-01-25T02:06:30Z)
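A compact particle swarm sketch on the sphere function using commonly cited general-purpose settings (inertia w ≈ 0.72, acceleration coefficients c1 = c2 ≈ 1.49); these defaults and the test function are illustrative and are not taken from the paper.

```python
import random

def sphere(x):
    return sum(v * v for v in x)

def pso(dim=5, n_particles=20, iters=200, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # personal best positions
    pbest_f = [sphere(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]  # global best position and value
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = sphere(X[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = X[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = X[i][:], f
    return gbest_f

print(pso())   # approaches 0 on the sphere function
```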
- Optimizing Optimizers: Regret-optimal gradient descent algorithms [9.89901717499058]
We study the existence, uniqueness and consistency of regret-optimal algorithms.
By providing first-order optimality conditions for the control problem, we show that regret-optimal algorithms must satisfy a specific structure in their dynamics.
We present fast numerical methods for approximating them, generating optimization algorithms which directly optimize their long-term regret.
arXiv Detail & Related papers (2020-12-31T19:13:53Z)
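A heavily reduced illustration of "optimizing the optimizer for long-term regret": here the only design variable is a constant step size for gradient descent on a toy quadratic, and its cumulative regret is minimised by grid search; the paper's control-theoretic treatment is far more general.

```python
import numpy as np

def cumulative_regret(eta, T=50, x0=5.0):
    # Run gradient descent on f(x) = 0.5 * x^2 (optimal value 0) with step size eta
    # and accumulate f(x_t) - f(x*) over the horizon: the algorithm's regret.
    x, regret = x0, 0.0
    for _ in range(T):
        regret += 0.5 * x * x
        x -= eta * x                  # gradient of 0.5*x^2 is x
    return regret

# "Optimizing the optimizer": pick the step size with the smallest long-term regret.
grid = np.linspace(0.05, 1.95, 39)
best_eta = min(grid, key=cumulative_regret)
print(best_eta, cumulative_regret(best_eta))
```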
- Fast Perturbative Algorithm Configurators [0.0]
We prove a linear lower bound on the expected time to optimise any parameter tuning problem for ParamRLS and ParamILS.
We propose a harmonic mutation operator that tunes single-parameter algorithms in polylogarithmic time for unimodal and approximately unimodal parameter spaces.
An experimental analysis confirms the superiority of the approach in practice for a number of configuration scenarios.
arXiv Detail & Related papers (2020-07-07T10:48:32Z)
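One plausible reading of a harmonic mutation operator, sketched for a single integer parameter: step lengths are drawn with probability proportional to 1/k and applied in a random direction, which mixes many small steps with occasional large jumps. The toy unimodal cost function and the acceptance rule below are assumptions, not the paper's exact setup.

```python
import random

def harmonic_step(max_step, rng):
    # Sample a step length k in {1, ..., max_step} with probability proportional to 1/k.
    weights = [1.0 / k for k in range(1, max_step + 1)]
    return rng.choices(range(1, max_step + 1), weights=weights)[0]

def harmonic_mutate(value, low, high, rng):
    step = harmonic_step(high - low, rng) * rng.choice([-1, 1])   # random direction
    return min(max(value + step, low), high)                      # clamp to the range

def cost(p):
    return abs(p - 37)               # unimodal over the range, optimum at 37

rng = random.Random(0)
p, best = 1, cost(1)
for _ in range(300):
    q = harmonic_mutate(p, 1, 100, rng)
    if cost(q) <= best:              # simple hill-climbing acceptance
        p, best = q, cost(q)
print(p, best)                       # typically reaches the optimum 37 quickly
```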
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with a mini-batch size of $1$, constant first and second order moment parameters, and possibly unbounded optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
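For context, the stationarity measure referred to above is the gradient of the Moreau envelope; a brief note with the standard definition (not quoted from the paper):

```latex
% Moreau envelope of the objective \varphi with parameter \lambda > 0 (standard definition),
% together with the stationarity bound stated in the summary above.
\varphi_{\lambda}(x) \;=\; \min_{y}\Big\{ \varphi(y) + \tfrac{1}{2\lambda}\,\lVert y - x\rVert^{2} \Big\},
\qquad
\lVert \nabla \varphi_{\lambda}(x_t) \rVert \;=\; \tilde{\mathcal{O}}\!\big(t^{-1/4}\big).
```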
- MATE: A Model-based Algorithm Tuning Engine [2.4693304175649304]
We introduce a Model-based Algorithm Tuning Engine, namely MATE, where the parameters of an algorithm are represented as expressions of the features of a target optimisation problem.
We formulate the problem of finding the relationships between the parameters and the problem features as a symbolic regression problem, and we use genetic programming to extract these expressions.
For the evaluation, we apply our approach to the configuration of the (1+1) EA and RLS algorithms for the OneMax, LeadingOnes, BinValue and Jump optimisation problems.
arXiv Detail & Related papers (2020-04-27T12:50:48Z)
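A heavily simplified sketch of representing a parameter as an expression of a problem feature, in the spirit of MATE: candidate expressions for the mutation rate of a (1+1) EA on OneMax are compared as functions of the problem size n. The fixed expression set and the exhaustive comparison are stand-ins for MATE's genetic-programming search.

```python
import random

def one_max_runtime(n, mutation_rate, seed, cap=200000):
    # (1+1) EA on OneMax: flip each bit independently with the given rate,
    # accept the offspring if it is not worse; return the number of evaluations.
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    fx, evals = sum(x), 0
    while fx < n and evals < cap:        # cap is only a safety net for the sketch
        y = [b ^ (rng.random() < mutation_rate) for b in x]
        fy = sum(y)
        evals += 1
        if fy >= fx:
            x, fx = y, fy
    return evals

# Candidate symbolic expressions relating the parameter to the feature n.
expressions = {
    "1/n": lambda n: 1.0 / n,
    "2/n": lambda n: 2.0 / n,
    "1/sqrt(n)": lambda n: n ** -0.5,
}
scores = {name: sum(one_max_runtime(n, expr(n), seed)
                    for n in (20, 40) for seed in range(3)) / 6
          for name, expr in expressions.items()}
print(min(scores, key=scores.get), scores)
```

The point of the expression-based representation is that the tuned value transfers across instance sizes, instead of being a single constant fitted to one benchmark.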
- On Hyper-parameter Tuning for Stochastic Optimization Algorithms [28.88646928299302]
This paper proposes the first-ever algorithmic framework for tuning the hyper-parameters of stochastic optimization algorithms based on reinforcement learning.
The proposed framework can be used as a standard tool for hyper-parameter tuning in such algorithms.
arXiv Detail & Related papers (2020-03-04T12:29:12Z)
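A toy sketch of tuning as a sequential decision problem: an epsilon-greedy bandit repeatedly picks a candidate step size for a simple stochastic hill climber and updates its value estimate from the observed reward. The bandit stands in for the paper's reinforcement-learning framework and is not the proposed method.

```python
import random

def run_optimizer(sigma, rng, steps=100):
    # A simple stochastic hill climber on f(x) = x^2 whose performance depends on sigma.
    x = 10.0
    for _ in range(steps):
        y = x + rng.gauss(0, sigma)
        if y * y < x * x:
            x = y
    return x * x                      # final objective value (lower is better)

arms = [0.01, 0.1, 1.0, 5.0]          # candidate hyper-parameter values
value = [0.0] * len(arms)             # running average reward per arm (optimistic start)
count = [0] * len(arms)
rng = random.Random(0)

for episode in range(200):
    if rng.random() < 0.1:            # explore
        a = rng.randrange(len(arms))
    else:                             # exploit the best arm so far
        a = max(range(len(arms)), key=lambda i: value[i])
    reward = -run_optimizer(arms[a], rng)          # reward = negative final loss
    count[a] += 1
    value[a] += (reward - value[a]) / count[a]     # incremental mean update
print(arms[max(range(len(arms)), key=lambda i: value[i])])   # selected step size
```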
- Proximal Gradient Algorithm with Momentum and Flexible Parameter Restart for Nonconvex Optimization [73.38702974136102]
Various types of parameter restart schemes have been proposed for accelerated algorithms to facilitate their practical convergence.
In this paper, we propose a proximal gradient algorithm with momentum and flexible parameter restart for solving nonsmooth nonconvex problems.
arXiv Detail & Related papers (2020-02-26T16:06:27Z)
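To make the restart idea concrete, a small sketch of an accelerated proximal gradient method (FISTA-style momentum) with a function-value restart on an L1-regularised least-squares problem; this is a generic scheme under standard assumptions, not the specific restart criterion proposed in the paper.

```python
import numpy as np

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
A = rng.normal(size=(40, 20))
b = rng.normal(size=40)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the smooth part's gradient

def objective(x):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

x = np.zeros(20)
y, t = x.copy(), 1.0
f_prev = objective(x)
for _ in range(300):
    grad = A.T @ (A @ y - b)
    x_new = soft_threshold(y - grad / L, lam / L)    # proximal gradient step
    t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
    y = x_new + ((t - 1.0) / t_new) * (x_new - x)    # momentum (extrapolation) step
    f_new = objective(x_new)
    if f_new > f_prev:                  # restart: drop the momentum whenever the
        y, t_new = x_new.copy(), 1.0    # objective increases instead of decreasing
    x, t, f_prev = x_new, t_new, f_new
print(objective(x))
```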