Balancing exploration and exploitation phases in whale optimization
algorithm: an insightful and empirical analysis
- URL: http://arxiv.org/abs/2310.12155v1
- Date: Sun, 3 Sep 2023 19:15:34 GMT
- Title: Balancing exploration and exploitation phases in whale optimization
algorithm: an insightful and empirical analysis
- Authors: Aram M. Ahmed, Tarik A. Rashid, Bryar A. Hassan, Jaffer Majidpour,
Kaniaw A. Noori, Chnoor Maheadeen Rahman, Mohmad Hussein Abdalla, Shko M.
Qader, Noor Tayfor, Naufel B Mohammed
- Abstract summary: The whale optimization algorithm (WOA), a robust and well-recognized metaheuristic in the literature, proposes a novel scheme to achieve this balance.
This chapter attempts to empirically analyze the WOA algorithm in terms of its local and global search capabilities.
- Score: 4.0814527055582746
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The agents of any metaheuristic algorithm move in two modes, namely
exploration and exploitation. Obtaining robust results with any such algorithm
depends strongly on how these two modes are balanced. The whale optimization
algorithm (WOA), a robust and well-recognized metaheuristic in the literature,
proposes a novel scheme to achieve this balance and has shown superior results
on a wide range of applications. Moreover, in the previous chapter, an
equitable and fair performance evaluation of the algorithm was provided.
However, up to this point only the final results have been compared, which does
not explain how those results are obtained. Therefore, this chapter attempts to
empirically analyze the WOA algorithm in terms of its local and global search
capabilities, i.e., the ratio of the exploration and exploitation phases. To
achieve this objective, the dimension-wise diversity measurement is employed,
which statistically evaluates the population's convergence and diversity at
various stages of the optimization process.
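To make this concrete, the following is a minimal sketch (not the authors' code) of how such a ratio can be recorded: a simplified per-whale WOA update loop whose population diversity is measured each iteration as the mean absolute deviation from the per-dimension median, then normalized by the maximum observed diversity to yield exploration and exploitation percentages. All names and parameter defaults (sphere, woa_with_diversity, population size, bounds) are illustrative assumptions.

import numpy as np

def sphere(x):
    # Illustrative benchmark objective (an assumption, not taken from the paper).
    return float(np.sum(x ** 2))

def woa_with_diversity(obj, dim=10, n_whales=30, max_iter=200,
                       lb=-100.0, ub=100.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, size=(n_whales, dim))
    fitness = np.array([obj(x) for x in X])
    best = X[np.argmin(fitness)].copy()
    best_f = float(fitness.min())
    diversity = []

    for t in range(max_iter):
        a = 2.0 - t * (2.0 / max_iter)            # 'a' decreases linearly from 2 to 0
        for i in range(n_whales):
            r1, r2 = rng.random(), rng.random()
            A, C = 2.0 * a * r1 - a, 2.0 * r2
            if rng.random() < 0.5:
                if abs(A) < 1:                    # exploitation: encircle the best whale
                    X[i] = best - A * np.abs(C * best - X[i])
                else:                             # exploration: move relative to a random whale
                    x_rand = X[rng.integers(n_whales)]
                    X[i] = x_rand - A * np.abs(C * x_rand - X[i])
            else:                                 # exploitation: spiral update around the best whale
                l = rng.uniform(-1.0, 1.0)
                X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2.0 * np.pi * l) + best
            X[i] = np.clip(X[i], lb, ub)
            f = obj(X[i])
            if f < best_f:
                best, best_f = X[i].copy(), f

        # Dimension-wise diversity: mean absolute deviation of the population from
        # its per-dimension median, averaged over all dimensions.
        diversity.append(float(np.mean(np.abs(np.median(X, axis=0) - X))))

    div = np.asarray(diversity)
    xpl = 100.0 * div / div.max()                      # exploration percentage per iteration
    xpt = 100.0 * np.abs(div - div.max()) / div.max()  # exploitation percentage per iteration
    return best_f, xpl, xpt

if __name__ == "__main__":
    best_f, xpl, xpt = woa_with_diversity(sphere)
    print(f"best fitness: {best_f:.3e}")
    print(f"mean exploration: {xpl.mean():.1f}%  mean exploitation: {xpt.mean():.1f}%")

In this sketch, whether |A| exceeds 1 decides if a whale moves relative to a random peer (exploration) or toward the current best (exploitation); because 'a' shrinks linearly over the iterations, the search gradually shifts from exploration toward exploitation, and the diversity trace makes that shift measurable.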
Related papers
- Absolute Ranking: An Essential Normalization for Benchmarking Optimization Algorithms [0.0]
Evaluating performance across optimization algorithms on many problems presents a complex challenge due to the diversity of numerical scales involved.
This paper explores the problem extensively, making a compelling case to underscore the issue and conducting a thorough analysis of its root causes.
Building on this research, this paper introduces a new mathematical model called "absolute ranking" and a sampling-based computational method.
arXiv Detail & Related papers (2024-09-06T00:55:03Z)
- Evaluating Ensemble Methods for News Recommender Systems [50.90330146667386]
This paper demonstrates how ensemble methods can be used to combine many diverse state-of-the-art algorithms to achieve superior results on the Microsoft News dataset (MIND).
Our findings demonstrate that a combination of NRS algorithms can outperform individual algorithms, provided that the base learners are sufficiently diverse.
arXiv Detail & Related papers (2024-06-23T13:40:50Z)
- Model Uncertainty in Evolutionary Optimization and Bayesian Optimization: A Comparative Analysis [5.6787965501364335]
Black-box optimization problems are common in many real-world applications.
These problems require optimization through input-output interactions without access to internal workings.
Two widely used gradient-free optimization techniques are employed to address such challenges.
This paper aims to elucidate the similarities and differences in the utilization of model uncertainty between these two methods.
arXiv Detail & Related papers (2024-03-21T13:59:19Z)
- Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL [106.82295532402335]
Existing reinforcement learning algorithms suffer from computational intractability, strong statistical assumptions, and suboptimal sample complexity.
We provide the first computationally efficient algorithm that attains rate-optimal sample complexity with respect to the desired accuracy level.
Our algorithm, MusIK, combines systematic exploration with representation learning based on multi-step inverse kinematics.
arXiv Detail & Related papers (2023-04-12T14:51:47Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Empirical Evaluation of Project Scheduling Algorithms for Maximization of the Net Present Value [0.0]
This paper presents an empirical performance analysis of three project scheduling algorithms.
The selected algorithms are: Recursive Search (RS), Steepest Ascent Approach (SAA), and Hybrid Search (HS).
arXiv Detail & Related papers (2022-07-05T03:01:33Z)
- On the Convergence of Distributed Stochastic Bilevel Optimization Algorithms over a Network [55.56019538079826]
Bilevel optimization has been applied to a wide variety of machine learning models.
Most existing algorithms are restricted to the single-machine setting and are therefore incapable of handling distributed data.
We develop novel decentralized bilevel optimization algorithms based on a gradient tracking communication mechanism and two different gradients.
arXiv Detail & Related papers (2022-06-30T05:29:52Z)
- Balancing Exploration and Exploitation for Solving Large-scale Multiobjective Optimization via Attention Mechanism [18.852491892952514]
We propose a large-scale multiobjective optimization algorithm based on the attention mechanism, called LMOAM.
The attention mechanism will assign a unique weight to each decision variable, and LMOAM will use this weight to strike a balance between exploration and exploitation from the decision variable level.
arXiv Detail & Related papers (2022-05-20T09:45:49Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)
This list is automatically generated from the titles and abstracts of the papers on this site.