SHS: Scorpion Hunting Strategy Swarm Algorithm
- URL: http://arxiv.org/abs/2407.14202v2
- Date: Fri, 30 Aug 2024 06:15:59 GMT
- Title: SHS: Scorpion Hunting Strategy Swarm Algorithm
- Authors: Abhilash Singh, Seyed Muhammad Hossein Mousavi, Kumar Gaurav,
- Abstract summary: We introduce the Scorpion Hunting Strategy (SHS), a novel population-based, nature-inspired optimisation algorithm.
This work substantiates its effectiveness and versatility through rigorous benchmarking and real-world problem-solving scenarios.
- Score: 2.9863148950750737
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We introduce the Scorpion Hunting Strategy (SHS), a novel population-based, nature-inspired optimisation algorithm. This algorithm draws inspiration from the hunting strategy of scorpions, which identify, locate, and capture their prey using the alpha and beta vibration operators. These operators control the SHS algorithm's exploitation and exploration abilities. To formulate an optimisation method, we mathematically simulate these dynamic events and behaviours. We evaluate the effectiveness of the SHS algorithm on 20 benchmark functions (10 conventional and 10 CEC2020 functions), using both qualitative and quantitative analyses. Through a comparative analysis with 12 state-of-the-art meta-heuristic algorithms, we demonstrate that the proposed SHS algorithm yields exceptionally promising results. These findings are further supported by statistically significant results obtained through the Wilcoxon rank-sum test. Additionally, the ranking of SHS, as determined by the average rank derived from the Friedman test, positions it at the forefront when compared to other algorithms. Going beyond theoretical validation, we showcase the practical utility of the SHS algorithm by applying it to six distinct real-world optimisation tasks. These applications illustrate the algorithm's potential for addressing complex optimisation challenges. In summary, this work not only introduces the innovative SHS algorithm but also substantiates its effectiveness and versatility through rigorous benchmarking and real-world problem-solving scenarios.
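The abstract describes the alpha (exploitation) and beta (exploration) vibration operators only at a high level and does not give their update equations. The sketch below is therefore only a minimal illustration of a population-based optimiser of this general shape, under assumed update rules: an alpha move toward the best solution found so far, a beta random perturbation for exploration, and a schedule that shifts emphasis from exploration to exploitation over the run. The function name shs_like_optimise and all parameter choices are illustrative assumptions, not the authors' method.

```python
# Minimal sketch of an SHS-style population optimiser.
# The alpha/beta update rules below are assumptions for illustration;
# the paper's actual equations are not given in the abstract.
import numpy as np

def shs_like_optimise(objective, dim, bounds, pop_size=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))   # candidate "scorpions"
    fitness = np.apply_along_axis(objective, 1, pop)
    best = pop[fitness.argmin()].copy()               # best prey location so far
    best_f = fitness.min()

    for t in range(iters):
        alpha = t / iters        # exploitation weight grows over the run
        beta = 1.0 - alpha       # exploration weight shrinks over the run
        for i in range(pop_size):
            if rng.random() < alpha:
                # "Alpha vibration" (assumed form): move toward the best solution.
                cand = pop[i] + alpha * rng.random(dim) * (best - pop[i])
            else:
                # "Beta vibration" (assumed form): random walk around the current position.
                cand = pop[i] + beta * rng.normal(0.0, 0.1 * (hi - lo), size=dim)
            cand = np.clip(cand, lo, hi)
            f = objective(cand)
            if f < fitness[i]:                        # greedy replacement
                pop[i], fitness[i] = cand, f
                if f < best_f:
                    best, best_f = cand.copy(), f
    return best, best_f

# Example on the sphere function, one of the conventional benchmarks.
sphere = lambda x: float(np.sum(x ** 2))
x_star, f_star = shs_like_optimise(sphere, dim=10, bounds=(-5.0, 5.0))
print(f_star)
```

The statistical comparison the abstract mentions (pairwise Wilcoxon rank-sum tests and Friedman average ranks over repeated runs) can be carried out with standard SciPy routines; the arrays below are placeholder data, not results from the paper.

```python
# Hedged sketch of the reported significance testing, on placeholder data.
import numpy as np
from scipy.stats import ranksums, friedmanchisquare

rng = np.random.default_rng(1)
shs_runs = rng.normal(0.01, 0.005, size=30)    # hypothetical best fitness over 30 runs
other_runs = rng.normal(0.05, 0.010, size=30)  # hypothetical competitor results

stat, p = ranksums(shs_runs, other_runs)       # Wilcoxon rank-sum test
print("rank-sum p-value:", p)

# Friedman test across three hypothetical algorithms on 20 benchmark instances.
a, b, c = (rng.normal(m, 0.01, size=20) for m in (0.01, 0.03, 0.05))
print("Friedman p-value:", friedmanchisquare(a, b, c).pvalue)
```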
Related papers
- Balancing exploration and exploitation phases in whale optimization algorithm: an insightful and empirical analysis [4.0814527055582746]
The whale optimization algorithm (WOA), a robust and well-recognized metaheuristic in the literature, proposes a novel scheme to achieve this balance.
This chapter attempts to empirically analyze the WOA algorithm in terms of the local and global search capabilities.
arXiv Detail & Related papers (2023-09-03T19:15:34Z)
- GOOSE Algorithm: A Powerful Optimization Tool for Real-World Engineering Challenges and Beyond [4.939986309170004]
The GOOSE algorithm is benchmarked on 19 well-known test functions.
The proposed algorithm is tested on 10 modern benchmark functions.
The achieved findings attest to the proposed algorithm's superior performance.
arXiv Detail & Related papers (2023-07-19T19:14:25Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on minimizing the population loss, which are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Duck swarm algorithm: theory, numerical optimization, and applications [6.244015536594532]
A swarm intelligence-based optimization algorithm, named Duck Swarm Algorithm (DSA), is proposed in this study.
Two rules are modeled from the duck's food-finding and foraging behavior, which correspond to the exploration and exploitation phases of the proposed DSA.
Results show that DSA is a high-performance optimization method in terms of convergence speed and exploration-exploitation balance.
arXiv Detail & Related papers (2021-12-27T04:53:36Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Learning to Hash Robustly, with Guarantees [79.68057056103014]
In this paper, we design an NNS algorithm for the Hamming space that has worst-case guarantees essentially matching that of theoretical algorithms.
We evaluate the algorithm's ability to optimize for a given dataset both theoretically and practically.
Our algorithm achieves 1.8x and 2.1x better recall on the worst-performing queries for the MNIST and ImageNet datasets, respectively.
arXiv Detail & Related papers (2021-08-11T20:21:30Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Dynamic Cat Swarm Optimization Algorithm for Backboard Wiring Problem [0.9990687944474739]
This paper presents a powerful swarm intelligence meta-heuristic optimization algorithm called Dynamic Cat Swarm Optimization.
The proposed algorithm suggests a new method to provide a proper balance between these phases by modifying the selection scheme and the seeking mode of the algorithm.
Optimization results show the effectiveness of the proposed algorithm, which ranks first compared to several well-known algorithms available in the literature.
arXiv Detail & Related papers (2021-04-27T19:41:27Z)
- An Efficient Algorithm for Deep Stochastic Contextual Bandits [10.298368632706817]
In contextual bandit problems, an agent selects an action based on certain observed context to maximize the reward over iterations.
Recently, there have been a few studies using a deep neural network (DNN), trained by a gradient-based method, to predict the expected reward for an action.
arXiv Detail & Related papers (2021-04-12T16:34:43Z)
- Active Model Estimation in Markov Decision Processes [108.46146218973189]
We study the problem of efficient exploration in order to learn an accurate model of an environment, modeled as a Markov decision process (MDP).
We show that our Markov-based algorithm outperforms both our original algorithm and the maximum entropy algorithm in the small sample regime.
arXiv Detail & Related papers (2020-03-06T16:17:24Z)