ORS: A novel Olive Ridley Survival inspired Meta-heuristic Optimization Algorithm
- URL: http://arxiv.org/abs/2409.09210v2
- Date: Sat, 2 Nov 2024 08:10:43 GMT
- Title: ORS: A novel Olive Ridley Survival inspired Meta-heuristic Optimization Algorithm
- Authors: Niranjan Panigrahi, Sourav Kumar Bhoi, Debasis Mohapatra, Rashmi Ranjan Sahoo, Kshira Sagar Sahoo, Anil Mohapatra
- Abstract summary: Olive Ridley Survival (ORS) is a proposed meta-heuristic inspired by the survival challenges faced by hatchlings of the Olive Ridley sea turtle.
ORS has two major phases: hatchling survival under environmental factors and the impact of the movement trajectory on survival.
To validate the algorithm, fourteen mathematical benchmark functions from standard CEC test suites are evaluated and statistically tested.
- Score: 2.4343652794054487
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Meta-heuristic algorithm development has been a thrust area of research since its inception. In this paper, a novel meta-heuristic optimization algorithm, Olive Ridley Survival (ORS), is proposed, inspired by the survival challenges faced by hatchlings of the Olive Ridley sea turtle. A striking fact about the species' survival is that, of every one thousand Olive Ridley hatchlings that emerge from a nest, only one survives at sea due to various environmental and other factors. This fact acts as the backbone for developing the proposed algorithm. The algorithm has two major phases: hatchling survival under environmental factors and the impact of the movement trajectory on survival. The phases are mathematically modelled and implemented along with a suitable input representation and fitness function, and the algorithm is analysed theoretically. To validate the algorithm, fourteen mathematical benchmark functions from standard CEC test suites are evaluated and statistically tested. To study the efficacy of ORS on recent, more complex benchmarks, ten functions from the CEC-06-2019 suite are also evaluated. Further, three well-known engineering problems are solved with ORS and compared with other state-of-the-art meta-heuristics. Simulation results show that in many cases the proposed ORS algorithm outperforms some state-of-the-art meta-heuristic optimization algorithms, while sub-optimal behavior of ORS on some recent benchmark functions is also observed.
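The abstract outlines the structure of ORS (a population of hatchlings thinned by environmental survival pressure, followed by a movement-trajectory update) but does not give its update equations. The Python sketch below is therefore only a hypothetical illustration of such a two-phase, population-based metaheuristic: the function names (`ors_like_optimize`, `sphere`), the parameters (`survival_rate`, `step_scale`), and both update rules are assumptions made for illustration, not the authors' formulation.

```python
# Hypothetical sketch of a two-phase, population-based metaheuristic in the
# spirit of ORS as described in the abstract. The survival-pressure filtering,
# the trajectory update, and all parameter names are assumptions, not the
# authors' method.
import numpy as np

def sphere(x):
    """Simple benchmark objective (to be minimised)."""
    return float(np.sum(x ** 2))

def ors_like_optimize(objective, dim=10, bounds=(-5.0, 5.0),
                      pop_size=50, iters=200, survival_rate=0.5,
                      step_scale=0.1, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    # Initial "hatchling" population, scattered uniformly in the search space.
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([objective(p) for p in pop])
    best = pop[fitness.argmin()].copy()
    best_fit = fitness.min()

    for t in range(iters):
        # Phase 1 (assumed): environmental pressure -- only a fraction of the
        # population "survives"; the rest are re-seeded near surviving solutions.
        order = np.argsort(fitness)
        n_survive = max(1, int(survival_rate * pop_size))
        survivors = pop[order[:n_survive]]
        for i in order[n_survive:]:
            parent = survivors[rng.integers(n_survive)]
            pop[i] = np.clip(parent + rng.normal(0.0, step_scale * (hi - lo), dim),
                             lo, hi)

        # Phase 2 (assumed): movement trajectory -- each hatchling drifts toward
        # the current best solution with a decaying random step.
        decay = 1.0 - t / iters
        for i in range(pop_size):
            direction = best - pop[i]
            pop[i] = np.clip(pop[i] + rng.random(dim) * direction
                             + decay * rng.normal(0.0, step_scale, dim), lo, hi)

        # Re-evaluate and track the best solution found so far.
        fitness = np.array([objective(p) for p in pop])
        if fitness.min() < best_fit:
            best_fit = fitness.min()
            best = pop[fitness.argmin()].copy()
    return best, best_fit

if __name__ == "__main__":
    best, best_fit = ors_like_optimize(sphere)
    print("best fitness on the sphere function:", best_fit)
```

In an evaluation like the one described, an optimizer of this kind would be run repeatedly on each CEC benchmark function and the resulting best-fitness samples compared with those of other metaheuristics, typically with a non-parametric test such as the Wilcoxon rank-sum test.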
Related papers
- SHS: Scorpion Hunting Strategy Swarm Algorithm [2.9863148950750737]
We introduce the Scorpion Hunting Strategy (SHS), a novel population-based, nature-inspired optimisation algorithm.
This work substantiates its effectiveness and versatility through rigorous benchmarking and real-world problem-solving scenarios.
arXiv Detail & Related papers (2024-07-19T10:58:42Z)
- In Search of Excellence: SHOA as a Competitive Shrike Optimization Algorithm for Multimodal Problems [4.939986309170004]
The main inspiration for the proposed algorithm is taken from the swarming behaviours of shrike birds in nature.
The exploration and exploitation parts of the optimization are designed by modelling shrike breeding and food-searching behaviour.
This paper presents a mathematical model of SHOA for performing optimization.
arXiv Detail & Related papers (2024-07-09T11:19:37Z)
- Snail Homing and Mating Search Algorithm: A Novel Bio-Inspired Metaheuristic Algorithm [0.855200588098612]
The proposed SHMS algorithm is investigated by solving several unimodal and multimodal functions.
The real-world application of the SHMS algorithm is successfully demonstrated in the engineering design domain.
arXiv Detail & Related papers (2023-10-06T05:18:48Z)
- GOOSE Algorithm: A Powerful Optimization Tool for Real-World Engineering Challenges and Beyond [4.939986309170004]
The GOOSE algorithm is benchmarked on 19 well-known test functions.
The proposed algorithm is tested on 10 modern benchmark functions.
The findings attest to the proposed algorithm's superior performance.
arXiv Detail & Related papers (2023-07-19T19:14:25Z)
- Maximum-Likelihood Inverse Reinforcement Learning with Finite-Time Guarantees [56.848265937921354]
Inverse reinforcement learning (IRL) aims to recover the reward function and the associated optimal policy.
Many algorithms for IRL have an inherently nested structure.
We develop a novel single-loop algorithm for IRL that does not compromise reward estimation accuracy.
arXiv Detail & Related papers (2022-10-04T17:13:45Z)
- Algorithmic Foundations of Empirical X-risk Minimization [51.58884973792057]
This manuscript introduces a new optimization framework for machine learning and AI, named empirical X-risk minimization (EXM).
X-risk is a term introduced to represent a family of compositional measures or objectives.
arXiv Detail & Related papers (2022-06-01T12:22:56Z)
- Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z)
- A Simple Evolutionary Algorithm for Multi-modal Multi-objective Optimization [0.0]
We introduce a steady-state evolutionary algorithm for solving multi-modal, multi-objective optimization problems (MMOPs).
We report its performance on 21 MMOPs from widely used benchmark test suites, under a low computational budget of 1000 function evaluations.
arXiv Detail & Related papers (2022-01-18T03:31:11Z)
- Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
Using this framework, our analysis shows that these algorithms match the computational complexity of methods with access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z)
- Improving RNA Secondary Structure Design using Deep Reinforcement Learning [69.63971634605797]
We propose a new benchmark for applying reinforcement learning to RNA sequence design, in which the objective function is defined as the free energy of the sequence's secondary structure.
We present the ablation analysis carried out for these algorithms, along with graphs indicating each algorithm's performance across batches.
arXiv Detail & Related papers (2021-11-05T02:54:06Z)
- Reparameterized Variational Divergence Minimization for Stable Imitation [57.06909373038396]
We study the extent to which variations in the choice of probabilistic divergence may yield more performant imitation learning from observation (ILO) algorithms.
We contribute a reparameterization trick for adversarial imitation learning to alleviate the challenges of the promising $f$-divergence minimization framework.
Empirically, we demonstrate that our design choices allow for ILO algorithms that outperform baseline approaches and more closely match expert performance in low-dimensional continuous-control tasks.
arXiv Detail & Related papers (2020-06-18T19:04:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.