MRSO: Balancing Exploration and Exploitation through Modified Rat Swarm Optimization for Global Optimization
- URL: http://arxiv.org/abs/2410.03684v1
- Date: Fri, 20 Sep 2024 14:52:25 GMT
- Title: MRSO: Balancing Exploration and Exploitation through Modified Rat Swarm Optimization for Global Optimization
- Authors: Hemin Sardar Abdulla, Azad A. Ameen, Sarwar Ibrahim Saeed, Ismail Asaad Mohammed, Tarik A. Rashid
- Abstract summary: This study introduces Modified Rat Swarm (MRSO) to enhance the balance between exploration and exploitation.
MRSO incorporates unique modifications to improve search efficiency and durability, making it suitable for challenging engineering problems such as welded beam, pressure vessel, and gear train design.
In the CEC 2019 benchmarks, MRSO outperforms the standard RSO in six out of ten functions, demonstrating superior global search capabilities.
- Score: 3.7503163440313463
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rapid advancement of intelligent technology has led to the development of optimization algorithms that leverage natural behaviors to address complex issues. Among these, the Rat Swarm Optimizer (RSO), inspired by rats' social and behavioral characteristics, has demonstrated potential in various domains, although its convergence precision and exploration capabilities are limited. To address these shortcomings, this study introduces the Modified Rat Swarm Optimizer (MRSO), designed to enhance the balance between exploration and exploitation. MRSO incorporates unique modifications to improve search efficiency and durability, making it suitable for challenging engineering problems such as welded beam, pressure vessel, and gear train design. Extensive testing with classical benchmark functions shows that MRSO significantly improves performance, avoiding local optima and achieving higher accuracy in six out of nine multimodal functions and in all seven fixed-dimension multimodal functions. In the CEC 2019 benchmarks, MRSO outperforms the standard RSO in six out of ten functions, demonstrating superior global search capabilities. When applied to engineering design problems, MRSO consistently delivers better average results than RSO, proving its effectiveness. Additionally, we compared our approach with eight recent and well-known algorithms using both classical and CEC-2019 bench-marks. MRSO outperforms each of these algorithms, achieving superior results in six out of 23 classical benchmark functions and in four out of ten CEC-2019 benchmark functions. These results further demonstrate MRSO's significant contributions as a reliable and efficient tool for optimization tasks in engineering applications.
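Since the abstract builds on the standard RSO update loop, a compact sketch may help orient readers. It follows the commonly cited RSO update rules (a linearly decaying coefficient A, a random coefficient C, and moves relative to the best rat), but the function and parameter names, the bound clamping, and the benchmark function are illustrative assumptions; the MRSO modifications described in the paper are not reproduced here.

```python
import random

def rso_minimize(f, dim, bounds, pop_size=20, max_iter=100, seed=0):
    """Sketch of the standard Rat Swarm Optimizer (RSO) loop.

    Update rules (as commonly stated for RSO):
        A = R - t * (R / max_iter),  with R drawn from [1, 5]
        C drawn from [0, 2]
        P = A * x_i + C * (x_best - x_i)
        x_i(t+1) = |x_best - P|
    Clamping to the search bounds is an illustrative assumption.
    """
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    best = min(pop, key=f)[:]                # best rat found so far
    for t in range(max_iter):
        R = rng.uniform(1, 5)
        A = R - t * (R / max_iter)           # linearly decaying exploration weight
        for i, x in enumerate(pop):
            C = rng.uniform(0, 2)
            new = [abs(best[j] - (A * x[j] + C * (best[j] - x[j])))
                   for j in range(dim)]
            pop[i] = [min(max(v, lo), hi) for v in new]  # keep inside bounds
        cand = min(pop, key=f)
        if f(cand) < f(best):                # greedy update of the global best
            best = cand[:]
    return best, f(best)

# Usage on the sphere function, a standard unimodal benchmark:
sphere = lambda x: sum(v * v for v in x)
best, val = rso_minimize(sphere, dim=5, bounds=(-10, 10))
```

The decaying A drives the swarm from exploration toward exploitation; MRSO's contribution, per the abstract, is precisely in rebalancing this trade-off.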
Related papers
- EGSS: Entropy-guided Stepwise Scaling for Reliable Software Engineering [14.718324012970944]
Agentic Test-Time Scaling (TTS) has delivered state-of-the-art (SOTA) performance on complex software engineering tasks such as code generation and bug fixing.
We propose Entropy-Guided Stepwise Scaling (EGSS), a novel TTS framework that balances efficiency and effectiveness through entropy-guided adaptive search and robust test-suite augmentation.
arXiv Detail & Related papers (2026-02-05T03:02:54Z) - A multi-strategy improved gazelle optimization algorithm for solving numerical optimization and engineering applications [0.09320657506524145]
This paper proposes a multi-strategy improved gazelle optimization algorithm (MSIGOA).
Two adaptive parameter tuning strategies improve the applicability of the algorithm and promote a smoother optimization process.
The dominant population-based restart strategy enhances the algorithm's ability to escape from local optima and avoid premature convergence.
arXiv Detail & Related papers (2025-09-08T20:44:15Z) - A RankNet-Inspired Surrogate-Assisted Hybrid Metaheuristic for Expensive Coverage Optimization [5.757318591302855]
We propose a RankNet-Inspired Surrogate-assisted Hybrid Metaheuristic to handle large-scale coverage optimization tasks.
Our algorithm consistently outperforms state-of-the-art algorithms for EMVOPs.
arXiv Detail & Related papers (2025-01-13T14:49:05Z) - LLaMA-Berry: Pairwise Optimization for O1-like Olympiad-Level Mathematical Reasoning [56.273799410256075]
The framework combines Monte Carlo Tree Search (MCTS) with iterative Self-Refine to optimize the reasoning path.
The framework has been tested on general and advanced benchmarks, showing superior performance in terms of search efficiency and problem-solving capability.
arXiv Detail & Related papers (2024-10-03T18:12:29Z) - Combining Automated Optimisation of Hyperparameters and Reward Shape [7.407166175374958]
We propose a methodology for the combined optimisation of hyperparameters and the reward function.
We conducted extensive experiments using Proximal Policy optimisation and Soft Actor-Critic.
Our results show that combined optimisation significantly improves over baseline performance in half of the environments and achieves competitive performance in the others.
arXiv Detail & Related papers (2024-06-26T12:23:54Z) - Leveraging Large Language Model to Generate a Novel Metaheuristic Algorithm with CRISPE Framework [14.109964882720249]
We use the large language model (LLM) ChatGPT-3.5 to automatically and quickly design a new metaheuristic algorithm (MA) with only a small amount of input.
The novel animal-inspired MA named zoological search optimization (ZSO) draws inspiration from the collective behaviors of animals for solving continuous optimization problems.
In numerical experiments, we investigate the performance of ZSO-based algorithms on CEC2014 benchmark functions, CEC2022 benchmark functions, and six engineering optimization problems.
arXiv Detail & Related papers (2024-03-25T04:34:20Z) - Hyperparameter Adaptive Search for Surrogate Optimization: A Self-Adjusting Approach [1.6317061277457001]
Surrogate optimization (SO) algorithms have shown promise for optimizing expensive black-box functions.
Our approach identifies and modifies the most influential hyperparameters specific to each problem and SO approach.
Experimental results demonstrate the effectiveness of HASSO in enhancing the performance of various SO algorithms.
arXiv Detail & Related papers (2023-10-12T01:26:05Z) - Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for conditional stochastic optimization in the federated learning setting.
arXiv Detail & Related papers (2023-10-04T01:47:37Z) - Learning Performance-Improving Code Edits [107.21538852090208]
We introduce a framework for adapting large language models (LLMs) to high-level program optimization.
First, we curate a dataset of over 77,000 pairs of competitive C++ programming submissions, capturing performance-improving edits made by human programmers.
For prompting, we propose retrieval-based few-shot prompting and chain-of-thought; for finetuning, we use performance-conditioned generation and synthetic data augmentation based on self-play.
arXiv Detail & Related papers (2023-02-15T18:59:21Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network (NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - Effective Mutation Rate Adaptation through Group Elite Selection [50.88204196504888]
This paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm.
GESMR co-evolves a population of solutions and a population of MRs, such that each MR is assigned to a group of solutions.
With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches.
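Based only on the one-line description above (a population of mutation rates co-evolved with a population of solutions, one rate per group, with elite selection of the rates), one generation of such a loop might be sketched as follows. The Gaussian mutation model, the elite fraction, the log-uniform perturbation of rates, and all names are illustrative assumptions, and survivor selection of the solutions themselves is omitted.

```python
import random

def gesmr_step(pop, fitness, mrs, rng, elite_frac=0.5):
    """One generation of a GESMR-style loop (sketch, minimisation).

    The population is split into len(mrs) equal groups; group g mutates
    with rate mrs[g]. Rates are then selected by the best fitness
    improvement achieved in their group, and the survivors are perturbed.
    """
    k = len(mrs)
    group_size = len(pop) // k
    new_pop, gains = [], []
    for g in range(k):
        group = pop[g * group_size:(g + 1) * group_size]
        best_gain, mutated = 0.0, []
        for x in group:
            child = [v + rng.gauss(0, mrs[g]) for v in x]  # Gaussian mutation
            mutated.append(child)
            best_gain = max(best_gain, fitness(x) - fitness(child))
        new_pop.extend(mutated)
        gains.append(best_gain)
    # Group elite selection on the mutation rates: keep the top fraction,
    # then resample each rate from the elites with a log-uniform perturbation.
    order = sorted(range(k), key=lambda g: -gains[g])
    n_elite = max(1, int(k * elite_frac))
    elites = [mrs[order[i]] for i in range(n_elite)]
    new_mrs = [rng.choice(elites) * (2 ** rng.uniform(-1, 1)) for _ in range(k)]
    return new_pop, new_mrs
```

The extra fitness evaluations here are the same ones an ordinary mutation step would need, which is consistent with the "almost no overhead" claim in the blurb.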
arXiv Detail & Related papers (2022-04-11T01:08:26Z) - Evolving Pareto-Optimal Actor-Critic Algorithms for Generalizability and Stability [67.8426046908398]
Generalizability and stability are two key objectives for operating reinforcement learning (RL) agents in the real world.
This paper presents MetaPG, an evolutionary method for automated design of actor-critic loss functions.
arXiv Detail & Related papers (2022-04-08T20:46:16Z) - RSO: A Novel Reinforced Swarm Optimization Algorithm for Feature Selection [0.0]
In this paper, we propose a novel feature selection algorithm named Reinforced Swarm Optimization (RSO).
This algorithm embeds the widely used Bee Swarm Optimization (BSO) algorithm along with Reinforcement Learning (RL) to maximize the reward of a superior search agent and punish the inferior ones.
The proposed method is evaluated on 25 widely known UCI datasets containing a perfect blend of balanced and imbalanced data.
arXiv Detail & Related papers (2021-07-29T17:38:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.