Battle royale optimizer with a new movement strategy
- URL: http://arxiv.org/abs/2203.09889v1
- Date: Wed, 19 Jan 2022 16:36:13 GMT
- Title: Battle royale optimizer with a new movement strategy
- Authors: Sara Akan, Taymaz Akan
- Abstract summary: This paper proposes a modified BRO (M-BRO) in order to improve balance between exploration and exploitation.
The complexity of this modified algorithm is the same as the original one.
The results show that BRO with the additional movement operator performs well in solving complex numerical optimization problems.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Game-based is a new stochastic metaheuristic optimization category that is
inspired by traditional or digital game genres. Unlike SI-based algorithms,
individuals do not work together; instead, each tries to defeat the other individuals
and win the game. Battle royale optimizer (BRO) is a game-based
metaheuristic optimization algorithm that has recently been proposed for
continuous optimization problems. This paper proposes a modified BRO (M-BRO) in
order to improve the balance between exploration and exploitation. For this purpose,
an additional movement operator is used in the movement strategy.
Moreover, no extra parameters are required for the proposed approach.
Furthermore, the complexity of this modified algorithm is the same as the
original one. Experiments are performed on a set of 19 (unimodal and
multimodal) benchmark functions (CEC 2010). The proposed method has been
compared with the original BRO alongside six well-known/recently proposed
optimization algorithms. The results show that BRO with additional movement
operator performs well to solve complex numerical optimization problems
compared to the original BRO and other competitors.
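
To make the movement strategy concrete, the sketch below shows one BRO-style movement step extended with a second movement operator. It is a minimal illustration under simplifying assumptions (real-valued minimization, nearest-neighbour battles, a random blend toward the current best, and a hypothetical extra pull toward the local winner); the exact additional operator of M-BRO is the one defined in the paper, not this code.

```python
import numpy as np

def bro_like_movement(population, fitness, lower, upper, rng=np.random.default_rng()):
    """One illustrative BRO-style movement step with an extra movement operator.

    population: (n, d) array of candidate solutions, fitness: (n,) array
    (minimization). The extra operator here is an assumed second interpolation,
    not the exact M-BRO operator from the paper.
    """
    best = population[np.argmin(fitness)]
    new_pop = population.copy()
    for i, x in enumerate(population):
        # Pair each soldier with its nearest neighbour (Euclidean distance).
        dist = np.linalg.norm(population - x, axis=1)
        dist[i] = np.inf
        j = int(np.argmin(dist))
        if fitness[i] > fitness[j]:  # soldier i loses the local battle
            # Original-style move: random blend toward the current best solution.
            moved = x + rng.random(x.shape) * (best - x)
            # Assumed additional movement operator: a second pull toward the winner.
            moved = moved + rng.random(x.shape) * (population[j] - moved)
            new_pop[i] = np.clip(moved, lower, upper)
    return new_pop
```

For example, `bro_like_movement(pop, f(pop), -100.0, 100.0)` (with `f` a hypothetical benchmark evaluated over the population) would produce the next population on a box-constrained problem; damage counters, respawning, and the shrinking search space of the original BRO are omitted here.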
Related papers
- BMR and BWR: Two simple metaphor-free optimization algorithms for solving real-life non-convex constrained and unconstrained problems [0.5755004576310334]
Two simple yet powerful optimization algorithms, named the Best-Mean-Random (BMR) and Best-Worst-Random (BWR) algorithms, are developed and presented in this paper.
arXiv Detail & Related papers (2024-07-15T18:11:47Z)
- Efficient Multiplayer Battle Game Optimizer for Adversarial Robust Neural Architecture Search [14.109964882720249]
This paper introduces a novel metaheuristic algorithm, known as the efficient multiplayer battle game optimizer (EMBGO).
The motivation behind this research stems from the need to rectify identified shortcomings in the original MBGO.
EMBGO mitigates these limitations by integrating the movement and battle phases to simplify the original optimization framework and improve search efficiency.
arXiv Detail & Related papers (2024-03-15T08:45:32Z)
- Bidirectional Looking with A Novel Double Exponential Moving Average to Adaptive and Non-adaptive Momentum Optimizers [109.52244418498974]
We propose a novel Admeta (A Double exponential Moving averagE Adaptive and non-adaptive momentum) framework.
We provide two implementations, AdmetaR and AdmetaS, the former based on RAdam and the latter based on SGDM.
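
As a rough illustration of the double exponential moving average idea behind this framework (the update below is a hand-written SGDM-style sketch, not the paper's AdmetaR/AdmetaS rules, and the coefficient names are assumptions):

```python
import numpy as np

def double_ema_momentum_step(params, grad, state, lr=0.01, beta1=0.9, beta2=0.9):
    """SGDM-style step where the momentum itself is smoothed by a second EMA.

    Illustrative only: the actual AdmetaR/AdmetaS updates in the paper differ.
    """
    m = state.get("m", np.zeros_like(params))  # first EMA: classic momentum buffer
    v = state.get("v", np.zeros_like(params))  # second EMA: smooths the momentum
    m = beta1 * m + (1.0 - beta1) * grad
    v = beta2 * v + (1.0 - beta2) * m
    state["m"], state["v"] = m, v
    return params - lr * v  # descend along the doubly smoothed direction
```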
arXiv Detail & Related papers (2023-07-02T18:16:06Z)
- Optimal algorithms for group distributionally robust optimization and beyond [48.693477387133484]
We devise algorithms for a class of DRO problems including group DRO, subpopulation fairness, and empirical conditional value at risk.
Our new algorithms achieve faster convergence rates than existing algorithms for multiple DRO settings.
Empirically, too, our algorithms outperform known methods.
arXiv Detail & Related papers (2022-12-28T02:45:46Z)
- LAB: A Leader-Advocate-Believer Based Optimization Algorithm [9.525324619018983]
This manuscript introduces a new socio-inspired metaheuristic technique referred to as the Leader-Advocate-Believer based optimization algorithm (LAB).
The proposed algorithm is inspired by the AI-based competitive behaviour exhibited by the individuals in a group while simultaneously improving themselves and establishing a role (Leader, Advocate, Believer).
arXiv Detail & Related papers (2022-04-23T10:58:58Z)
- Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z)
- Minimax Optimization with Smooth Algorithmic Adversaries [59.47122537182611]
We propose a new algorithm for the min-player against smooth algorithms deployed by an adversary.
Our algorithm is guaranteed to make monotonic progress (having no limit cycles) and to find an appropriate stationary point in a polynomial number of gradient ascent steps.
arXiv Detail & Related papers (2021-06-02T22:03:36Z)
- Portfolio Search and Optimization for General Strategy Game-Playing [58.896302717975445]
We propose a new algorithm for optimization and action-selection based on the Rolling Horizon Evolutionary Algorithm.
For the optimization of the agents' parameters and portfolio sets, we study the use of the N-tuple Bandit Evolutionary Algorithm.
An analysis of the agents' performance shows that the proposed algorithm generalizes well to all game-modes and is able to outperform other portfolio methods.
arXiv Detail & Related papers (2021-04-21T09:28:28Z)
- Efficient Pure Exploration for Combinatorial Bandits with Semi-Bandit Feedback [51.21673420940346]
Combinatorial bandits generalize multi-armed bandits, where the agent chooses sets of arms and observes a noisy reward for each arm contained in the chosen set.
We focus on the pure-exploration problem of identifying the best arm with fixed confidence, as well as a more general setting, where the structure of the answer set differs from the one of the action set.
Based on a projection-free online learning algorithm for finite polytopes, it is the first computationally efficient algorithm which is asymptotically optimal and has competitive empirical performance.
arXiv Detail & Related papers (2021-01-21T10:35:09Z)
- Batch Sequential Adaptive Designs for Global Optimization [5.825138898746968]
Efficient global optimization (EGO) is one of the most popular SAD methods for expensive black-box optimization problems.
For multiple-point EGO methods, heavy computation and point clustering are the main obstacles.
In this work, a novel batch SAD method, named "accelerated EGO", is put forward using a refined sampling/importance resampling (SIR) method.
The efficiency of the proposed SAD is validated by nine classic test functions with dimension from 2 to 12.
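
For intuition only, a minimal sketch of batch selection via sampling/importance resampling (SIR) is shown below; the candidate proposal, the use of expected improvement as the resampling weight, and the function name are assumptions, and the paper's refined SIR procedure is more elaborate.

```python
import numpy as np

def sir_select_batch(candidates, ei_values, batch_size, rng=np.random.default_rng()):
    """Resample a batch of follow-up design points with probability proportional
    to their expected-improvement (EI) weights (illustrative SIR sketch)."""
    weights = np.clip(np.asarray(ei_values, dtype=float), 1e-12, None)
    probs = weights / weights.sum()
    idx = rng.choice(len(candidates), size=batch_size, replace=False, p=probs)
    return np.asarray(candidates)[idx]
```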
arXiv Detail & Related papers (2020-10-21T01:11:35Z)
- Towards Dynamic Algorithm Selection for Numerical Black-Box Optimization: Investigating BBOB as a Use Case [4.33419118449588]
We show that even single-switch dynamic algorithm selection (dynAS) can potentially result in significant performance gains.
We also discuss key challenges in dynAS, and argue that the BBOB-framework can become a useful tool in overcoming these.
arXiv Detail & Related papers (2020-06-11T16:36:11Z)