LAB: A Leader-Advocate-Believer Based Optimization Algorithm
- URL: http://arxiv.org/abs/2204.11049v1
- Date: Sat, 23 Apr 2022 10:58:58 GMT
- Title: LAB: A Leader-Advocate-Believer Based Optimization Algorithm
- Authors: Ruturaj Reddy, Anand J Kulkarni, Ganesh Krishnasamy, Apoorva S
Shastri, Amir H. Gandomi
- Abstract summary: This manuscript introduces a new socio-inspired metaheuristic technique referred to as the Leader-Advocate-Believer based optimization algorithm (LAB).
The proposed algorithm is inspired by the AI-based competitive behaviour exhibited by individuals in a group as they simultaneously improve themselves and establish a role (Leader, Advocate, Believer).
- Score: 9.525324619018983
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This manuscript introduces a new socio-inspired metaheuristic technique
referred to as Leader-Advocate-Believer based optimization algorithm (LAB) for
engineering and global optimization problems. The proposed algorithm is
inspired by the AI-based competitive behaviour exhibited by the individuals in
a group while simultaneously improving themselves and establishing a role
(Leader, Advocate, Believer). LAB performance in computational time and
function evaluations is benchmarked against other metaheuristic algorithms.
Besides benchmark problems, the LAB algorithm was applied for solving
challenging engineering problems, including abrasive water jet machining,
electric discharge machining, micro-machining processes, and process parameter
optimization for turning titanium alloy in a minimum quantity lubrication
environment. The results were superior to the other algorithms compared such as
Firefly Algorithm, Variations of Co-hort Intelligence, Genetic Algorithm,
Simulated Annealing, Particle Swarm Optimisation, and Multi-Cohort
Intelligence. The results from this study highlighted that the LAB outperforms
the other algorithms in terms of function evaluations and computational time.
The prominent features of the LAB algorithm along with its limitations are also
discussed.
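The abstract describes only the high-level social metaphor behind LAB, so the following is a minimal, illustrative Python sketch of a role-based population search in that spirit. The specific update rules (Believers moving toward a random Advocate, Advocates moving toward the Leader, the Leader making a small local perturbation, greedy acceptance) and all parameter values are assumptions for illustration, not the published LAB equations.

```python
import numpy as np


def sphere(x):
    """Toy objective (minimisation); replace with any benchmark or process model."""
    return float(np.sum(x ** 2))


def lab_style_optimize(f, dim=5, pop_size=30, iters=200, bounds=(-5.0, 5.0), seed=0):
    """Role-based population search loosely inspired by the LAB metaphor.

    Each iteration the population is ranked: the best individual acts as the
    Leader, the next few as Advocates, and the rest as Believers.  The update
    rules below (Believers track a random Advocate, Advocates track the Leader,
    the Leader perturbs itself locally, greedy acceptance) are illustrative
    assumptions, not the published LAB equations.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fitness = np.array([f(x) for x in pop])
    n_advocates = max(2, pop_size // 5)

    for _ in range(iters):
        order = np.argsort(fitness)                 # rank individuals, best first
        leader = pop[order[0]]
        advocates = pop[order[1:1 + n_advocates]]

        new_pop = np.empty_like(pop)
        for rank, idx in enumerate(order):
            x = pop[idx]
            if rank == 0:                           # Leader: small local perturbation
                new_pop[idx] = x + 0.1 * (hi - lo) * rng.standard_normal(dim)
            elif rank <= n_advocates:               # Advocate: move toward the Leader
                new_pop[idx] = x + rng.random(dim) * (leader - x)
            else:                                   # Believer: move toward a random Advocate
                target = advocates[rng.integers(len(advocates))]
                new_pop[idx] = x + rng.random(dim) * (target - x)

        new_pop = np.clip(new_pop, lo, hi)
        new_fitness = np.array([f(x) for x in new_pop])
        improved = new_fitness < fitness            # greedy acceptance of improving moves
        pop[improved], fitness[improved] = new_pop[improved], new_fitness[improved]

    best = int(np.argmin(fitness))
    return pop[best], fitness[best]


if __name__ == "__main__":
    x_best, f_best = lab_style_optimize(sphere)
    print("best solution:", np.round(x_best, 4), "objective:", f_best)
```

On the toy sphere function the sketch converges toward the origin; any of the benchmark or machining objectives mentioned above could be substituted for `sphere`, given suitable bounds.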
Related papers
- Reinforced In-Context Black-Box Optimization [64.25546325063272]
RIBBO is a method to reinforcement-learn a BBO algorithm from offline data in an end-to-end fashion.
RIBBO employs expressive sequence models to learn the optimization histories produced by multiple behavior algorithms and tasks.
Central to our method is to augment the optimization histories with regret-to-go tokens, which are designed to represent the performance of an algorithm based on cumulative regret over the future part of the histories (a minimal sketch of such tokens appears after this list).
arXiv Detail & Related papers (2024-02-27T11:32:14Z) - GOOSE Algorithm: A Powerful Optimization Tool for Real-World Engineering Challenges and Beyond [4.939986309170004]
The GOOSE algorithm is benchmarked on 19 well-known test functions.
The proposed algorithm is tested on 10 modern benchmark functions.
The achieved findings attest to the proposed algorithm's superior performance.
arXiv Detail & Related papers (2023-07-19T19:14:25Z) - Hybrid ACO-CI Algorithm for Beam Design problems [0.4397520291340694]
A novel hybrid version of the Ant colony optimization (ACO) method is developed using the sample space reduction technique of the Cohort Intelligence (CI) algorithm.
The proposed work could be investigated for real-world applications spanning engineering and health-care domains.
arXiv Detail & Related papers (2023-03-29T04:37:14Z) - Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z) - A socio-physics based hybrid metaheuristic for solving complex
non-convex constrained optimization problems [0.19662978733004596]
It is necessary to critically validate the proposed constrained optimization techniques.
The search is distinct in that it involves a large number of linear and non-linear equality and inequality constraints.
The first CI-based algorithm incorporates a self-adaptive penalty approach.
The second algorithm combines CI-SAPF with a physics-based metaheuristic.
arXiv Detail & Related papers (2022-09-02T07:46:46Z) - Amortized Implicit Differentiation for Stochastic Bilevel Optimization [53.12363770169761]
We study a class of algorithms for solving bilevel optimization problems in both deterministic and stochastic settings.
We exploit a warm-start strategy to amortize the estimation of the exact gradient.
By using this framework, our analysis shows these algorithms to match the computational complexity of methods that have access to an unbiased estimate of the gradient.
arXiv Detail & Related papers (2021-11-29T15:10:09Z) - ES-Based Jacobian Enables Faster Bilevel Optimization [53.675623215542515]
Bilevel optimization (BO) has arisen as a powerful tool for solving many modern machine learning problems.
Existing gradient-based methods require second-order derivative approximations via Jacobian- and/or Hessian-vector computations.
We propose a novel BO algorithm, which adopts Evolution Strategies (ES) based method to approximate the response Jacobian matrix in the hypergradient of BO.
arXiv Detail & Related papers (2021-10-13T19:36:50Z) - Provably Faster Algorithms for Bilevel Optimization [54.83583213812667]
Bilevel optimization has been widely applied in many important machine learning applications.
We propose two new algorithms for bilevel optimization.
We show that both algorithms achieve a complexity of $\mathcal{O}(\epsilon^{-1.5})$, which outperforms all existing algorithms by an order of magnitude.
arXiv Detail & Related papers (2021-06-08T21:05:30Z) - Identifying Co-Adaptation of Algorithmic and Implementational
Innovations in Deep Reinforcement Learning: A Taxonomy and Case Study of
Inference-based Algorithms [15.338931971492288]
We focus on a series of inference-based actor-critic algorithms to decouple their algorithmic innovations and implementation decisions.
We identify substantial performance drops whenever implementation details are mismatched with algorithmic choices.
Results show which implementation details are co-adapted and co-evolved with algorithms.
arXiv Detail & Related papers (2021-03-31T17:55:20Z) - A Two-stage Framework and Reinforcement Learning-based Optimization
Algorithms for Complex Scheduling Problems [54.61091936472494]
We develop a two-stage framework, in which reinforcement learning (RL) and traditional operations research (OR) algorithms are combined together.
The scheduling problem is solved in two stages: a finite Markov decision process (MDP) followed by a mixed-integer programming process.
Results show that the proposed algorithms could stably and efficiently obtain satisfactory scheduling schemes for agile Earth observation satellite scheduling problems.
arXiv Detail & Related papers (2021-03-10T03:16:12Z) - A survey on dragonfly algorithm and its applications in engineering [29.190512851078218]
The dragonfly algorithm was developed in 2016. It is one of the algorithms researchers have used to optimize a wide range of applications in various areas.
This work addresses the robustness of the method in solving real-world optimization problems, as well as its deficiencies in handling complex optimization problems.
arXiv Detail & Related papers (2020-02-19T20:23:26Z)
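For the RIBBO entry above, the regret-to-go augmentation can be illustrated with a small sketch. The blurb does not specify the exact regret definition (e.g., whether queried values or best-so-far values are used, or how the optimum is normalised), so the formula below is an assumption for illustration only.

```python
import numpy as np


def regret_to_go(query_values, f_star):
    """Regret-to-go tokens for a minimisation history.

    Assumption: the instantaneous regret at step i is query_values[i] - f_star,
    and the regret-to-go token at step t is the cumulative regret over the
    remaining steps t..T-1.  RIBBO's exact definition may differ.
    """
    inst_regret = np.asarray(query_values, dtype=float) - f_star
    # Reverse cumulative sum: token[t] = sum over i >= t of inst_regret[i].
    return np.cumsum(inst_regret[::-1])[::-1]


# Toy history of an optimizer approaching the optimum f* = 0.
history = [5.0, 3.0, 1.5, 0.5, 0.1]
print(regret_to_go(history, f_star=0.0))  # [10.1  5.1  2.1  0.6  0.1]
```

Each token could then be attached to the corresponding step of an optimization history before sequence-model training, which is the kind of augmentation the RIBBO summary refers to.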