Chaotic Fitness Dependent Optimizer for Planning and Engineering Design
- URL: http://arxiv.org/abs/2110.08067v1
- Date: Sat, 21 Aug 2021 12:14:02 GMT
- Title: Chaotic Fitness Dependent Optimizer for Planning and Engineering Design
- Authors: Hardi M. Mohammed, Tarik A. Rashid
- Abstract summary: Fitness Dependent Optimizer (FDO) is a metaheuristic algorithm that mimics the reproduction behavior of the bee swarm in finding better hives.
This paper aims at improving the performance of FDO; chaos theory is used inside FDO to propose Chaotic FDO (CFDO).
Ten chaotic maps are used in CFDO to determine which of them perform well at avoiding local optima and finding the global optimum.
- Score: 1.1802674324027231
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Fitness Dependent Optimizer (FDO) is a recent metaheuristic algorithm that
mimics the reproduction behavior of the bee swarm in finding better hives. This
algorithm is similar to Particle Swarm Optimization (PSO) but it works
differently. The algorithm is very powerful and has better results compared to
other common metaheuristic algorithms. This paper aims at improving the
performance of FDO; to this end, chaos theory is incorporated into FDO to
propose Chaotic FDO (CFDO). Ten chaotic maps are used in CFDO to determine
which of them perform well at avoiding local optima and finding the global
optimum. A new technique is used to keep the population within specified
bounds, since the original FDO has no mechanism for amending the population.
The proposed CFDO is evaluated using 10 benchmark functions from CEC2019. The
results show that the ability of CFDO is improved: the Singer map has the
greatest impact on improving CFDO, while the Tent map performs worst. Results
show that CFDO is superior to GA, FDO, and CSO. Both CEC2013 and CEC2005 are
also used to evaluate CFDO. Finally, the proposed CFDO
is applied to classical engineering problems, such as pressure vessel design
and the result shows that CFDO can handle the problem better than WOA, GWO,
FDO, and CGWO. Besides, CFDO is applied to solve the task assignment problem
and then compared to the original FDO. The results show that CFDO has a better
capability to solve the problem.
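The core idea of CFDO, replacing the uniform random draw that drives a stochastic search step with a deterministic chaotic sequence, can be sketched as follows. This is an illustrative stand-in, not the paper's method: the logistic map stands in for the ten maps studied (Singer, Tent, etc.), and the sphere function and greedy update below are assumptions, not FDO's actual pace/fitness-weight equations.

```python
# Illustrative sketch: driving a stochastic search with a chaotic map
# instead of uniform random numbers (the core idea behind CFDO).
# The logistic map and the sphere objective are stand-ins; the real FDO
# pace/fitness-weight update rules are more involved.

def logistic_map(x):
    """Logistic map x_{n+1} = 4 x_n (1 - x_n), chaotic on (0, 1)."""
    return 4.0 * x * (1.0 - x)

def chaotic_search(objective, dim=2, iters=200, seed=0.7):
    best = [0.9] * dim          # arbitrary fixed start point
    best_f = objective(best)
    x = seed                    # chaotic state, stays in [0, 1]
    for _ in range(iters):
        x = logistic_map(x)
        step = 2.0 * x - 1.0    # shift chaotic value into [-1, 1]
        cand = [b + 0.1 * step for b in best]
        f = objective(cand)
        if f < best_f:          # greedy acceptance
            best, best_f = cand, f
    return best, best_f

sphere = lambda v: sum(c * c for c in v)
best, best_f = chaotic_search(sphere)
print(best_f)
```

Because the chaotic sequence is deterministic but non-repeating, runs are reproducible while still exploring the search space; this is the property the paper exploits when comparing the ten maps.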
Related papers
- Modified-Improved Fitness Dependent Optimizer for Complex and Engineering Problems [5.078139820108554]
Fitness Dependent Optimizer (FDO) is considered one of the novel swarm intelligence algorithms.
This study proposes a modified version of IFDO, called M-IFDO.
M-IFDO is compared against five state-of-the-art algorithms.
arXiv Detail & Related papers (2024-06-27T07:47:23Z) - Functional Graphical Models: Structure Enables Offline Data-Driven Optimization [111.28605744661638]
We show how structure can enable sample-efficient data-driven optimization.
We also present a data-driven optimization algorithm that infers the FGM structure itself.
arXiv Detail & Related papers (2024-01-08T22:33:14Z) - Multi objective Fitness Dependent Optimizer Algorithm [19.535715565093764]
This paper proposes a multi-objective variant of the recently introduced Fitness Dependent Optimizer (FDO).
The algorithm is called the Multi-objective Fitness Dependent Optimizer (MOFDO) and is equipped with all five types of knowledge (situational, normative, topographical, domain, and historical knowledge) as in FDO.
It is observed that the proposed algorithm successfully provides a wide variety of well-distributed feasible solutions, enabling decision-makers to choose among more applicable and practical options.
arXiv Detail & Related papers (2023-01-26T06:33:53Z) - Bayesian Optimization for Macro Placement [48.55456716632735]
We develop a novel approach to macro placement using Bayesian optimization (BO) over sequence pairs.
BO is a machine learning technique that uses a probabilistic surrogate model and an acquisition function.
We demonstrate our algorithm on the fixed-outline macro placement problem with the half-perimeter wire length objective.
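The surrogate-plus-acquisition loop this blurb describes can be sketched generically. The sketch below is a minimal GP/expected-improvement loop on a made-up 1-D objective, not the paper's sequence-pair macro-placement formulation; the objective `f`, the RBF length-scale, and the candidate grid are all assumptions.

```python
# Minimal sketch of Bayesian optimization: a Gaussian-process surrogate
# plus an expected-improvement acquisition function, on a toy 1-D problem.
import math

def rbf(a, b, ls=0.2):
    return math.exp(-0.5 * ((a - b) / ls) ** 2)

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, n):
            fac = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= fac * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior(xs, ys, xq):
    """GP posterior mean and std at query xq (zero-mean prior, tiny jitter)."""
    n = len(xs)
    K = [[rbf(xs[i], xs[j]) + (1e-6 if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    alpha = solve(K, ys)
    k = [rbf(x, xq) for x in xs]
    mu = sum(ki * ai for ki, ai in zip(k, alpha))
    v = solve(K, k)
    var = max(rbf(xq, xq) - sum(ki * vi for ki, vi in zip(k, v)), 1e-12)
    return mu, math.sqrt(var)

def expected_improvement(mu, sd, best):
    z = (best - mu) / sd
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2)))
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2 * math.pi)
    return (best - mu) * cdf + sd * pdf

f = lambda x: (x - 0.65) ** 2           # toy objective (assumed, not from paper)
xs = [0.0, 0.5, 1.0]                    # initial design points
ys = [f(x) for x in xs]
grid = [i / 100 for i in range(101)]
for _ in range(8):                      # BO loop: fit, acquire, evaluate
    best = min(ys)
    cand = [x for x in grid if x not in xs]
    nxt = max(cand, key=lambda x: expected_improvement(*gp_posterior(xs, ys, x), best))
    xs.append(nxt)
    ys.append(f(nxt))
print(min(ys))
```

The acquisition step trades off the surrogate's predicted mean against its uncertainty, which is what lets BO spend few expensive evaluations; the paper applies this loop over a combinatorial sequence-pair encoding rather than a continuous interval.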
arXiv Detail & Related papers (2022-07-18T06:17:06Z) - TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent Kernels [141.29156234353133]
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions.
We show this disparity can largely be attributed to challenges presented by nonconvexity.
We propose a Train-Convexify neural network (TCT) procedure to sidestep this issue.
arXiv Detail & Related papers (2022-07-13T16:58:22Z) - Fitness Dependent Optimizer for IoT Healthcare using Adapted Parameters: A Case Study Implementation [0.629786844297945]
This chapter discusses a case study on the Fitness Dependent Optimizer (FDO) and adapting its parameters to Internet of Things (IoT) healthcare.
Other algorithms, such as the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), are evaluated and compared to FDO in the original work.
The target of this chapter's enhancement is to adapt the IoT healthcare framework based on FDO to spawn effective IoT healthcare applications.
arXiv Detail & Related papers (2022-05-18T16:18:57Z) - Large-scale Optimization of Partial AUC in a Range of False Positive Rates [51.12047280149546]
The area under the ROC curve (AUC) is one of the most widely used performance measures for classification models in machine learning.
We develop an efficient approximated gradient descent method based on recent practical envelope smoothing technique.
Our proposed algorithm can also be used to minimize the sum of some ranked range loss, which also lacks efficient solvers.
arXiv Detail & Related papers (2022-03-03T03:46:18Z) - Bayesian Algorithm Execution: Estimating Computable Properties of Black-box Functions Using Mutual Information [78.78486761923855]
In many real world problems, we want to infer some property of an expensive black-box function f, given a budget of T function evaluations.
We present a procedure, InfoBAX, that sequentially chooses queries that maximize mutual information with respect to the algorithm's output.
On these problems, InfoBAX uses up to 500 times fewer queries to f than required by the original algorithm.
arXiv Detail & Related papers (2021-04-19T17:22:11Z) - Revisiting Bayesian Optimization in the light of the COCO benchmark [1.4467794332678539]
This article reports a large investigation of how common and less common design choices affect the performance of (Gaussian process based) BO.
The code developed for this study makes the new version (v2.1.1) of the R package DiceOptim available on CRAN.
arXiv Detail & Related papers (2021-03-30T19:45:18Z) - Understanding and Resolving Performance Degradation in Graph Convolutional Networks [105.14867349802898]
Graph Convolutional Network (GCN) stacks several layers and in each layer performs a PROPagation operation (PROP) and a TRANsformation operation (TRAN) for learning node representations over graph-structured data.
GCNs tend to suffer performance drop when the model gets deep.
We study performance degradation of GCNs by experimentally examining how stacking only TRANs or PROPs works.
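The PROP/TRAN decomposition this blurb refers to can be sketched on a toy graph. The 3-node graph, feature values, and weight matrix below are made up for illustration; real GCNs learn `W` and typically use a symmetric-normalized adjacency with self-loops rather than the simple row normalization shown here.

```python
# Toy sketch of one GCN layer split into its two sub-operations:
# PROP (neighborhood aggregation via a normalized adjacency) and
# TRAN (a linear map plus nonlinearity). All numbers are made up.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# 3-node path graph 0-1-2: adjacency with self-loops, row-normalized.
A_hat = [
    [0.5, 0.5, 0.0],
    [1 / 3, 1 / 3, 1 / 3],
    [0.0, 0.5, 0.5],
]
H = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]   # node features (3 nodes, 2 dims)
W = [[0.5, -1.0], [1.0, 0.5]]              # "learned" weights (fixed here)

def prop(H):                # PROP: average features over neighbors
    return matmul(A_hat, H)

def tran(H):                # TRAN: linear transform + ReLU
    return [[max(0.0, v) for v in row] for row in matmul(H, W)]

H1 = tran(prop(H))          # one full GCN layer = TRAN after PROP
print(H1)
```

Separating the two operations like this is what lets the paper stack only PROPs or only TRANs and measure which one drives the depth-related performance drop.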
arXiv Detail & Related papers (2020-06-12T12:12:12Z) - Improved Fitness-Dependent Optimizer Algorithm [0.9990687944474739]
The Fitness Dependent Optimizer (FDO) algorithm was recently introduced in 2019.
An improved FDO algorithm is presented in this work.
To prove the practicability of the IFDO, it is used in real-world applications.
arXiv Detail & Related papers (2020-01-16T21:50:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.