Synthesizing multi-layer perceptron network with ant lion,
biogeography-based dragonfly algorithm evolutionary strategy invasive weed
and league champion optimization hybrid algorithms in predicting heating load
in residential buildings
- URL: http://arxiv.org/abs/2102.08928v1
- Date: Sat, 13 Feb 2021 14:06:55 GMT
- Title: Synthesizing multi-layer perceptron network with ant lion,
biogeography-based dragonfly algorithm evolutionary strategy invasive weed
and league champion optimization hybrid algorithms in predicting heating load
in residential buildings
- Authors: Hossein Moayedi, Amir Mosavi
- Abstract summary: Accurate approximation of the heating load (HL) is the primary motivation of this research.
The proposed models synthesize a multi-layer perceptron network (MLP) with ant lion optimization (ALO) and five other metaheuristics.
Biogeography-based optimization (BBO) featured as the most capable optimization technique (OS = 36), followed by ALO (OS = 27) and ES (OS = 20).
- Score: 1.370633147306388
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The significance of accurate heating load (HL) approximation is the primary
motivation of this research, which seeks to distinguish the most efficient predictive model
among several neural-metaheuristic models. The proposed models are built by
synthesizing a multi-layer perceptron network (MLP) with ant lion optimization
(ALO), biogeography-based optimization (BBO), dragonfly algorithm (DA),
evolutionary strategy (ES), invasive weed optimization (IWO), and league
champion optimization (LCA) hybrid algorithms. Each ensemble is optimized in
terms of its operating population size. Accordingly, the ALO-MLP, BBO-MLP, DA-MLP,
ES-MLP, IWO-MLP, and LCA-MLP presented their best performance for population
sizes of 350, 400, 200, 500, 50, and 300, respectively. The comparison was
carried out by implementing a ranking system. Based on the obtained overall
scores (OSs), the BBO (OS = 36) featured as the most capable optimization
technique, followed by ALO (OS = 27) and ES (OS = 20). Due to the efficient
performance of these algorithms, the corresponding MLPs can be promising
substitutes for traditional methods used for HL analysis.
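
To make the shared mechanics of these hybrids concrete, the sketch below shows the common pattern: a fixed-architecture MLP whose flat weight vector is searched by a population-based metaheuristic, with the population size as a tunable setting. This is a simplified, hypothetical stand-in (toy data and a generic move operator), not the authors' implementation of ALO, BBO, DA, ES, IWO, or LCA.

```python
import numpy as np

rng = np.random.default_rng(0)
N_HIDDEN = 8  # hidden-layer width of the MLP

def mlp_predict(weights, X):
    """Unpack a flat weight vector into a one-hidden-layer MLP and predict."""
    n_in = X.shape[1]
    i = n_in * N_HIDDEN
    w1 = weights[:i].reshape(n_in, N_HIDDEN)
    b1 = weights[i:i + N_HIDDEN]
    w2 = weights[i + N_HIDDEN:i + 2 * N_HIDDEN]
    b2 = weights[-1]
    return np.tanh(X @ w1 + b1) @ w2 + b2

def rmse(weights, X, y):
    return float(np.sqrt(np.mean((mlp_predict(weights, X) - y) ** 2)))

def population_optimize(X, y, pop_size=350, n_iter=200):
    """Generic population-based search over MLP weights.

    Stands in for ALO/BBO/DA/ES/IWO/LCA: each defines its own move
    operators, but all score candidate weight vectors by prediction error.
    """
    dim = X.shape[1] * N_HIDDEN + 2 * N_HIDDEN + 1
    pop = rng.normal(scale=0.5, size=(pop_size, dim))
    fit = np.array([rmse(w, X, y) for w in pop])
    for _ in range(n_iter):
        best = pop[fit.argmin()]
        # crude move operator: drift toward the best individual plus noise
        trial = pop + 0.3 * (best - pop) + rng.normal(scale=0.05, size=pop.shape)
        trial_fit = np.array([rmse(w, X, y) for w in trial])
        better = trial_fit < fit
        pop[better], fit[better] = trial[better], trial_fit[better]
    return pop[fit.argmin()], fit.min()

# Toy stand-in for the residential-building data (8 inputs -> heating load).
X = rng.uniform(size=(200, 8))
y = X @ rng.uniform(size=8) + 0.1 * rng.normal(size=200)
best_w, best_err = population_optimize(X, y, pop_size=350)
print(f"best RMSE: {best_err:.4f}")
```

In the study, runs of this kind are repeated over several population sizes per algorithm, and the resulting errors feed the ranking system that produces the overall scores.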
Related papers
- Decoding-Time Language Model Alignment with Multiple Objectives [116.42095026960598]
Existing methods primarily focus on optimizing LMs for a single reward function, limiting their adaptability to varied objectives.
Here, we propose $\textbf{multi-objective decoding (MOD)}$, a decoding-time algorithm that outputs the next token from a linear combination of predictions.
We show why existing approaches can be sub-optimal even in natural settings and obtain optimality guarantees for our method.
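Read mechanically, "outputs the next token from a linear combination of predictions" can be sketched as below. This is an illustrative reading of the summary rather than the MOD authors' code: the Hugging Face-style `model(input_ids).logits` call, blending in log-probability space, and greedy selection are all assumptions.

```python
import torch

def mod_decode_step(models, weights, input_ids):
    """One decoding step blending several aligned LMs' next-token predictions.

    Each entry of `models` is assumed to be a causal LM aligned to one
    objective, and `weights[i]` its user-chosen preference weight (summing
    to 1). The linear blend of log-probabilities below is an illustrative
    choice, not necessarily the exact combination rule derived in the paper.
    """
    log_probs = []
    with torch.no_grad():
        for model in models:
            logits = model(input_ids).logits[:, -1, :]   # next-token logits
            log_probs.append(torch.log_softmax(logits, dim=-1))
    blended = sum(w * lp for w, lp in zip(weights, log_probs))
    return blended.argmax(dim=-1)                        # greedy next token
```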
arXiv Detail & Related papers (2024-06-27T02:46:30Z)
- The Firefighter Algorithm: A Hybrid Metaheuristic for Optimization Problems [3.2432648012273346]
The Firefighter Optimization (FFO) algorithm is a new hybrid metaheuristic for optimization problems.
To evaluate the performance of FFO, extensive experiments were conducted, wherein the FFO was examined against 13 commonly used optimization algorithms.
The results demonstrate that FFO achieves comparative performance and, in some scenarios, outperforms commonly adopted optimization algorithms in terms of the obtained fitness, time taken for execution, and search space covered per unit of time.
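The head-to-head protocol described above (every optimizer run on the same objective and scored on best fitness and execution time) can be sketched minimally as below; the sphere objective, the random-search baseline, and the two reported metrics are illustrative stand-ins, not the FFO benchmark suite.

```python
import time
import numpy as np

rng = np.random.default_rng(5)

def sphere(x):
    """Classic benchmark objective: minimum 0 at the origin."""
    return float(np.sum(x ** 2))

def random_search(obj, dim, evals):
    """Baseline optimizer: best value over uniformly random samples."""
    pts = rng.uniform(-5, 5, size=(evals, dim))
    return min(obj(p) for p in pts)

def compare(optimizers, obj, dim=10, evals=2000):
    """Report best fitness and wall-clock time for each optimizer."""
    for name, opt in optimizers.items():
        start = time.perf_counter()
        best = opt(obj, dim, evals)
        print(f"{name:>15}: best={best:.4f}  time={time.perf_counter() - start:.3f}s")

compare({"random search": random_search}, sphere)
```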
arXiv Detail & Related papers (2024-06-01T18:38:59Z)
- Orthogonally Initiated Particle Swarm Optimization with Advanced Mutation for Real-Parameter Optimization [0.04096453902709291]
This article introduces an enhanced particle swarm optimization (PSO), termed Orthogonal PSO with Mutation (OPSO-m).
It proposes an array-based learning approach to cultivate an improved initial swarm for PSO, significantly boosting the adaptability of swarm-based optimization algorithms.
The article further presents archive-based self-adaptive learning strategies, dividing the population into regular and elite subgroups.
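A rough illustration of what "cultivating an improved initial swarm" means in practice: the sketch below seeds particles with a stratified (Latin-hypercube-style) sample instead of plain uniform noise. The paper's orthogonal-array construction and its elite/regular subgrouping are more elaborate, so treat this as an assumed stand-in for the idea of spreading the initial swarm evenly over the search space.

```python
import numpy as np

rng = np.random.default_rng(1)

def stratified_init(pop_size, dim, lo=-5.0, hi=5.0):
    """Latin-hypercube-style initial swarm: one particle per stratum per dimension."""
    ranks = rng.permuted(np.tile(np.arange(pop_size), (dim, 1)), axis=1).T
    u = (ranks + rng.uniform(size=(pop_size, dim))) / pop_size  # values in [0, 1)
    return lo + u * (hi - lo)

swarm = stratified_init(pop_size=40, dim=10)
print(swarm.shape)  # (40, 10): each dimension sampled once per stratum
```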
arXiv Detail & Related papers (2024-05-21T07:16:20Z)
- An Effective Networks Intrusion Detection Approach Based on Hybrid Harris Hawks and Multi-Layer Perceptron [47.81867479735455]
This paper proposes an Intrusion Detection System (IDS) employing the Harris Hawks Optimization (HHO) to optimize Multilayer Perceptron learning.
HHO-MLP aims to select optimal parameters in its learning process to minimize intrusion detection errors in networks.
HHO-MLP showed superior performance by attaining top scores with accuracy rate of 93.17%, sensitivity level of 95.41%, and specificity percentage of 95.41%.
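The "select optimal parameters to minimize intrusion detection errors" loop can be illustrated by scoring candidate MLP settings with cross-validated error, as in the sketch below. The random candidate generation is only a placeholder for the Harris Hawks search, and the toy data stands in for a real intrusion-detection dataset.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)

def fitness(params, X, y):
    """Detection error of an MLP configured by a candidate (hidden units, lr)."""
    hidden, lr = int(params[0]), float(params[1])
    clf = MLPClassifier(hidden_layer_sizes=(hidden,), learning_rate_init=lr,
                        max_iter=300, random_state=0)
    return 1.0 - cross_val_score(clf, X, y, cv=3).mean()

# Toy binary "intrusion" data; a real IDS would use network-flow features.
X = rng.normal(size=(300, 10))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Placeholder for the HHO search: a handful of random candidates.
candidates = [(rng.integers(5, 50), 10 ** rng.uniform(-4, -1)) for _ in range(5)]
best = min(candidates, key=lambda p: fitness(p, X, y))
print("best (hidden units, learning rate):", best)
```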
arXiv Detail & Related papers (2024-02-21T06:25:50Z)
- Poisson Process for Bayesian Optimization [126.51200593377739]
We propose a ranking-based surrogate model based on the Poisson process and introduce an efficient BO framework, namely Poisson Process Bayesian Optimization (PoPBO).
Compared to the classic GP-BO method, our PoPBO has lower costs and better robustness to noise, which is verified by abundant experiments.
arXiv Detail & Related papers (2024-02-05T02:54:50Z)
- Sample-Efficient Multi-Agent RL: An Optimization Perspective [103.35353196535544]
We study multi-agent reinforcement learning (MARL) for general-sum Markov Games (MGs) under general function approximation.
We introduce a novel complexity measure called the Multi-Agent Decoupling Coefficient (MADC) for general-sum MGs.
We show that our algorithm provides comparable sublinear regret to the existing works.
arXiv Detail & Related papers (2023-10-10T01:39:04Z)
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning tasks, AUPRC maximization, and MAML.
This paper proposes conditional stochastic optimization algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Multi-objective hyperparameter optimization with performance uncertainty [62.997667081978825]
This paper presents results on multi-objective hyperparameter optimization with uncertainty on the evaluation of Machine Learning algorithms.
We combine the sampling strategy of Tree-structured Parzen Estimators (TPE) with the metamodel obtained after training a Gaussian Process Regression (GPR) with heterogeneous noise.
Experimental results on three analytical test functions and three ML problems show the improvement over multi-objective TPE and GPR.
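One half of that combination, a GPR metamodel with heterogeneous (per-observation) noise used to rank candidate hyperparameters, can be sketched as below; the toy objective, the per-point variances from repeated evaluations, and the omission of the TPE sampling step are assumptions made for brevity.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(3)

# Hyperparameter configs evaluated several times each -> mean loss + variance.
configs = rng.uniform(size=(20, 2))                      # e.g. (lr, reg) in [0, 1]^2
repeats = np.stack([np.sin(configs.sum(axis=1)) + 0.1 * rng.normal(size=20)
                    for _ in range(5)], axis=1)          # toy noisy objective
mean_loss, per_point_var = repeats.mean(axis=1), repeats.var(axis=1)

# Heterogeneous noise enters through `alpha` (one value per training point).
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=0.3),
                               alpha=per_point_var, normalize_y=True)
gpr.fit(configs, mean_loss)

# Score fresh candidates (in the full method these would come from TPE sampling).
candidates = rng.uniform(size=(100, 2))
pred_mean, pred_std = gpr.predict(candidates, return_std=True)
print("most promising config:", candidates[pred_mean.argmin()])
```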
arXiv Detail & Related papers (2022-09-09T14:58:43Z)
- Using Fitness Dependent Optimizer for Training Multi-layer Perceptron [13.280383503879158]
This study presents a novel training algorithm based on the recently proposed Fitness Dependent Optimizer (FDO).
The stability of this algorithm has been verified and performance-proofed in both the exploration and exploitation stages.
The proposed approach using FDO as a trainer can outperform the other approaches using different trainers on the dataset.
arXiv Detail & Related papers (2022-01-03T10:23:17Z)
- RSO: A Novel Reinforced Swarm Optimization Algorithm for Feature Selection [0.0]
In this paper, we propose a novel feature selection algorithm named Reinforced Swarm Optimization (RSO).
This algorithm embeds the widely used Bee Swarm Optimization (BSO) algorithm along with Reinforcement Learning (RL) to maximize the reward of a superior search agent and punish the inferior ones.
The proposed method is evaluated on 25 widely known UCI datasets containing a perfect blend of balanced and imbalanced data.
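A heavily simplified reading of the reward-and-punish mechanism: per-feature selection probabilities are nudged up for the features used by the best agent and down for those of the worst, with agents scored by a cross-validated classifier. The BSO swarm dynamics and the actual RL formulation are omitted, so this is strictly an assumed sketch on toy data rather than the RSO algorithm itself.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

def score(mask, X, y):
    """Cross-validated accuracy of a classifier restricted to selected features."""
    if not mask.any():
        return 0.0
    return cross_val_score(DecisionTreeClassifier(random_state=0),
                           X[:, mask], y, cv=3).mean()

# Toy data; the paper instead evaluates on 25 UCI datasets.
X = rng.normal(size=(200, 15))
y = (X[:, 0] - X[:, 3] > 0).astype(int)

probs = np.full(X.shape[1], 0.5)                  # per-feature selection probability
for _ in range(30):
    masks = rng.uniform(size=(10, X.shape[1])) < probs       # 10 search "agents"
    fitness = np.array([score(m, X, y) for m in masks])
    best, worst = masks[fitness.argmax()], masks[fitness.argmin()]
    probs = np.clip(probs + 0.05 * best - 0.05 * worst, 0.05, 0.95)  # reward / punish
print("selected features:", np.where(probs > 0.5)[0])
```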
arXiv Detail & Related papers (2021-07-29T17:38:04Z)