Using Fitness Dependent Optimizer for Training Multi-layer Perceptron
- URL: http://arxiv.org/abs/2201.00563v1
- Date: Mon, 3 Jan 2022 10:23:17 GMT
- Title: Using Fitness Dependent Optimizer for Training Multi-layer Perceptron
- Authors: Dosti Kh. Abbas, Tarik A. Rashid, Karmand H. Abdalla, Nebojsa Bacanin, Abeer Alsadoon
- Abstract summary: This study presents a novel training algorithm depending upon the recently proposed Fitness Dependent Optimizer (FDO).
The stability of this algorithm has been verified, and its performance demonstrated, in both the exploration and exploitation stages.
The proposed approach using FDO as a trainer can outperform the other approaches using different trainers on the dataset.
- Score: 13.280383503879158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents a novel training algorithm depending upon the recently
proposed Fitness Dependent Optimizer (FDO). The stability of this algorithm has
been verified, and its performance demonstrated, in both the exploration and exploitation
stages using standard measurements. This motivated us to gauge the performance
of the algorithm in training multilayer perceptron neural
networks (MLP). This study combines FDO with MLP (codename FDO-MLP) for
optimizing weights and biases to predict student outcomes. This approach can
improve the learning system with respect to students' educational backgrounds,
in addition to raising their achievement. The experimental results of this
approach are validated by comparison with the Back-Propagation algorithm (BP) and
some evolutionary models such as FDO with cascade MLP (FDO-CMLP), Grey Wolf
Optimizer (GWO) combined with MLP (GWO-MLP), modified GWO combined with MLP
(MGWO-MLP), GWO with cascade MLP (GWO-CMLP), and modified GWO with cascade MLP
(MGWO-CMLP). The qualitative and quantitative results show that the proposed
approach using FDO as the trainer outperforms the other approaches with
different trainers on this dataset in terms of convergence speed and local
optima avoidance. The proposed FDO-MLP approach achieves a classification rate of 0.97.
Related papers
- SA-MLP: Enhancing Point Cloud Classification with Efficient Addition and Shift Operations in MLP Architectures [46.266960248570086]
Traditional neural networks heavily rely on multiplication operations, which are computationally expensive.
We propose Add-MLP and Shift-MLP, which replace multiplications with addition and shift operations, respectively, significantly enhancing computational efficiency.
This study offers an efficient and effective solution for point cloud classification, balancing performance with computational efficiency.
arXiv Detail & Related papers (2024-09-03T15:43:44Z)
- Kolmogorov-Arnold Network for Online Reinforcement Learning [0.22615818641180724]
Kolmogorov-Arnold Networks (KANs) have shown potential as an alternative to Multi-Layer Perceptrons (MLPs) in neural networks.
KANs provide universal function approximation with fewer parameters and reduced memory usage.
arXiv Detail & Related papers (2024-08-09T03:32:37Z)
- An Effective Networks Intrusion Detection Approach Based on Hybrid Harris Hawks and Multi-Layer Perceptron [47.81867479735455]
This paper proposes an Intrusion Detection System (IDS) employing the Harris Hawks Optimization (HHO) to optimize Multilayer Perceptron learning.
HHO-MLP aims to select optimal parameters in its learning process to minimize intrusion detection errors in networks.
HHO-MLP showed superior performance, attaining top scores with an accuracy of 93.17%, a sensitivity of 95.41%, and a specificity of 95.41%.
arXiv Detail & Related papers (2024-02-21T06:25:50Z)
- Parameter and Computation Efficient Transfer Learning for Vision-Language Pre-trained Models [79.34513906324727]
In this paper, we aim at parameter and computation efficient transfer learning (PCETL) for vision-language pre-trained models.
We propose a novel dynamic architecture skipping (DAS) approach towards effective PCETL.
arXiv Detail & Related papers (2023-09-04T09:34:33Z)
- NTK-approximating MLP Fusion for Efficient Language Model Fine-tuning [40.994306592119266]
Fine-tuning a pre-trained language model (PLM) emerges as the predominant strategy in many natural language processing applications.
Some general approaches (e.g. quantization and distillation) have been widely studied to reduce the compute/memory of PLM fine-tuning.
We propose to coin a lightweight PLM through NTK-approximating MLP fusion.
arXiv Detail & Related papers (2023-07-18T03:12:51Z)
- Model-tuning Via Prompts Makes NLP Models Adversarially Robust [97.02353907677703]
We show surprising gains in adversarial robustness enjoyed by Model-tuning Via Prompts (MVP).
MVP improves performance against adversarial substitutions by an average of 8% over standard methods.
We also conduct ablations to investigate the mechanism underlying these gains.
arXiv Detail & Related papers (2023-03-13T17:41:57Z)
- Efficient Language Modeling with Sparse all-MLP [53.81435968051093]
All-MLPs can match Transformers in language modeling, but still lag behind in downstream tasks.
We propose sparse all-MLPs with mixture-of-experts (MoEs) in both the feature and input (token) dimensions.
We evaluate its zero-shot in-context learning performance on six downstream tasks, and find that it surpasses Transformer-based MoEs and dense Transformers.
arXiv Detail & Related papers (2022-03-14T04:32:19Z)
- Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs [84.3235981545673]
Token-mixing multi-layer perceptron (MLP) models have shown competitive performance in computer vision tasks.
We present Mix-Shift-MLP which makes the size of the local receptive field used for mixing increase with respect to the amount of spatial shifting.
MS-MLP achieves competitive performance in multiple vision benchmarks.
arXiv Detail & Related papers (2022-02-14T06:53:48Z)
- Synthesizing multi-layer perceptron network with ant lion, biogeography-based dragonfly algorithm evolutionary strategy invasive weed and league champion optimization hybrid algorithms in predicting heating load in residential buildings [1.370633147306388]
The significance of heating load (HL) accurate approximation is the primary motivation of this research.
The proposed models couple a multi-layer perceptron network (MLP) with ant lion optimization (ALO), among other hybrid algorithms.
Biogeography-based optimization (BBO) emerged as the most capable optimization technique, followed by ALO (OS = 27) and ES (OS = 20).
arXiv Detail & Related papers (2021-02-13T14:06:55Z)