A new Sparse Auto-encoder based Framework using Grey Wolf Optimizer for
Data Classification Problem
- URL: http://arxiv.org/abs/2201.12493v1
- Date: Sat, 29 Jan 2022 04:28:30 GMT
- Title: A new Sparse Auto-encoder based Framework using Grey Wolf Optimizer for
Data Classification Problem
- Authors: Ahmad Mozaffer Karim
- Abstract summary: Grey wolf optimization (GWO) is applied to train sparse auto-encoders.
The model is validated on several popular gene expression databases.
Results reveal that the model trained with GWO outperforms both conventional models and models trained with the most popular metaheuristic algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the most important properties of deep auto-encoders (DAEs) is their
capability to extract high-level features from raw data. Hence, autoencoders
have recently been preferred in various classification problems such as image
and voice recognition, computer security, and medical data analysis. Despite
their popularity and high performance, the training phase of autoencoders is
still a challenging task, involving the selection of the best parameters that
let the model approach optimal results. Different training approaches are
applied to train sparse autoencoders. Previous studies and preliminary
experiments reveal that those approaches may produce remarkable results on some
problems yet disappointing results on other, more complex ones. Metaheuristic
algorithms have emerged over the last two decades and are becoming an essential
part of contemporary optimization techniques. Grey wolf optimization (GWO) is
one of the most recent of these algorithms, and in this study it is applied to
train sparse auto-encoders. The model is validated on several popular gene
expression databases. Results are compared with previous state-of-the-art
methods studied on the same data sets, and with other popular metaheuristic
algorithms, namely Genetic Algorithms (GA), Particle Swarm Optimization (PSO),
and Artificial Bee Colony (ABC). Results reveal that the model trained with GWO
outperforms both conventional models and models trained with the most popular
metaheuristic algorithms.
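To make the training scheme concrete, here is a minimal, hypothetical sketch of GWO optimizing the flattened weights of a one-hidden-layer sparse autoencoder by treating each parameter vector as a wolf's position. The network size, the L1 sparsity penalty, and all GWO settings below are illustrative assumptions, not the paper's reported configuration.

```python
# Hypothetical sketch: GWO trains a sparse autoencoder by searching over its
# flattened weight vector. All sizes and settings are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def unpack(theta, n_in, n_hid):
    """Split a flat parameter vector into encoder/decoder weights and biases."""
    i = 0
    W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
    b1 = theta[i:i + n_hid]; i += n_hid
    W2 = theta[i:i + n_hid * n_in].reshape(n_hid, n_in); i += n_hid * n_in
    b2 = theta[i:i + n_in]
    return W1, b1, W2, b2

def fitness(theta, X, n_in, n_hid, lam=1e-3):
    """Reconstruction error plus an L1 sparsity penalty on hidden activations."""
    W1, b1, W2, b2 = unpack(theta, n_in, n_hid)
    H = np.tanh(X @ W1 + b1)            # encoder
    X_hat = H @ W2 + b2                 # decoder
    return np.mean((X - X_hat) ** 2) + lam * np.mean(np.abs(H))

def gwo(obj, dim, n_wolves=20, iters=200, lb=-1.0, ub=1.0):
    """Canonical GWO: positions move toward the alpha, beta, and delta wolves."""
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(iters):
        scores = np.array([obj(x) for x in X])
        leaders = X[np.argsort(scores)[:3]]  # alpha, beta, delta (best three)
        a = 2.0 * (1 - t / iters)            # exploration factor decays 2 -> 0
        for i in range(n_wolves):
            X_new = np.zeros(dim)
            for leader in leaders:
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - X[i])
                X_new += leader - A * D
            X[i] = np.clip(X_new / 3.0, lb, ub)
    scores = np.array([obj(x) for x in X])
    return X[np.argmin(scores)], scores.min()

# Toy data standing in for a gene-expression matrix (samples x features).
n_in, n_hid = 8, 3
X_data = rng.normal(size=(64, n_in))
dim = n_in * n_hid + n_hid + n_hid * n_in + n_in
best, best_fit = gwo(lambda th: fitness(th, X_data, n_in, n_hid), dim)
print(f"best fitness: {best_fit:.4f}")
```

Because GWO only needs fitness evaluations, it sidesteps gradient-based training entirely; the compared GA, PSO, and ABC baselines would plug into the same loop by swapping the position-update rule.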
Related papers
- Large Language Models as Surrogate Models in Evolutionary Algorithms: A Preliminary Study [5.6787965501364335]
Surrogate-assisted selection is a core step in evolutionary algorithms to solve expensive optimization problems.
Traditionally, this has relied on conventional machine learning methods, leveraging historical evaluations to predict the performance of new solutions.
In this work, we propose a novel surrogate model based purely on LLM inference capabilities, eliminating the need for training.
arXiv Detail & Related papers (2024-06-15T15:54:00Z)
- Discovering Preference Optimization Algorithms with and for Large Language Models [50.843710797024805]
Offline preference optimization is a key method for enhancing and controlling the quality of Large Language Model (LLM) outputs.
We perform objective discovery to automatically discover new state-of-the-art preference optimization algorithms without (expert) human intervention.
Experiments demonstrate the state-of-the-art performance of DiscoPOP, a novel algorithm that adaptively blends logistic and exponential losses (a sketch of this idea follows below).
arXiv Detail & Related papers (2024-06-12T16:58:41Z)
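The summary's "adaptively blends logistic and exponential losses" can be pictured with the hedged sketch below. The sigmoid gate, the temperature tau, and the function name are illustrative assumptions, not a transcription of the paper's discovered objective.

```python
# Hypothetical sketch: a sigmoid-gated blend of the logistic (DPO-style) and
# exponential preference losses on the margin rho = log-ratio(chosen) minus
# log-ratio(rejected). Gate, tau, and beta are assumptions for illustration.
import torch
import torch.nn.functional as F

def blended_preference_loss(rho: torch.Tensor, beta: float = 0.1,
                            tau: float = 0.05) -> torch.Tensor:
    z = beta * rho
    logistic = F.softplus(-z)        # log(1 + exp(-z)), the DPO-style loss
    exponential = torch.exp(-z)      # exponential loss on the same margin
    gate = torch.sigmoid(z / tau)    # adaptive mixing weight in (0, 1)
    return (gate * logistic + (1 - gate) * exponential).mean()

# Margins from a hypothetical batch of preference pairs.
rho = torch.tensor([1.2, -0.3, 0.5])
print(blended_preference_loss(rho))
```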
- Applications of Nature-Inspired Metaheuristic Algorithms for Tackling Optimization Problems Across Disciplines [12.664160352147293]
This paper demonstrates the usefulness of nature-inspired metaheuristic algorithms for solving a variety of challenging optimization problems in statistics.
The main goal of this paper is to show that a typical metaheuristic algorithm, such as CSO-MA, is efficient for tackling many different types of optimization problems in statistics.
arXiv Detail & Related papers (2023-08-08T16:41:33Z)
- Enhancing Machine Learning Model Performance with Hyper Parameter Optimization: A Comparative Study [0.0]
One of the most critical issues in machine learning is the selection of appropriate hyperparameters for training models.
Hyperparameter optimization (HPO) is a popular topic that artificial intelligence studies have focused on recently.
In this study, classical methods, such as grid search, random search, and Bayesian optimization, and population-based algorithms, such as genetic algorithms and particle swarm optimization, are discussed (a toy grid-versus-random comparison is sketched below).
arXiv Detail & Related papers (2023-02-14T10:12:10Z)
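Below is a toy sketch of the grid-versus-random contrast; the stand-in scoring function and search ranges are invented for illustration, since the study itself compares these methods on real models.

```python
# Toy contrast of grid search vs. random search over two hyperparameters.
# score() is a stand-in; a real study would run cross-validation per setting.
import itertools
import random

def score(lr: float, depth: int) -> float:
    """Hypothetical validation score, peaked near lr=0.01, depth=6."""
    return -(lr - 0.01) ** 2 - 0.001 * (depth - 6) ** 2

# Grid search: exhaustive over a fixed lattice (12 evaluations).
grid = itertools.product([1e-3, 1e-2, 1e-1], [2, 4, 6, 8])
best_grid = max(grid, key=lambda p: score(*p))

# Random search: same evaluation budget, but sampling continuous ranges.
random.seed(0)
samples = [(10 ** random.uniform(-3, -1), random.randint(2, 8))
           for _ in range(12)]
best_rand = max(samples, key=lambda p: score(*p))

print("grid best:", best_grid, "random best:", best_rand)
```

For the same budget, random search explores continuous ranges more finely than a fixed lattice, which is one reason it is a standard baseline in HPO studies.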
- A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show how simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z)
- Towards Automated Imbalanced Learning with Deep Hierarchical Reinforcement Learning [57.163525407022966]
Imbalanced learning is a fundamental challenge in data mining, where there is a disproportionate ratio of training samples in each class.
Over-sampling is an effective technique to tackle imbalanced learning by generating synthetic samples for the minority class (the classic SMOTE interpolation this builds on is sketched below).
We propose AutoSMOTE, an automated over-sampling algorithm that can jointly optimize different levels of decisions.
arXiv Detail & Related papers (2022-08-26T04:28:01Z)
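For context, here is a minimal sketch of the classic SMOTE interpolation that over-sampling methods like AutoSMOTE build on; AutoSMOTE's jointly optimized decision levels are not modeled here, and all parameters are illustrative.

```python
# Classic SMOTE-style over-sampling: synthesize minority samples by
# interpolating between a sample and one of its k nearest minority neighbors.
import numpy as np

def smote(X_min: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    # Pairwise Euclidean distances within the minority class.
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)                  # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]            # k nearest neighbors per sample
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))             # pick a minority sample
        j = nn[i, rng.integers(k)]               # and one of its neighbors
        gap = rng.random()                       # interpolation coefficient
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = np.random.default_rng(1).normal(size=(10, 4))
print(smote(X_minority, n_new=5).shape)          # (5, 4)
```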
- An Empirical Analysis of Recurrent Learning Algorithms In Neural Lossy Image Compression Systems [73.48927855855219]
Recent advances in deep learning have resulted in image compression algorithms that outperform JPEG and JPEG 2000 on the standard Kodak benchmark.
In this paper, we perform the first large-scale comparison of recent state-of-the-art hybrid neural compression algorithms.
arXiv Detail & Related papers (2022-01-27T19:47:51Z)
- RSO: A Novel Reinforced Swarm Optimization Algorithm for Feature Selection [0.0]
In this paper, we propose a novel feature selection algorithm named Reinforced Swarm Optimization (RSO).
This algorithm embeds the widely used Bee Swarm Optimization (BSO) algorithm along with Reinforcement Learning (RL) to maximize the reward of a superior search agent and punish the inferior ones.
The proposed method is evaluated on 25 widely known UCI datasets containing a mix of balanced and imbalanced data.
arXiv Detail & Related papers (2021-07-29T17:38:04Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Cauchy-Schwarz Regularized Autoencoder [68.80569889599434]
Variational autoencoders (VAE) are a powerful and widely-used class of generative models.
We introduce a new constrained objective based on the Cauchy-Schwarz divergence, which can be computed analytically for GMMs (the divergence is defined after this entry).
Our objective improves upon variational auto-encoding models in density estimation, unsupervised clustering, semi-supervised learning, and face analysis.
arXiv Detail & Related papers (2021-01-06T17:36:26Z)
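For reference, the Cauchy-Schwarz divergence between two densities p and q is defined as follows; for Gaussian mixtures, each of the three integrals reduces to a sum of pairwise Gaussian overlap terms, which is why the objective can be computed analytically.

```latex
D_{\mathrm{CS}}(p, q) = -\log
\frac{\int p(x)\, q(x)\, dx}
     {\sqrt{\int p(x)^{2}\, dx \,\int q(x)^{2}\, dx}}
```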