Hybrid Genetic Algorithm and Hill Climbing Optimization for the Neural Network
- URL: http://arxiv.org/abs/2308.13099v1
- Date: Thu, 24 Aug 2023 22:03:18 GMT
- Title: Hybrid Genetic Algorithm and Hill Climbing Optimization for the Neural Network
- Authors: Krutika Sarode, Shashidhar Reddy Javaji
- Abstract summary: We propose a hybrid model combining a genetic algorithm and a hill-climbing algorithm for optimizing Convolutional Neural Networks (CNNs) on the CIFAR-100 dataset.
The proposed hybrid model achieves better accuracy with fewer generations compared to the standard algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose a hybrid model combining a genetic algorithm and a hill-climbing algorithm for optimizing Convolutional Neural Networks (CNNs) on the CIFAR-100 dataset. The proposed model uses a population of chromosomes that
represent the hyperparameters of the CNN model. The genetic algorithm is used
for selecting and breeding the fittest chromosomes to generate new offspring.
The hill climbing algorithm is then applied to the offspring to further
optimize their hyperparameters. The mutation operation is introduced to
diversify the population and to prevent the algorithm from getting stuck in
local optima. The genetic algorithm provides global search and exploration of the search space, while hill climbing locally refines promising solutions. The objective function is the accuracy of the trained
neural network on the CIFAR-100 test set. The performance of the hybrid model
is evaluated by comparing it with the standard genetic algorithm and
hill-climbing algorithm. The experimental results demonstrate that the proposed
hybrid model achieves better accuracy with fewer generations compared to the
standard algorithms. Therefore, the proposed hybrid model can be a promising
approach for optimizing CNN models on large datasets.
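As a rough illustration of the loop described above, here is a minimal Python sketch of the hybrid search. The search space, operator details, and the `evaluate` stub (which in the paper would train a CNN on CIFAR-100 and return test accuracy) are illustrative assumptions, not the authors' exact implementation.

```python
import random

# Illustrative hyperparameter search space (assumed; the paper's exact
# chromosome encoding is not specified in the abstract).
SPACE = {
    "lr": [1e-4, 3e-4, 1e-3, 3e-3, 1e-2],
    "batch_size": [32, 64, 128, 256],
    "n_filters": [16, 32, 64, 128],
    "dropout": [0.0, 0.25, 0.5],
}

def evaluate(chromosome):
    """Stub fitness: in the paper this would train a CNN on CIFAR-100
    and return test-set accuracy. Replaced here with a placeholder."""
    return random.random()  # replace with real training/evaluation

def random_chromosome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def crossover(a, b):
    # Uniform crossover: each gene is inherited from either parent.
    return {k: random.choice([a[k], b[k]]) for k in SPACE}

def mutate(c, rate=0.2):
    # Resample each gene with probability `rate` to keep diversity.
    return {k: random.choice(SPACE[k]) if random.random() < rate else v
            for k, v in c.items()}

def hill_climb(c, fitness, steps=3):
    # Local search: try single-gene neighbours, keep improvements.
    best, best_fit = c, fitness
    for _ in range(steps):
        k = random.choice(list(SPACE))
        neighbour = dict(best, **{k: random.choice(SPACE[k])})
        f = evaluate(neighbour)
        if f > best_fit:
            best, best_fit = neighbour, f
    return best, best_fit

def hybrid_ga_hc(pop_size=10, generations=5):
    pop = [(c, evaluate(c)) for c in (random_chromosome() for _ in range(pop_size))]
    for _ in range(generations):
        pop.sort(key=lambda p: p[1], reverse=True)
        parents = pop[: pop_size // 2]  # selection: keep the fittest
        children = []
        while len(children) < pop_size - len(parents):
            (a, _), (b, _) = random.sample(parents, 2)
            child = mutate(crossover(a, b))  # breeding + mutation
            children.append(hill_climb(child, evaluate(child)))  # local refinement
        pop = parents + children
    return max(pop, key=lambda p: p[1])

best, best_acc = hybrid_ga_hc()
```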
Related papers
- Randomized Geometric Algebra Methods for Convex Neural Networks [45.318490912354825]
We introduce randomized algorithms to Clifford's Geometric Algebra, generalizing randomized linear algebra to hypercomplex vector spaces.
This novel approach has many implications in machine learning, including training neural networks to global optimality via convex optimization.
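For orientation only: the paper generalizes classical randomized linear algebra to hypercomplex (geometric-algebra) spaces. Below is the standard real-valued randomized range finder, the kind of primitive being generalized; it is background, not the paper's method.

```python
import numpy as np

def randomized_range_finder(A, k, oversample=10, seed=0):
    """Classic randomized linear algebra primitive: approximate the
    top-k range of A using a random Gaussian sketch."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    omega = rng.standard_normal((n, k + oversample))  # random test matrix
    Q, _ = np.linalg.qr(A @ omega)  # orthonormal basis of the sketch
    return Q

A = np.random.default_rng(1).standard_normal((500, 200))
Q = randomized_range_finder(A, k=20)
B = Q.T @ A  # much smaller projected problem
print(np.linalg.norm(A - Q @ B) / np.linalg.norm(A))  # relative error
```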
arXiv Detail & Related papers (2024-06-04T22:22:39Z)
- An Efficient High-Dimensional Gene Selection Approach based on Binary Horse Herd Optimization Algorithm for Biological Data Classification [1.1510009152620668]
The Horse Herd Optimization Algorithm (HOA) is a new meta-heuristic algorithm based on the behaviors of horses at different ages.
This paper proposes a binary version of the HOA in order to solve discrete problems and select prominent feature subsets.
The proposed hybrid method (MRMR-BHOA) demonstrates superior performance in terms of accuracy and minimum selected features.
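A hedged sketch of a binary wrapper feature selector in the same spirit: a population of bit masks moves toward the current best mask, with a wrapper fitness balancing accuracy against subset size. The update rule is a generic binary transfer-function move, not BHOA's exact equations, and the MRMR pre-filtering stage is omitted.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, n_features=30, n_informative=8,
                           random_state=0)

def fitness(mask):
    """Wrapper fitness: accuracy of a KNN classifier on the selected
    features, minus a small penalty for selecting many features."""
    if mask.sum() == 0:
        return 0.0
    acc = cross_val_score(KNeighborsClassifier(), X[:, mask.astype(bool)], y,
                          cv=3).mean()
    return acc - 0.01 * mask.mean()

rng = np.random.default_rng(0)
pop = rng.integers(0, 2, size=(12, X.shape[1]))  # binary feature masks
for _ in range(10):
    scores = np.array([fitness(m) for m in pop])
    leader = pop[scores.argmax()]  # best "horse" so far
    # Move each candidate toward the leader by stochastically adopting
    # its bits (a generic binary update, not BHOA's exact rule).
    prob = 0.5 * (pop != leader) * rng.random(pop.shape)
    pop = np.where(rng.random(pop.shape) < prob, leader, pop)
    pop[rng.integers(len(pop))] = rng.integers(0, 2, X.shape[1])  # restart one
best = pop[np.array([fitness(m) for m in pop]).argmax()]
```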
arXiv Detail & Related papers (2023-08-18T19:40:59Z)
- Genetically Modified Wolf Optimization with Stochastic Gradient Descent for Optimising Deep Neural Networks [0.0]
This research aims to analyze an alternative approach to optimizing neural network (NN) weights, with the use of population-based metaheuristic algorithms.
A hybrid of the Grey Wolf Optimizer (GWO) and Genetic Algorithms (GA) is explored, in conjunction with Stochastic Gradient Descent (SGD).
This algorithm combines exploitation and exploration while also tackling the issue of high dimensionality.
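A toy sketch of the exploration/exploitation split: GWO-style moves toward the best candidates, interleaved with a few SGD steps per candidate. The logistic-regression "network" and all constants are stand-ins, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy "network": logistic regression weights standing in for NN weights.
X = rng.standard_normal((200, 10))
y = (X @ rng.standard_normal(10) > 0).astype(float)

def loss(w):
    p = 1 / (1 + np.exp(-(X @ w)))
    return -np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9))

def grad(w):
    p = 1 / (1 + np.exp(-(X @ w)))
    return X.T @ (p - y) / len(y)

pop = rng.standard_normal((8, 10))
for t in range(30):
    # Exploration: GWO-style move toward the three best candidates.
    order = np.argsort([loss(w) for w in pop])
    alpha, beta, delta = pop[order[:3]]
    a = 2 * (1 - t / 30)  # exploration weight decays over time
    for i in range(len(pop)):
        noise = a * rng.uniform(-1, 1, 10)
        pop[i] = (alpha + beta + delta) / 3 + noise * (pop[i] - alpha)
    # Exploitation: a few SGD steps refine each candidate locally.
    for i in range(len(pop)):
        for _ in range(3):
            pop[i] -= 0.1 * grad(pop[i])
best = pop[np.argmin([loss(w) for w in pop])]
```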
arXiv Detail & Related papers (2023-01-21T13:22:09Z)
- Massively Parallel Genetic Optimization through Asynchronous Propagation of Populations [50.591267188664666]
Propulate is an evolutionary optimization algorithm and software package for global optimization.
We provide an MPI-based implementation of our algorithm, which features variants of selection, mutation, crossover, and migration.
We find that Propulate is up to three orders of magnitude faster without sacrificing solution accuracy.
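The key idea, asynchronous propagation, can be sketched without MPI: as soon as any evaluation finishes, a new candidate is bred from the current best pool instead of waiting for a generation barrier. The sketch below uses a thread pool and a toy objective; it is not Propulate's actual API.

```python
import random
from concurrent.futures import ThreadPoolExecutor, as_completed

def evaluate(x):
    # Stub objective; in practice this would be an expensive evaluation.
    return -(x - 3.14) ** 2

def breed(pool):
    # Crossover of two good candidates plus Gaussian mutation.
    a, b = random.sample(pool, 2)
    return (a + b) / 2 + random.gauss(0, 0.5)

pool, results = [random.uniform(-10, 10) for _ in range(8)], []
with ThreadPoolExecutor(max_workers=4) as ex:
    futures = {ex.submit(evaluate, x): x for x in pool}
    while len(results) < 40:
        for fut in as_completed(list(futures)):
            x = futures.pop(fut)
            results.append((fut.result(), x))
            # Asynchronous propagation: breed from the current best
            # individuals immediately, with no generation barrier.
            best = [xi for _, xi in sorted(results, reverse=True)[:4]]
            child = breed(best) if len(best) >= 2 else random.uniform(-10, 10)
            futures[ex.submit(evaluate, child)] = child
            break  # handle one completion at a time
print(max(results))
```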
arXiv Detail & Related papers (2023-01-20T18:17:34Z)
- Tree ensemble kernels for Bayesian optimization with known constraints over mixed-feature spaces [54.58348769621782]
Tree ensembles can be well-suited for black-box optimization tasks such as algorithm tuning and neural architecture search.
Two well-known challenges in using tree ensembles for black-box optimization are (i) effectively quantifying model uncertainty for exploration and (ii) optimizing over the piece-wise constant acquisition function.
Our framework performs as well as state-of-the-art methods for unconstrained black-box optimization over continuous/discrete features and outperforms competing methods for problems combining mixed-variable feature spaces and known input constraints.
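One common way to expose tree-ensemble uncertainty is disagreement across trees, and the piece-wise constant acquisition can be handled crudely by maximizing over a sampled candidate pool. The sketch below does exactly that; the paper's framework uses tree-ensemble kernels and handles known constraints, both omitted here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
f = lambda x: np.sin(3 * x[:, 0]) + 0.1 * x[:, 1]  # black-box stand-in

X = rng.uniform(-2, 2, (10, 2))  # initial design
y = f(X)
for _ in range(20):
    forest = RandomForestRegressor(n_estimators=50).fit(X, y)
    cand = rng.uniform(-2, 2, (500, 2))  # random candidate pool
    # Uncertainty from disagreement across the individual trees.
    per_tree = np.stack([t.predict(cand) for t in forest.estimators_])
    mu, sigma = per_tree.mean(0), per_tree.std(0)
    ucb = mu + 2.0 * sigma  # upper-confidence-bound acquisition
    x_next = cand[ucb.argmax()]
    X = np.vstack([X, x_next])
    y = np.append(y, f(x_next[None]))
print(X[y.argmax()], y.max())
```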
arXiv Detail & Related papers (2022-07-02T16:59:37Z)
- Local policy search with Bayesian optimization [73.0364959221845]
Reinforcement learning aims to find an optimal policy by interaction with an environment.
Policy gradients for local search are often obtained from random perturbations.
We develop an algorithm utilizing a probabilistic model of the objective function and its gradient.
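A hand-rolled sketch of the core idea: fit a probabilistic model (a GP with an RBF kernel) to returns of locally perturbed policies, then ascend the analytic gradient of its posterior mean. The toy return function and all constants are assumptions; the paper's algorithm additionally models the gradient's uncertainty and selects queries actively.

```python
import numpy as np

rng = np.random.default_rng(0)

def J(theta):
    # Stub return of a policy rollout; in RL this is an episode return.
    return -np.sum((theta - 1.0) ** 2) + 0.1 * rng.standard_normal()

def gp_gradient(theta, X, y, length=0.5, noise=1e-2):
    """Gradient of the GP posterior mean at `theta` (RBF kernel)."""
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * length**2)) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)
    k = np.exp(-((X - theta) ** 2).sum(-1) / (2 * length**2))
    # d/dtheta k(theta, x_i) = -(theta - x_i) / length^2 * k_i
    return (-(theta - X) / length**2 * k[:, None]).T @ alpha

theta = np.zeros(3)
for _ in range(30):
    # Local data: evaluate random perturbations around the current policy.
    X = theta + 0.3 * rng.standard_normal((15, 3))
    y = np.array([J(x) for x in X])
    theta = theta + 0.2 * gp_gradient(theta, X, y)  # ascend the GP mean
print(theta, J(theta))
```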
arXiv Detail & Related papers (2021-06-22T16:07:02Z)
- Genetic-algorithm-optimized neural networks for gravitational wave classification [0.0]
We propose a new method for hyperparameter optimization based on genetic algorithms (GAs).
We show that the GA can discover high-quality architectures when the initial hyperparameter seed values are far from a good solution.
Using genetic algorithm optimization to refine an existing network should be especially useful if the problem context changes.
arXiv Detail & Related papers (2020-10-09T03:14:20Z)
- Genetic optimization algorithms applied toward mission computability models [0.3655021726150368]
Genetic algorithms are computationally inexpensive to run.
We apply our genetic optimization algorithms to a mission-critical, constraint-aware problem.
arXiv Detail & Related papers (2020-05-27T00:45:24Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network as the predictive model.
Our algorithm provably requires far fewer communication rounds than naive parallel training.
Experiments on several datasets demonstrate the effectiveness of the method and confirm the theory.
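The generic communication-reduction pattern behind such results is local updates with periodic parameter averaging. The sketch below shows that pattern on a toy logistic loss; the paper's actual algorithm is specific to AUC maximization via a min-max formulation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((400, 5))
y = np.sign(X @ rng.standard_normal(5))

def grad(w, idx):
    # Logistic-loss gradient on a mini-batch (stand-in for the AUC objective).
    m = y[idx] * (X[idx] @ w)
    return -(y[idx] * (1 / (1 + np.exp(m)))) @ X[idx] / len(idx)

n_workers, local_steps = 4, 10
w = np.zeros(5)
shards = np.array_split(rng.permutation(len(X)), n_workers)
for _ in range(20):                   # communication rounds
    local = []
    for s in shards:                  # each "machine" runs local SGD
        w_k = w.copy()
        for _ in range(local_steps):  # no communication inside this loop
            batch = rng.choice(s, size=32)
            w_k -= 0.1 * grad(w_k, batch)
        local.append(w_k)
    w = np.mean(local, axis=0)        # one averaging step per round
```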
arXiv Detail & Related papers (2020-05-05T18:08:23Z)
- GeneCAI: Genetic Evolution for Acquiring Compact AI [36.04715576228068]
Deep Neural Networks (DNNs) are evolving towards more complex architectures to achieve higher inference accuracy.
Model compression techniques can be leveraged to efficiently deploy such compute-intensive architectures on resource-limited mobile devices.
This paper introduces GeneCAI, a novel optimization method that automatically learns how to tune per-layer compression hyperparameters.
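A compact sketch of the kind of search such methods perform: a genetic algorithm over per-layer compression ratios, with a fitness trading accuracy against model size. The `evaluate` stub is a placeholder; in practice it would compress the network and measure accuracy and size or latency.

```python
import random

N_LAYERS = 8

def evaluate(ratios):
    """Stub: a real run would compress the DNN with these per-layer
    hyperparameters and measure accuracy and size. Faked here with a
    score that rewards compression but penalizes extreme pruning."""
    remaining = sum(1 - r for r in ratios) / N_LAYERS
    acc_proxy = 1 - max(0, max(ratios) - 0.8)  # heavy pruning hurts accuracy
    return acc_proxy - 0.5 * remaining          # accuracy/size trade-off

def child(a, b):
    c = [random.choice(p) for p in zip(a, b)]   # uniform crossover
    i = random.randrange(N_LAYERS)
    c[i] = min(0.95, max(0.0, c[i] + random.gauss(0, 0.1)))  # mutate one layer
    return c

pop = [[random.uniform(0, 0.9) for _ in range(N_LAYERS)] for _ in range(12)]
for _ in range(25):
    pop.sort(key=evaluate, reverse=True)
    pop = pop[:6] + [child(*random.sample(pop[:6], 2)) for _ in range(6)]
best = max(pop, key=evaluate)
```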
arXiv Detail & Related papers (2020-04-08T20:56:37Z)
- Towards Better Understanding of Adaptive Gradient Algorithms in Generative Adversarial Nets [71.05306664267832]
Adaptive algorithms perform gradient updates using the history of gradients and are ubiquitous in training deep neural networks.
In this paper we analyze a variant of the Optimistic Adagrad algorithm for nonconvex min-max problems.
Our experiments show that the advantage of adaptive over non-adaptive gradient algorithms in GAN training can be observed empirically.
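A simplified sketch of an optimistic update with Adagrad scaling on the classic bilinear saddle problem, where plain gradient descent-ascent cycles. This illustrates the flavor of the analyzed algorithm, not its exact form.

```python
import numpy as np

# Toy min-max problem: min_x max_y f(x, y) = x * y (saddle at the origin),
# the standard example where plain gradient descent-ascent fails to converge.
def grads(x, y):
    return y, x            # df/dx, df/dy

x, y = 1.0, 1.0
gx_prev, gy_prev = 0.0, 0.0
sx, sy = 1e-8, 1e-8        # Adagrad accumulators
eta = 0.1
for _ in range(200):
    gx, gy = grads(x, y)
    sx, sy = sx + gx**2, sy + gy**2
    # Optimistic update: extrapolate with 2*g_t - g_{t-1}, scaled by
    # Adagrad step sizes (a simplified sketch of the OAdagrad idea).
    x -= eta / np.sqrt(sx) * (2 * gx - gx_prev)
    y += eta / np.sqrt(sy) * (2 * gy - gy_prev)
    gx_prev, gy_prev = gx, gy
print(x, y)   # should approach the saddle point (0, 0)
```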
arXiv Detail & Related papers (2019-12-26T22:10:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.