Adaptability of Improved NEAT in Variable Environments
- URL: http://arxiv.org/abs/2201.07977v2
- Date: Mon, 3 Jul 2023 00:25:57 GMT
- Title: Adaptability of Improved NEAT in Variable Environments
- Authors: Destiny Bailey
- Abstract summary: NeuroEvolution of Augmenting Topologies (NEAT) was a novel Genetic Algorithm (GA) when it was introduced, but it has since fallen out of favor as newer GAs have outperformed it.
This paper furthers the research on this subject by implementing various versions of improved NEAT in a variable environment.
The improvements included, in every combination, are: recurrent connections, automatic feature selection, and increasing population size.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A large challenge in Artificial Intelligence (AI) is training control agents
that can properly adapt to variable environments. Environments in which the
conditions change can cause issues for agents trying to operate in them.
Building algorithms that can train agents to operate in these environments and
properly deal with the changing conditions is therefore important.
NeuroEvolution of Augmenting Topologies (NEAT) was a novel Genetic Algorithm
(GA) when it was introduced, but it has since fallen out of favor as newer GAs have outperformed it.
This paper furthers the research on this subject by implementing various
versions of improved NEAT in a variable environment to determine if NEAT can
perform well in these environments. The improvements included, in every
combination, are: recurrent connections, automatic feature selection, and
increasing population size. The recurrent connections improvement performed
extremely well. The automatic feature selection improvement was found to be
detrimental to performance, and the increasing population size improvement
lowered performance slightly but noticeably reduced computation requirements.
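The paper's implementation is not included in this listing; the sketch below shows how the three improvements could plausibly be configured with the third-party neat-python package. The setting names mirror neat-python's documented options, while the config file path and population size are illustrative assumptions, not the author's code.

```python
# Hypothetical sketch using the third-party neat-python package; the paper's
# own code is not published here, so these settings are assumptions.
import neat

config = neat.Config(
    neat.DefaultGenome, neat.DefaultReproduction,
    neat.DefaultSpeciesSet, neat.DefaultStagnation,
    "neat_config.ini",  # hypothetical config file path
)

# Recurrent connections: allow evolved networks to contain cycles.
config.genome_config.feed_forward = False

# Automatic feature selection (FS-NEAT style): start each genome with a single
# randomly chosen input connected, so evolution picks out useful inputs.
config.genome_config.initial_connection = "fs_neat_nohidden"

# Increased population size: broader search per generation, at more compute.
config.pop_size = 500

population = neat.Population(config)
```

With feed_forward disabled, evolved genomes would be evaluated with neat.nn.RecurrentNetwork rather than a feed-forward network, so state can persist across time steps in a changing environment.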
Related papers
- MARS: Unleashing the Power of Variance Reduction for Training Large Models [56.47014540413659]
Adaptive gradient algorithms like Adam, AdamW, and their variants have been central to the training of large models.
We propose a framework that reconciles preconditioned gradient optimization methods with variance reduction via a scaled momentum technique.
arXiv Detail & Related papers (2024-11-15T18:57:39Z)
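MARS's precise preconditioned update is given in the paper; the sketch below shows only the generic scaled recursive-momentum (STORM-style) variance-reduction step that the summary alludes to. Function and parameter names are illustrative assumptions.

```python
def vr_momentum_step(w, w_prev, m_prev, grad_fn, batch, lr=1e-3, a=0.1, scale=1.0):
    """One STORM-style variance-reduced momentum step (illustrative only)."""
    g_now = grad_fn(w, batch)        # stochastic gradient at the current iterate
    g_old = grad_fn(w_prev, batch)   # same minibatch, evaluated at the previous iterate
    # Recursive momentum: reuse the old estimate, corrected by the scaled
    # gradient difference, which shrinks the variance of the update direction.
    m = g_now + scale * (1.0 - a) * (m_prev - g_old)
    return w - lr * m, m
```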
- Cooperative coevolutionary Modified Differential Evolution with Distance-based Selection for Large-Scale Optimization Problems in noisy environments through an automatic Random Grouping [3.274290296343038]
We propose an automatic Random Grouping (aRG) to solve large-scale optimization problems (LSOPs) in noisy environments.
We also introduce Modified Differential Evolution with Distance-based Selection (MDE-DS) to enhance search ability in noisy environments.
Our proposal has broad prospects to solve LSOPs in noisy environments and can be easily extended to higher-dimensional problems.
arXiv Detail & Related papers (2022-09-02T01:37:17Z)
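The "automatic" component of aRG is the paper's contribution; the sketch below shows only the underlying random-grouping decomposition commonly used in cooperative coevolution. Names and defaults are illustrative.

```python
import random

def random_grouping(dim, n_groups, seed=None):
    """Split `dim` decision variables into `n_groups` disjoint random groups."""
    rng = random.Random(seed)
    idx = list(range(dim))
    rng.shuffle(idx)
    return [sorted(idx[g::n_groups]) for g in range(n_groups)]

# Each group is then optimized as a subcomponent while the remaining
# variables are held fixed at their best-known values.
groups = random_grouping(dim=1000, n_groups=10, seed=0)
```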
- Adaptive Self-supervision Algorithms for Physics-informed Neural Networks [59.822151945132525]
Physics-informed neural networks (PINNs) incorporate physical knowledge from the problem domain as a soft constraint on the loss function.
We study the impact of the location of the collocation points on the trainability of these models.
We propose a novel adaptive collocation scheme which progressively allocates more collocation points to areas where the model is making higher errors.
arXiv Detail & Related papers (2022-07-08T18:17:06Z)
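The paper's progressive allocation schedule is its contribution; in its simplest form, residual-based adaptive resampling of collocation points looks like the sketch below (the pde_residual callable and all defaults are assumptions).

```python
import numpy as np

def resample_collocation(pde_residual, n_keep, n_candidates=10000,
                         low=0.0, high=1.0, dim=2):
    """Keep the candidate points where the PDE residual is largest."""
    cand = np.random.uniform(low, high, size=(n_candidates, dim))
    scores = np.abs(pde_residual(cand))      # one residual per candidate point
    worst_first = np.argsort(scores)[::-1]   # highest-error points first
    return cand[worst_first[:n_keep]]        # new collocation set
```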
- Deep Surrogate Assisted Generation of Environments [7.217405582720078]
Quality diversity (QD) optimization has proven to be an effective component of environment generation algorithms.
We propose Deep Surrogate Assisted Generation of Environments (DSAGE), a sample-efficient QD environment generation algorithm.
Results in two benchmark domains show that DSAGE significantly outperforms existing QD environment generation algorithms.
arXiv Detail & Related papers (2022-06-09T00:14:03Z)
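Reading only the summary above, a DSAGE-style outer loop plausibly alternates between cheap surrogate exploitation and expensive ground-truth simulation; the skeleton below is a guess at that structure, with every function name hypothetical.

```python
def surrogate_assisted_qd(simulate, fit_surrogate, qd_optimize, n_outer=10):
    """Guessed DSAGE-style loop: exploit a cheap learned surrogate to propose
    environments, then verify a handful in the expensive real simulator."""
    data = []                                  # (environment, outcome) pairs
    for _ in range(n_outer):
        surrogate = fit_surrogate(data)        # learn from ground-truth data
        candidates = qd_optimize(surrogate)    # QD search on the surrogate
        for env in candidates:
            data.append((env, simulate(env)))  # expensive ground truth
    return data
```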
- Effective Mutation Rate Adaptation through Group Elite Selection [50.88204196504888]
This paper introduces the Group Elite Selection of Mutation Rates (GESMR) algorithm.
GESMR co-evolves a population of solutions and a population of MRs, such that each MR is assigned to a group of solutions.
With the same number of function evaluations and with almost no overhead, GESMR converges faster and to better solutions than previous approaches.
arXiv Detail & Related papers (2022-04-11T01:08:26Z)
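A minimal sketch of the group-wise assignment described above: each mutation rate (MR) mutates its own group of solutions, and MRs then compete on the improvement seen in their group. The survival rule used here (keep the better half of MRs, resample the rest around survivors) is an assumption, not necessarily the paper's exact operator.

```python
import numpy as np

def gesmr_generation(pop, fitness, mutation_rates, rng):
    """Illustrative GESMR-style step; `pop` and `mutation_rates` are arrays,
    `fitness` returns one score per row, `rng` is a numpy Generator."""
    k = len(mutation_rates)
    groups = np.array_split(rng.permutation(len(pop)), k)
    gains = np.empty(k)
    for g, (mr, idx) in enumerate(zip(mutation_rates, groups)):
        before = fitness(pop[idx])
        pop[idx] = pop[idx] + mr * rng.standard_normal(pop[idx].shape)
        gains[g] = np.max(fitness(pop[idx]) - before)  # best gain in the group
    # Assumed survival rule: keep the better half of MRs, resample the rest
    # as log-normal perturbations of randomly chosen survivors.
    elite = mutation_rates[np.argsort(gains)[-(k // 2 or 1):]]
    children = rng.choice(elite, size=k - len(elite)) \
        * np.exp(rng.standard_normal(k - len(elite)))
    return pop, np.concatenate([elite, children])
```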
- Task-Agnostic Morphology Evolution [94.97384298872286]
Current approaches that co-adapt morphology and behavior use a specific task's reward as a signal for morphology optimization.
This often requires expensive policy optimization and results in task-dependent morphologies that are not built to generalize.
We propose a new approach, Task-Agnostic Morphology Evolution (TAME), to alleviate both of these issues.
arXiv Detail & Related papers (2021-02-25T18:59:21Z)
- Automated Curriculum Learning for Embodied Agents: A Neuroevolutionary Approach [0.0]
We demonstrate how an evolutionary algorithm can be extended with a curriculum learning process that automatically selects the environmental conditions in which the evolving agents are evaluated.
The results collected on two benchmark problems, which require solving a task under significantly varying environmental conditions, demonstrate that the proposed method outperforms conventional algorithms and generates solutions that are robust to variations.
arXiv Detail & Related papers (2021-02-17T16:19:17Z)
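The summary above does not specify the selection criterion; one simple, commonly used heuristic is to concentrate evaluation on the conditions current agents handle worst, as in this hypothetical sketch.

```python
def pick_conditions(success_rate, n=5):
    """Focus evaluation on the environmental conditions that the current
    population of agents handles worst (a simple assumed criterion)."""
    ranked = sorted(success_rate, key=success_rate.get)  # worst first
    return ranked[:n]

# Example: success_rate maps a condition (e.g. wind strength) to a score.
hardest = pick_conditions({"calm": 0.9, "gusty": 0.4, "storm": 0.1}, n=2)
```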
- Instance Weighted Incremental Evolution Strategies for Reinforcement Learning in Dynamic Environments [11.076005074172516]
We propose a systematic incremental learning method for Evolution Strategies (ES) in dynamic environments.
The goal is to incrementally adjust the previously learned policy to a new one whenever the environment changes.
This paper introduces a family of scalable ES algorithms for RL domains that enables rapid learning adaptation to dynamic environments.
arXiv Detail & Related papers (2020-10-09T14:31:44Z)
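Based only on the summary above, an instance-weighted ES step might warm-start from the previously learned policy parameters and weight evaluation instances so that episodes from the changed environment count more. The sketch below is that guess, with all names and constants illustrative.

```python
import numpy as np

def weighted_es_step(theta, fitness, weights, rng,
                     pop_size=50, sigma=0.1, lr=0.02):
    """Guessed instance-weighted ES step. `theta` is warm-started from the
    previously learned policy; `fitness(candidate)` returns one score per
    evaluation instance; `weights` up-weight instances from the changed
    environment."""
    eps = rng.standard_normal((pop_size, theta.size))
    scores = np.array([weights @ fitness(theta + sigma * e) for e in eps])
    scores = (scores - scores.mean()) / (scores.std() + 1e-8)  # rank-stabilize
    return theta + (lr / (pop_size * sigma)) * (eps.T @ scores)
```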
- Maximum Mutation Reinforcement Learning for Scalable Control [25.935468948833073]
Reinforcement Learning (RL) has demonstrated data efficiency and optimal control over large state spaces at the cost of scalable performance.
We present the Evolution-based Soft Actor-Critic (ESAC), a scalable RL algorithm.
arXiv Detail & Related papers (2020-07-24T16:29:19Z)
- EOS: a Parallel, Self-Adaptive, Multi-Population Evolutionary Algorithm for Constrained Global Optimization [68.8204255655161]
EOS is a global optimization algorithm for constrained and unconstrained problems of real-valued variables.
It implements a number of improvements to the well-known Differential Evolution (DE) algorithm.
Results show that EOS is capable of achieving increased performance compared to state-of-the-art single-population self-adaptive DE algorithms.
arXiv Detail & Related papers (2020-07-09T10:19:22Z)
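EOS's specific improvements (parallelism, self-adaptation, multiple populations) are described in the paper; for background, the classic DE/rand/1/bin step it builds on looks like this sketch.

```python
import numpy as np

def de_rand_1_bin(pop, f=0.7, cr=0.9, seed=None):
    """Classic DE/rand/1/bin trial-vector generation (background for EOS)."""
    rng = np.random.default_rng(seed)
    n, d = pop.shape                          # needs n >= 4 distinct members
    trials = pop.copy()
    for i in range(n):
        a, b, c = rng.choice([j for j in range(n) if j != i],
                             size=3, replace=False)
        mutant = pop[a] + f * (pop[b] - pop[c])   # differential mutation
        cross = rng.random(d) < cr                # binomial crossover mask
        cross[rng.integers(d)] = True             # guarantee one gene crosses
        trials[i, cross] = mutant[cross]
    return trials  # each trial replaces pop[i] only if it scores better
```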
- Top-k Training of GANs: Improving GAN Performance by Throwing Away Bad Samples [67.11669996924671]
We introduce a simple (one line of code) modification to the Generative Adversarial Network (GAN) training algorithm.
When updating the generator parameters, we zero out the gradient contributions from the elements of the batch that the critic scores as 'least realistic'.
We show that this 'top-k update' procedure is a generally applicable improvement.
arXiv Detail & Related papers (2020-02-14T19:27:50Z)
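The summary describes the change concretely enough to sketch: during the generator update, keep only the k most-realistic critic scores so the remaining batch elements contribute zero gradient. In PyTorch-style code (names illustrative):

```python
import torch

def topk_generator_loss(critic, fake_images, k):
    """Keep only the k batch elements the critic rates most realistic;
    the rest contribute zero gradient to the generator update."""
    scores = critic(fake_images).flatten()   # one realism score per sample
    kept = torch.topk(scores, k).values      # drop the 'least realistic'
    return -kept.mean()                      # non-saturating generator loss
```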