Neuroevolution in Deep Learning: The Role of Neutrality
- URL: http://arxiv.org/abs/2102.08475v1
- Date: Tue, 16 Feb 2021 22:29:59 GMT
- Title: Neuroevolution in Deep Learning: The Role of Neutrality
- Authors: Edgar Galv\'an
- Abstract summary: Methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNN).
Evolutionary Algorithms (EAs) are gaining momentum as a computationally feasible method for the automated optimisation of DNNs.
This work discusses how neutrality, given certain conditions, can help to speed up the training/design of deep neural networks.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A variety of methods have been applied to the architectural configuration and
learning or training of artificial deep neural networks (DNN). These methods
play a crucial role in the success or failure of the DNN for most problems and
applications. Evolutionary Algorithms (EAs) are gaining momentum as a
computationally feasible method for the automated optimisation of DNNs.
Neuroevolution is a term which describes these processes of automated
configuration and training of DNNs using EAs. However, the automatic design
and/or training of these modern neural networks through evolutionary algorithms
is computationally expensive. Kimura's neutral theory of molecular evolution
states that the majority of evolutionary changes at molecular level are the
result of random fixation of selectively neutral mutations. A mutation from one
gene to another is neutral if it does not affect the phenotype. This work
discusses how neutrality, given certain conditions, can help to speed up the
training/design of deep neural networks.
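As a purely illustrative sketch of how neutrality can be exploited during search (this is not the method proposed in the paper; the toy fitness function, data, and all names below are assumptions), consider a (1+1)-style evolutionary loop that accepts any offspring whose fitness is no worse than its parent's, so selectively neutral mutations are free to accumulate:

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, x, y):
    """Toy 'phenotype': negative mean squared error of a linear model."""
    return -np.mean((x @ weights - y) ** 2)

# Toy regression data (illustrative only).
x = rng.normal(size=(32, 4))
y = x @ np.array([1.0, -2.0, 0.5, 0.0])

genotype = rng.normal(size=4)            # parent genotype (network weights)
f_parent = fitness(genotype, x, y)

for step in range(1000):
    child = genotype + rng.normal(scale=0.05, size=4)   # random mutation
    f_child = fitness(child, x, y)
    # Accept improvements AND selectively neutral moves (fitness unchanged up
    # to a small tolerance): neutral drift lets the search wander across
    # fitness plateaus instead of stalling on them.
    if f_child >= f_parent - 1e-9:
        genotype, f_parent = child, f_child

print("final fitness:", f_parent)
```

The key design choice is the acceptance rule: by not rejecting fitness-neutral offspring, the search can traverse "neutral networks" of genotypes and reach regions from which further improvements become accessible, which is the intuition behind the potential speed-up discussed above.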
Related papers
- Convolutional Conditional Neural Processes [6.532867867011488]
This thesis advances neural processes in three ways.
ConvNPs improve data efficiency by building in a symmetry called translation equivariance.
GNPs directly parametrise dependencies in the predictions of a neural process.
AR CNPs train a neural process without any modifications to the model or training procedure and, at test time, roll out the model in an autoregressive fashion.
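For intuition only, here is a minimal sketch of a generic autoregressive roll-out; the `predict` stand-in (a nearest-neighbour placeholder) and its signature are assumptions for illustration, not the thesis' actual model or API. Each target is predicted in turn and the sampled prediction is fed back into the context before the next one:

```python
import numpy as np

def predict(ctx_x, ctx_y, tgt_x):
    """Stand-in for a trained conditional neural process: returns a predictive
    mean (here just the y-value of the nearest context point) and a fixed
    standard deviation for each target input. A real CNP is a learned model."""
    ctx_x, ctx_y, tgt_x = map(np.asarray, (ctx_x, ctx_y, tgt_x))
    nearest = np.abs(tgt_x[:, None] - ctx_x[None, :]).argmin(axis=1)
    return ctx_y[nearest], np.full(tgt_x.shape, 0.1)

def autoregressive_rollout(ctx_x, ctx_y, tgt_x, rng):
    """Predict targets one at a time, appending each sampled prediction to the
    context set before predicting the next target."""
    ctx_x, ctx_y, samples = list(ctx_x), list(ctx_y), []
    for x in tgt_x:
        mean, std = predict(ctx_x, ctx_y, [x])
        y_sample = rng.normal(mean[0], std[0])   # sample from the predictive
        ctx_x.append(x)                          # feed the sample back in
        ctx_y.append(y_sample)
        samples.append(y_sample)
    return np.array(samples)

rng = np.random.default_rng(0)
print(autoregressive_rollout([0.0, 1.0], [0.0, 1.0], [0.25, 0.5, 0.75], rng))
```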
arXiv Detail & Related papers (2024-08-18T19:53:38Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Is it conceivable that neurogenesis, neural Darwinism, and species evolution could all serve as inspiration for the creation of evolutionary deep neural networks? [0.0]
Deep Neural Networks (DNNs) are built using artificial neural networks.
This paper emphasizes the importance of what we call two-dimensional brain evolution.
We also highlight the connection between the dropout method which is widely-used in regularizing DNNs and neurogenesis of the brain.
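For reference, here is a minimal sketch of standard (inverted) dropout, the regulariser being related to neurogenesis above; this is the textbook mechanism, not code from the paper:

```python
import numpy as np

def dropout(activations, p_drop, rng, training=True):
    """Inverted dropout: randomly silence units during training and rescale
    the survivors so the expected activation matches test-time behaviour."""
    if not training or p_drop == 0.0:
        return activations
    keep_mask = rng.random(activations.shape) >= p_drop   # Bernoulli keep-mask
    return activations * keep_mask / (1.0 - p_drop)

rng = np.random.default_rng(0)
hidden = rng.normal(size=(2, 5))        # a batch of hidden-layer activations
print(dropout(hidden, p_drop=0.5, rng=rng))
```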
arXiv Detail & Related papers (2023-04-06T14:51:20Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
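A minimal sketch of the symmetry in question (illustrative, not the paper's code): permuting the hidden neurons of a two-layer MLP, together with the matching rows and columns of its weight matrices, leaves the computed function unchanged, which is why neuron order carries no information.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer MLP: y = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(6, 3)), rng.normal(size=6)
W2, b2 = rng.normal(size=(2, 6)), rng.normal(size=2)
x = rng.normal(size=3)

def forward(W1, b1, W2, b2, x):
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

# Permute the 6 hidden neurons: reorder the rows of W1 and b1 and the matching
# columns of W2. The network still computes exactly the same function.
perm = rng.permutation(6)
same = np.allclose(forward(W1, b1, W2, b2, x),
                   forward(W1[perm], b1[perm], W2[:, perm], b2, x))
print(same)   # True
```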
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Neuroevolution of Physics-Informed Neural Nets: Benchmark Problems and Comparative Results [25.12291688711645]
Physics-informed neural networks (PINNs) are one of the key techniques at the forefront of recent advances.
PINNs' unique loss formulations lead to a high degree of complexity and ruggedness that may not be conducive for gradient descent.
Neuroevolution algorithms, with their superior global search capacity, may be a better choice for PINNs.
arXiv Detail & Related papers (2022-12-15T05:54:16Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Neuroevolutionary Multi-objective approaches to Trajectory Prediction in Autonomous Vehicles [2.9552300389898094]
We focus on the intersection of neuroevolution and evolutionary multi-objective optimization.
We study a rich architecture composed of a convolutional neural network (CNN) and a Long Short-Term Memory (LSTM) network.
We show how these objectives have either a positive or detrimental effect in neuroevolution for trajectory prediction in autonomous vehicles.
arXiv Detail & Related papers (2022-05-04T15:03:26Z)
- POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with two hierarchy populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV plays as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Neuroevolution in Deep Neural Networks: Current Trends and Future Challenges [0.0]
A variety of methods have been applied to the architectural configuration and learning or training of artificial deep neural networks (DNN).
Evolutionary Algorithms (EAs) are gaining momentum as a computationally feasible method for the automated optimisation and training of DNNs.
This paper presents a comprehensive survey, discussion and evaluation of the state-of-the-art works on using EAs for architectural configuration and training of DNNs.
arXiv Detail & Related papers (2020-06-09T17:28:25Z)