Is it conceivable that neurogenesis, neural Darwinism, and species
evolution could all serve as inspiration for the creation of evolutionary
deep neural networks?
- URL: http://arxiv.org/abs/2304.03122v2
- Date: Tue, 11 Apr 2023 13:58:32 GMT
- Title: Is it conceivable that neurogenesis, neural Darwinism, and species
evolution could all serve as inspiration for the creation of evolutionary
deep neural networks?
- Authors: Mohammed Al-Rawi
- Abstract summary: Deep Neural Networks (DNNs) are built using artificial neural networks.
This paper emphasizes the importance of what we call two-dimensional brain evolution.
We also highlight the connection between the dropout method, which is widely used for regularizing DNNs, and neurogenesis in the brain.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Deep Neural Networks (DNNs) are built using artificial neural networks. They
belong to the family of machine learning methods that learn from data and have
been used in a wide range of applications. DNNs are mainly handcrafted and
usually contain numerous layers. A research frontier has emerged that concerns
the automated construction of DNNs via evolutionary algorithms. This paper
emphasizes the importance of what we call two-dimensional brain evolution and
how it can inspire two-dimensional DNN evolutionary modeling. We also highlight
the connection between the dropout method, which is widely used for
regularizing DNNs, and neurogenesis in the brain, and how these concepts could
benefit DNN evolution. The paper concludes with several recommendations for
enhancing the automatic construction of DNNs.
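As an illustration of the kind of automated construction the abstract alludes to, the sketch below evolves a small population of multilayer perceptrons along two dimensions (depth and layer width) while also mutating a dropout rate as a crude stand-in for neurogenesis-like neuron turnover. This is a minimal sketch, assuming PyTorch and a synthetic binary-classification task; the genome encoding, mutation operators, and selection scheme are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch only: a tiny evolutionary search over MLP depth, width,
# and dropout rate. None of these choices come from the paper itself.
import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

# Toy dataset: two Gaussian blobs in 10 dimensions, binary classification.
X = torch.cat([torch.randn(200, 10) + 1.0, torch.randn(200, 10) - 1.0])
y = torch.cat([torch.zeros(200, dtype=torch.long), torch.ones(200, dtype=torch.long)])

def build(genome):
    """Genome = (depth, width, dropout rate); decode it into an MLP."""
    depth, width, p_drop = genome
    layers, d_in = [], 10
    for _ in range(depth):
        layers += [nn.Linear(d_in, width), nn.ReLU(), nn.Dropout(p_drop)]
        d_in = width
    layers.append(nn.Linear(d_in, 2))
    return nn.Sequential(*layers)

def fitness(genome, epochs=30):
    """Briefly train the decoded network and return its accuracy."""
    model = build(genome)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    model.eval()
    with torch.no_grad():
        return (model(X).argmax(dim=1) == y).float().mean().item()

def mutate(genome):
    """Perturb depth and width (the two 'dimensions') plus the dropout rate."""
    depth, width, p_drop = genome
    depth = max(1, depth + random.choice([-1, 0, 1]))
    width = max(4, width + random.choice([-8, 0, 8]))
    p_drop = min(0.8, max(0.0, p_drop + random.uniform(-0.1, 0.1)))
    return (depth, width, p_drop)

population = [(random.randint(1, 3), random.choice([8, 16, 32]), 0.2) for _ in range(6)]
for gen in range(5):
    scored = sorted(population, key=fitness, reverse=True)
    parents = scored[:3]  # simple truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(3)]
    print(f"generation {gen}: best genome {scored[0]}")
```

Truncation selection and single-parent mutation keep the loop short; a realistic neuroevolution system would add crossover, a held-out validation set for fitness, and far larger search budgets.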
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- A survey on learning models of spiking neural membrane systems and spiking neural networks [0.0]
Spiking neural networks (SNN) are a biologically inspired model of neural networks with certain brain-like properties.
In SNN, communication between neurons takes place through the spikes and spike trains.
SNPS can be considered a branch of SNN based more on the principles of formal automata.
arXiv Detail & Related papers (2024-03-27T14:26:41Z)
- Curriculum Design Helps Spiking Neural Networks to Classify Time Series [16.402675046686834]
Spiking Neural Networks (SNNs) have a greater potential for modeling time series data than Artificial Neural Networks (ANNs).
In this work, enlightened by brain-inspired science, we find that not only the structure but also the learning process should be human-like.
arXiv Detail & Related papers (2023-12-26T02:04:53Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- Exploiting Noise as a Resource for Computation and Learning in Spiking Neural Networks [32.0086664373154]
This study introduces the noisy spiking neural network (NSNN) and the noise-driven learning rule (NDL).
NSNN provides a theoretical framework that yields scalable, flexible, and reliable computation.
arXiv Detail & Related papers (2023-05-25T13:21:26Z)
- Models Developed for Spiking Neural Networks [0.5801044612920815]
Spiking neural networks (SNNs) have been around for a long time, and they have been investigated to understand the dynamics of the brain.
In this work, we reviewed the structures and performances of SNNs on image classification tasks.
The comparisons illustrate that these networks show great capabilities for more complicated problems.
arXiv Detail & Related papers (2022-12-08T16:18:53Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- Explainability Tools Enabling Deep Learning in Future In-Situ Real-Time Planetary Explorations [58.720142291102135]
Deep learning (DL) has proven to be an effective machine learning and computer vision technique.
Most Deep Neural Network (DNN) architectures are so complex that they are considered a 'black box'.
In this paper, we used integrated gradients to describe the attributions of each neuron to the output classes.
It provides a set of explainability tools (ET) that opens the black box of a DNN so that the individual contribution of neurons to category classification can be ranked and visualized.
arXiv Detail & Related papers (2022-01-15T07:10:00Z)
- Neuroevolution of a Recurrent Neural Network for Spatial and Working Memory in a Simulated Robotic Environment [57.91534223695695]
We evolved weights in a biologically plausible recurrent neural network (RNN) using an evolutionary algorithm to replicate the behavior and neural activity observed in rats.
Our method demonstrates how the dynamic activity in evolved RNNs can capture interesting and complex cognitive behavior.
arXiv Detail & Related papers (2021-02-25T02:13:52Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- A neural network walks into a lab: towards using deep nets as models for human behavior [0.0]
We argue why deep neural network models have the potential to be interesting models of human behavior.
We discuss how that potential can be more fully realized.
arXiv Detail & Related papers (2020-05-02T11:17:36Z)