Combining Neuro-Evolution of Augmenting Topologies with Convolutional Neural Networks
- URL: http://arxiv.org/abs/2211.16978v1
- Date: Thu, 20 Oct 2022 18:41:57 GMT
- Title: Combining Neuro-Evolution of Augmenting Topologies with Convolutional Neural Networks
- Authors: Jan Hohenheim, Mathias Fischler, Sara Zarubica, Jeremy Stucki
- Abstract summary: We combine NeuroEvolution of Augmenting Topologies (NEAT) with Convolutional Neural Networks (CNNs) and propose such a system using blocks of Residual Networks (ResNets).
We explain how our suggested system can only be built once additional optimizations have been made, as genetic algorithms are far more demanding than training via backpropagation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current deep convolutional networks are fixed in their topology. We explore
the possibilities of making the convolutional topology a parameter itself by
combining NeuroEvolution of Augmenting Topologies (NEAT) with Convolutional
Neural Networks (CNNs) and propose such a system using blocks of Residual
Networks (ResNets). We then explain how our suggested system can only be built
once additional optimizations have been made, as genetic algorithms are far
more demanding than training via backpropagation. On the way there we explain
most of those buzzwords and offer a gentle and brief introduction to the most
important modern areas of machine learning.
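A rough sketch of the approach described above (not the authors' implementation): genomes whose genes are ResNet-style blocks are mutated and selected NEAT-style. Genome, BlockGene, and the toy fitness function are hypothetical names; a real fitness evaluation would have to train and validate each decoded CNN, which is exactly why the genetic search is far more expensive than plain backpropagation.

```python
# Hypothetical sketch: NEAT-style evolution over a genome of ResNet-style blocks.
import random
from dataclasses import dataclass, field

@dataclass
class BlockGene:
    out_channels: int      # width of the ResNet block
    stride: int = 1        # stride 2 halves the spatial resolution
    enabled: bool = True   # disabled genes are skipped when building the CNN

@dataclass
class Genome:
    blocks: list = field(default_factory=lambda: [BlockGene(16)])

    def mutate(self, rng: random.Random) -> None:
        r = rng.random()
        if r < 0.3:    # structural mutation: insert a new block (topology grows)
            i = rng.randrange(len(self.blocks) + 1)
            self.blocks.insert(i, BlockGene(rng.choice([16, 32, 64]),
                                            stride=rng.choice([1, 2])))
        elif r < 0.6:  # parametric mutation: perturb a block's width
            rng.choice(self.blocks).out_channels = rng.choice([16, 32, 64, 128])
        else:          # NEAT-style enable/disable toggle of a gene
            gene = rng.choice(self.blocks)
            gene.enabled = not gene.enabled

def fitness(genome: Genome) -> float:
    # Placeholder: the real system would build the CNN from the enabled blocks,
    # train it, and return e.g. validation accuracy.
    active = [b for b in genome.blocks if b.enabled]
    return -abs(len(active) - 4)  # toy objective: prefer about four active blocks

def evolve(pop_size: int = 8, generations: int = 20, seed: int = 0) -> Genome:
    rng = random.Random(seed)
    population = [Genome() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = Genome([BlockGene(b.out_channels, b.stride, b.enabled)
                            for b in parent.blocks])
            child.mutate(rng)
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print([(b.out_channels, b.stride) for b in best.blocks if b.enabled])
```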
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
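As a toy illustration of this "computational graph of parameters" view (not the paper's actual encoding), the sketch below flattens a small MLP into nodes that carry biases and edges that carry individual weights; all names and the encoding itself are hypothetical.

```python
# Illustrative only: turning a small MLP's weights into a parameter graph
# (nodes = neurons with bias features, edges = individual weights).
import random

def mlp_to_graph(layer_sizes, weights, biases):
    """weights[l][i][j]: weight from neuron i in layer l to neuron j in layer l+1."""
    nodes, edges, offsets = [], [], []
    idx = 0
    for l, n in enumerate(layer_sizes):
        offsets.append(idx)
        for j in range(n):
            bias = biases[l - 1][j] if l > 0 else 0.0  # input neurons carry no bias
            nodes.append({"id": idx, "layer": l, "bias": bias})
            idx += 1
    for l in range(len(layer_sizes) - 1):
        for i in range(layer_sizes[l]):
            for j in range(layer_sizes[l + 1]):
                edges.append({"src": offsets[l] + i,
                              "dst": offsets[l + 1] + j,
                              "weight": weights[l][i][j]})
    return nodes, edges

if __name__ == "__main__":
    rng = random.Random(0)
    sizes = [2, 3, 1]
    W = [[[rng.gauss(0, 1) for _ in range(sizes[l + 1])] for _ in range(sizes[l])]
         for l in range(len(sizes) - 1)]
    b = [[rng.gauss(0, 1) for _ in range(sizes[l + 1])] for l in range(len(sizes) - 1)]
    nodes, edges = mlp_to_graph(sizes, W, b)
    print(len(nodes), "nodes,", len(edges), "edges")
```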
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- When Deep Learning Meets Polyhedral Theory: A Survey [6.899761345257773]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks.
Meanwhile, the structure of neural networks converged back to simpler representations based on piecewise linear functions.
arXiv Detail & Related papers (2023-04-29T11:46:53Z)
- An Artificial Neural Network Functionalized by Evolution [2.0625936401496237]
We propose a hybrid model which combines the tensor calculus of feed-forward neural networks with Pseudo-Darwinian mechanisms.
This allows for finding topologies that are well adapted for elaboration of strategies, control problems or pattern recognition tasks.
In particular, the model can provide adapted topologies at early evolutionary stages, as well as 'structural convergence', which can find applications in robotics, big data and artificial life.
arXiv Detail & Related papers (2022-05-16T14:49:58Z)
- Overcoming Catastrophic Forgetting in Graph Neural Networks [50.900153089330175]
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks.
We propose a novel scheme dedicated to overcoming this problem and hence strengthening continual learning in graph neural networks (GNNs).
At the heart of our approach is a generic module, termed topology-aware weight preserving (TWP).
arXiv Detail & Related papers (2020-12-10T22:30:25Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges to reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and offers adaptability to larger search spaces and different tasks.
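A minimal sketch of this differentiable-connectivity idea, assuming PyTorch and not reproducing the paper's method: every edge of a small complete DAG gets a learnable gate, so the connectivity itself receives gradients during training.

```python
# Hedged sketch: learnable edge gates make the connectivity differentiable.
import torch
import torch.nn as nn

class DifferentiableConnectivity(nn.Module):
    def __init__(self, num_nodes: int, dim: int):
        super().__init__()
        self.num_nodes = num_nodes
        # one learnable op per node; plain linear layers for illustration
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_nodes))
        # edge_logits[i, j] gates the connection from node i to node j (i < j)
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = [x]  # node inputs start from the block input
        for j in range(self.num_nodes):
            # weighted sum over all earlier nodes: a soft, trainable topology
            gates = torch.sigmoid(self.edge_logits[: len(outputs), j])
            agg = sum(g * h for g, h in zip(gates, outputs))
            outputs.append(torch.relu(self.ops[j](agg)))
        return outputs[-1]

if __name__ == "__main__":
    block = DifferentiableConnectivity(num_nodes=4, dim=8)
    loss = block(torch.randn(2, 8)).sum()
    loss.backward()  # gradients flow into edge_logits, i.e. into the connectivity
    print(block.edge_logits.grad.abs().sum() > 0)
```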
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks [54.27962244835622]
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (ResNet) architectures.
arXiv Detail & Related papers (2020-07-03T01:37:16Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Topological Insights into Sparse Neural Networks [16.515620374178535]
We introduce an approach to understand and compare sparse neural network topologies from the perspective of graph theory.
We first propose Neural Network Sparse Topology Distance (NNSTD) to measure the distance between different sparse neural networks.
We show that adaptive sparse connectivity can always unveil a plenitude of sparse sub-networks with very different topologies which outperform the dense model.
arXiv Detail & Related papers (2020-06-24T22:27:21Z)
- Verifying Recurrent Neural Networks using Invariant Inference [0.0]
We propose a novel approach for verifying properties of a widespread variant of neural networks, called recurrent neural networks.
Our approach is based on the inference of invariants, which allow us to reduce the complex problem of verifying recurrent networks into simpler, non-recurrent problems.
arXiv Detail & Related papers (2020-04-06T08:08:24Z)
- NeuroFabric: Identifying Ideal Topologies for Training A Priori Sparse Networks [2.398608007786179]
Long training times of deep neural networks are a bottleneck in machine learning research.
We provide a theoretical foundation for the choice of intra-layer topology.
We show that seemingly similar topologies can often have a large difference in attainable accuracy.
arXiv Detail & Related papers (2020-02-19T18:29:18Z)
- MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
The use of gradient descent combined with nonconvexity renders learning susceptible to initialization problems.
We propose fusing neighboring layers of deeper networks that are trained with random variables.
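For illustration only (this is the motivating identity, not the paper's initialization method): two stacked affine layers with no nonlinearity in between collapse exactly into a single affine layer, which is the basic sense in which neighboring layers can be fused.

```python
# Exact special case behind layer fusion: (W2, b2) after (W1, b1) with no
# nonlinearity in between is equivalent to a single affine layer.
import numpy as np

rng = np.random.default_rng(0)
d = 16
W1, b1 = rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=d)
W2, b2 = rng.normal(size=(d, d)) / np.sqrt(d), rng.normal(size=d)

# Fused layer: y = W2 (W1 x + b1) + b2 = (W2 W1) x + (W2 b1 + b2)
W_fused = W2 @ W1
b_fused = W2 @ b1 + b2

x = rng.normal(size=d)
two_layer = W2 @ (W1 @ x + b1) + b2
one_layer = W_fused @ x + b_fused
print(np.allclose(two_layer, one_layer))  # True: the fusion is exact here
```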
arXiv Detail & Related papers (2020-01-28T18:25:15Z)