Evolving Neural Networks through a Reverse Encoding Tree
- URL: http://arxiv.org/abs/2002.00539v2
- Date: Tue, 31 Mar 2020 21:00:11 GMT
- Title: Evolving Neural Networks through a Reverse Encoding Tree
- Authors: Haoling Zhang, Chao-Han Huck Yang, Hector Zenil, Narsis A. Kiani, Yue Shen, Jesper N. Tegner
- Abstract summary: This paper advances a method which incorporates a type of topological edge coding, named Reverse Encoding Tree (RET), for evolving scalable neural networks efficiently.
Using RET, two types of approaches -- NEAT with Binary search encoding (Bi-NEAT) and NEAT with Golden-Section search encoding (GS-NEAT) -- have been designed to solve problems in benchmark continuous learning environments.
- Score: 9.235550900581764
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: NeuroEvolution is one of the most competitive evolutionary learning
frameworks for designing novel neural networks for use in specific tasks, such
as logic circuit design and digital gaming. However, the application of
benchmark methods such as the NeuroEvolution of Augmenting Topologies (NEAT)
remains a challenge, in terms of their computational cost and search time
inefficiency. This paper advances a method which incorporates a type of
topological edge coding, named Reverse Encoding Tree (RET), for evolving
scalable neural networks efficiently. Using RET, two types of approaches --
NEAT with Binary search encoding (Bi-NEAT) and NEAT with Golden-Section search
encoding (GS-NEAT) -- have been designed to solve problems in benchmark
continuous learning environments such as logic gates, Cartpole, and Lunar
Lander, and tested against classical NEAT and FS-NEAT as baselines.
Additionally, we conduct a robustness test to evaluate the resilience of the
proposed NEAT algorithms. The results show that the two proposed strategies
deliver improved performance, characterized by (1) a higher accumulated reward
within a finite number of time steps; (2) using fewer episodes to solve
problems in targeted environments, and (3) maintaining adaptive robustness
under noisy perturbations, which outperform the baselines in all tested cases.
Our analysis also demonstrates that RET opens up potential future research
directions in dynamic environments. Code is available from
https://github.com/HaolingZHANG/ReverseEncodingTree.
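For orientation, the Golden-Section encoding named above builds on the classical one-dimensional golden-section search. The sketch below shows that search primitive applied to maximizing a toy fitness function; it is an illustration only and not the authors' RET implementation, which lives in the linked repository.
```python
# Illustrative sketch: classical golden-section search for maximizing a 1-D
# fitness function. This shows the search primitive behind GS-NEAT's encoding,
# not the authors' actual RET implementation.
import math

INV_PHI = (math.sqrt(5) - 1) / 2  # 1/phi, about 0.618


def golden_section_maximize(fitness, low, high, tol=1e-4):
    """Return x in [low, high] that approximately maximizes `fitness`."""
    a, b = low, high
    c = b - INV_PHI * (b - a)
    d = a + INV_PHI * (b - a)
    fc, fd = fitness(c), fitness(d)
    while abs(b - a) > tol:
        if fc > fd:          # maximum lies in [a, d]
            b, d, fd = d, c, fc
            c = b - INV_PHI * (b - a)
            fc = fitness(c)
        else:                # maximum lies in [c, b]
            a, c, fc = c, d, fd
            d = a + INV_PHI * (b - a)
            fd = fitness(d)
    return (a + b) / 2


if __name__ == "__main__":
    # Toy fitness landscape with a single peak at x = 0.3.
    peak = golden_section_maximize(lambda x: -(x - 0.3) ** 2, 0.0, 1.0)
    print(f"estimated peak at {peak:.4f}")
```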
Related papers
- Developing Convolutional Neural Networks using a Novel Lamarckian Co-Evolutionary Algorithm [1.2425910171551517]
This paper introduces LCoDeepNEAT, an instantiation of Lamarckian genetic algorithms.
LCoDeepNEAT co-evolves CNN architectures and their respective final layer weights.
Our method yields a notable improvement in the classification accuracy of candidate solutions, ranging from 2% to 5.6%.
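Lamarckian evolution means that traits acquired during an individual's lifetime, here the trained final-layer weights, are written back into the genotype. The sketch below illustrates only that write-back step, using hypothetical Candidate and train_final_layer stand-ins rather than LCoDeepNEAT's code.
```python
# Minimal sketch of Lamarckian inheritance: weights learned while evaluating a
# candidate are written back into its genome, so offspring inherit the acquired
# trait. `Candidate` and `train_final_layer` are hypothetical stand-ins, not
# LCoDeepNEAT's actual classes.
import random
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Candidate:
    architecture: List[int]                          # e.g. filters per conv block
    final_layer_weights: List[float] = field(default_factory=list)
    fitness: float = 0.0


def train_final_layer(candidate: Candidate) -> Tuple[List[float], float]:
    """Stand-in for a short gradient-descent run on the final layer."""
    trained = [random.gauss(0.0, 0.1) for _ in range(10)]
    accuracy = random.random()                       # placeholder validation score
    return trained, accuracy


def evaluate_lamarckian(candidate: Candidate) -> Candidate:
    trained_weights, accuracy = train_final_layer(candidate)
    candidate.fitness = accuracy
    # Lamarckian step: the acquired weights become part of the heritable genome.
    candidate.final_layer_weights = trained_weights
    return candidate
```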
arXiv Detail & Related papers (2024-10-29T19:26:23Z)
- Stochastic Spiking Neural Networks with First-to-Spike Coding [7.955633422160267]
Spiking Neural Networks (SNNs) are known for their bio-plausibility and energy efficiency.
In this work, we explore the merger of novel computing and information encoding schemes in SNN architectures.
We investigate the tradeoffs of our proposal in terms of accuracy, inference latency, spiking sparsity, energy consumption, and datasets.
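First-to-spike coding carries information in when a neuron fires rather than how often, so stronger inputs produce earlier spikes. The minimal latency-encoding sketch below illustrates the coding scheme itself, not this paper's stochastic SNN model.
```python
# Illustrative time-to-first-spike (latency) encoding: each input intensity in
# [0, 1] maps to a single spike time, with stronger inputs firing earlier.
# This sketches the coding scheme only, not the paper's stochastic SNN model.
from typing import List, Optional


def first_to_spike_encode(intensities: List[float],
                          t_max: float = 100.0) -> List[Optional[float]]:
    spike_times: List[Optional[float]] = []
    for x in intensities:
        if x <= 0.0:
            spike_times.append(None)                  # silent neuron for zero input
        else:
            spike_times.append(t_max * (1.0 - min(x, 1.0)))
    return spike_times


def first_to_spike_decode(spike_times: List[Optional[float]]) -> int:
    """Classify by the index of the earliest spike (-1 if nothing fired)."""
    firing = [(t, i) for i, t in enumerate(spike_times) if t is not None]
    return min(firing)[1] if firing else -1


if __name__ == "__main__":
    times = first_to_spike_encode([0.0, 0.2, 0.9])    # strongest input fires first
    print(times, first_to_spike_decode(times))
```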
arXiv Detail & Related papers (2024-04-26T22:52:23Z)
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are built on homogeneous neurons that use a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
- SA-CNN: Application to text categorization issues using simulated annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN neural networks for text classification tasks based on Text-CNN neural networks.
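Simulated annealing accepts worse configurations with a probability that decays as a temperature schedule cools, which helps it escape local optima when searching discrete hyperparameters. The loop below is a generic annealing sketch over hypothetical Text-CNN hyperparameters, not the SA-CNN implementation.
```python
# Generic simulated-annealing loop over a small hyperparameter space.
# `evaluate` is a hypothetical stand-in for training a Text-CNN and returning
# validation accuracy; this is not the SA-CNN implementation.
import math
import random

SEARCH_SPACE = {
    "filters": [32, 64, 128, 256],
    "kernel_size": [3, 4, 5],
    "dropout": [0.1, 0.3, 0.5],
}


def evaluate(config: dict) -> float:
    """Stand-in objective: replace with real training + validation accuracy."""
    return random.random()


def neighbor(config: dict) -> dict:
    """Perturb one hyperparameter at random."""
    key = random.choice(list(SEARCH_SPACE))
    return {**config, key: random.choice(SEARCH_SPACE[key])}


def anneal(steps: int = 200, t_start: float = 1.0, cooling: float = 0.98):
    current = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    current_score, temperature = evaluate(current), t_start
    best, best_score = current, current_score
    for _ in range(steps):
        candidate = neighbor(current)
        score = evaluate(candidate)
        # Always accept improvements; accept regressions with Boltzmann probability.
        accept = score > current_score or \
            random.random() < math.exp((score - current_score) / temperature)
        if accept:
            current, current_score = candidate, score
            if score > best_score:
                best, best_score = candidate, score
        temperature *= cooling
    return best, best_score
```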
arXiv Detail & Related papers (2023-03-13T14:27:34Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
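For reference, the threshold activation and the weight-decay regularized training objective referred to here take the standard form below; the paper's convex reformulation itself is not reproduced.
```latex
% Threshold (unit-step) activation and a weight-decay regularized training loss.
\sigma(z) = \mathbb{1}\{z \ge 0\}, \qquad
\min_{\{W_l\}} \; \sum_{i=1}^{n} \ell\big(f_{\{W_l\}}(x_i),\, y_i\big)
  \;+\; \lambda \sum_{l} \lVert W_l \rVert_F^2 .
```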
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics, defined by minimizing the population loss, that are more suitable for active learning than the metric used in state-of-the-art (SOTA) related work.
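In the non-parametric streaming setting, the learner sees one unlabeled point at a time and must decide immediately whether to pay for its label. The generic loop below, with hypothetical uncertainty and labeling callables, illustrates that setting; the paper's regret-minimizing query rules are not reproduced.
```python
# Generic streaming active-learning loop: the learner sees one unlabeled point
# at a time and queries its label only when the model is uncertain. The
# `uncertainty`, `query_label`, and `update_model` callables are hypothetical
# placeholders, not the paper's algorithms.
from typing import Callable, Iterable, Tuple


def stream_active_learn(stream: Iterable,
                        uncertainty: Callable[[object], float],
                        query_label: Callable[[object], int],
                        update_model: Callable[[object, int], None],
                        threshold: float = 0.2) -> Tuple[int, int]:
    """Return (points seen, labels queried) after one pass over the stream."""
    seen = queried = 0
    for x in stream:
        seen += 1
        if uncertainty(x) > threshold:   # pay the labeling cost only when useful
            update_model(x, query_label(x))
            queried += 1
    return seen, queried
```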
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Acceleration techniques for optimization over trained neural network ensembles [1.0323063834827415]
We study optimization problems where the objective function is modeled through feedforward neural networks with rectified linear unit activation.
We present a mixed-integer linear program based on existing popular big-$M$ formulations for optimizing over a single neural network.
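The big-$M$ device encodes a trained ReLU unit $y = \max(0, w^\top x + b)$ with one binary indicator per neuron. A standard form of those constraints is sketched below; the ensemble-specific acceleration techniques are not reproduced.
```latex
% Standard big-M MILP encoding of a trained ReLU neuron y = max(0, w^T x + b),
% with binary indicator z and a sufficiently large constant M.
y \ge w^\top x + b, \qquad
y \le w^\top x + b + M(1 - z), \qquad
y \le M z, \qquad
y \ge 0, \qquad z \in \{0, 1\}.
```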
arXiv Detail & Related papers (2021-12-13T20:50:54Z)
- A Continuous Optimisation Benchmark Suite from Neural Network Regression [0.0]
Training neural networks is an optimisation task that has gained prominence with the recent successes of deep learning.
Gradient descent variants are by far the most common choice, given their trusted performance on large-scale machine learning tasks.
We contribute CORNN, a suite for benchmarking the performance of any continuous black-box algorithm on neural network training problems.
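Benchmarking a black-box optimizer on network training amounts to exposing the training loss as a plain function of a flat parameter vector. The sketch below builds such an objective for a small regression network with numpy; it mimics the setup in spirit only and is not CORNN's actual API.
```python
# Minimal view of NN regression training as a continuous black-box objective:
# a flat parameter vector goes in, a scalar loss comes out. This is an
# illustrative wrapper, not CORNN's actual API.
import numpy as np


def make_objective(x: np.ndarray, y: np.ndarray, hidden: int = 8):
    """Return f(theta) = MSE of a 1-hidden-layer tanh network on (x, y)."""
    d = x.shape[1]
    n_params = d * hidden + hidden + hidden + 1      # W1, b1, w2, b2

    def objective(theta: np.ndarray) -> float:
        w1 = theta[: d * hidden].reshape(d, hidden)
        b1 = theta[d * hidden: d * hidden + hidden]
        w2 = theta[d * hidden + hidden: d * hidden + 2 * hidden]
        b2 = theta[-1]
        pred = np.tanh(x @ w1 + b1) @ w2 + b2
        return float(np.mean((pred - y) ** 2))

    return objective, n_params


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x, y = rng.normal(size=(64, 3)), rng.normal(size=64)
    f, n = make_objective(x, y)
    # Any continuous black-box optimizer can now evaluate f over R^n, e.g.:
    print(f(rng.normal(size=n)))
```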
arXiv Detail & Related papers (2021-09-12T20:24:11Z)
- Parallelization Techniques for Verifying Neural Networks [52.917845265248744]
We introduce an algorithm that solves the verification problem in an iterative manner and explore two partitioning strategies.
We also introduce a highly parallelizable pre-processing algorithm that uses the neuron activation phases to simplify the neural network verification problems.
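A common way to parallelize verification is to split the input region into sub-boxes and check each independently, since the property holds on the full box exactly when it holds on every sub-box. The sketch below bisects along the widest dimension and dispatches a placeholder verify_box over a process pool; it is illustrative and not the paper's algorithm.
```python
# Illustrative input-domain partitioning for parallel verification: split the
# input box along its widest dimension and verify the sub-boxes independently.
# `verify_box` is a placeholder (e.g. an interval-bound or LP-based check),
# not the paper's verification routine.
from concurrent.futures import ProcessPoolExecutor
from typing import List, Tuple

Box = List[Tuple[float, float]]  # per-dimension (lower, upper) bounds


def split_box(box: Box, depth: int) -> List[Box]:
    """Recursively bisect the widest dimension `depth` times."""
    if depth == 0:
        return [box]
    widths = [hi - lo for lo, hi in box]
    k = widths.index(max(widths))
    lo, hi = box[k]
    mid = (lo + hi) / 2
    left = box[:k] + [(lo, mid)] + box[k + 1:]
    right = box[:k] + [(mid, hi)] + box[k + 1:]
    return split_box(left, depth - 1) + split_box(right, depth - 1)


def verify_box(box: Box) -> bool:
    """Placeholder: return True if the property holds on this sub-box."""
    return True


def parallel_verify(box: Box, depth: int = 4) -> bool:
    sub_boxes = split_box(box, depth)
    with ProcessPoolExecutor() as pool:
        # The property holds on the full box iff it holds on every sub-box.
        return all(pool.map(verify_box, sub_boxes))


if __name__ == "__main__":
    print(parallel_verify([(-1.0, 1.0), (-1.0, 1.0)]))
```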
arXiv Detail & Related papers (2020-04-17T20:21:47Z)
- Binary Neural Networks: A Survey [126.67799882857656]
The binary neural network serves as a promising technique for deploying deep models on resource-limited devices.
The binarization inevitably causes severe information loss, and even worse, its discontinuity brings difficulty to the optimization of the deep network.
We present a survey of these algorithms, mainly categorized into the native solutions directly conducting binarization, and the optimized ones using techniques like minimizing the quantization error, improving the network loss function, and reducing the gradient error.
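The "native" binarization approach typically applies the sign function in the forward pass and a clipped straight-through estimator (STE) in the backward pass, because sign has zero gradient almost everywhere. Below is a minimal PyTorch-style sketch of that pattern, not any specific BNN codebase.
```python
# Minimal sign binarization with a clipped straight-through estimator (STE):
# the forward pass uses sign(x); the backward pass lets gradients through only
# where |x| <= 1. A generic sketch of the surveyed pattern.
import torch


class BinarizeSTE(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Clipped identity: block gradients where |x| > 1 to keep weights bounded.
        return grad_output * (x.abs() <= 1).to(grad_output.dtype)


class BinaryLinear(torch.nn.Module):
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(out_features, in_features) * 0.1)

    def forward(self, x):
        return torch.nn.functional.linear(x, BinarizeSTE.apply(self.weight))


if __name__ == "__main__":
    layer = BinaryLinear(4, 2)
    loss = layer(torch.randn(3, 4)).sum()
    loss.backward()                      # gradients flow to the latent real weights
    print(layer.weight.grad.shape)       # torch.Size([2, 4])
```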
arXiv Detail & Related papers (2020-03-31T16:47:20Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
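A rectified linear postsynaptic potential, as the title suggests, grows linearly with time after the presynaptic spike. The notation below is an assumed form given for illustration, not quoted from the paper: the kernel and the resulting membrane potential read
```latex
% Assumed form of a rectified linear postsynaptic potential (ReL-PSP) kernel
% and the resulting membrane potential; t_j is the presynaptic spike time.
K(t - t_j) = (t - t_j)\,\mathbb{1}\{t > t_j\}, \qquad
u_i(t) = \sum_{j} w_{ij}\, K(t - t_j).
```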
arXiv Detail & Related papers (2020-03-26T11:13:07Z)