Modularity in NEAT Reinforcement Learning Networks
- URL: http://arxiv.org/abs/2205.06451v1
- Date: Fri, 13 May 2022 05:18:18 GMT
- Title: Modularity in NEAT Reinforcement Learning Networks
- Authors: Humphrey Munn, Marcus Gallagher
- Abstract summary: This paper shows that "NeuroEvolution of Augmenting Topologies" (NEAT) networks seem to rapidly increase in modularity over time.
Surprisingly, NEAT tends towards increasingly modular networks even when network fitness converges.
It was shown that the ideal level of network modularity in the explored parameter space is highly dependent on other network variables.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modularity is essential to many well-performing structured systems, as it is
a useful means of managing complexity [8]. An analysis of modularity in neural
networks produced by machine learning algorithms can offer valuable insight
into the workings of such algorithms and how modularity can be leveraged to
improve performance. However, this property is often overlooked in the
neuroevolutionary literature, so the modular nature of many learning algorithms
is unknown. This property was assessed on the popular algorithm "NeuroEvolution
of Augmenting Topologies" (NEAT) for standard simulation benchmark control
problems due to NEAT's ability to optimise network topology. This paper shows
that NEAT networks seem to rapidly increase in modularity over time with the
rate and convergence dependent on the problem. Interestingly, NEAT tends
towards increasingly modular networks even when network fitness converges. It
was shown that the ideal level of network modularity in the explored parameter
space is highly dependent on other network variables, dispelling theories that
modularity has a straightforward relationship to network performance. This is
further supported by demonstrating that directly rewarding modularity did not
improve fitness.
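A minimal, hypothetical sketch of how the modularity of an evolved network might be quantified, assuming Newman's Q metric over a partition found by greedy modularity maximisation with networkx; the paper's exact metric and partitioning may differ, and the toy edge list is purely illustrative:

```python
import networkx as nx
from networkx.algorithms import community

def network_modularity(edges):
    """Newman's Q on an undirected view of the network: Q is high when
    connections are dense within modules and sparse between them."""
    G = nx.Graph()
    G.add_edges_from(edges)
    partition = community.greedy_modularity_communities(G)
    return community.modularity(G, partition)

# Toy "genome": two densely connected clusters joined by a single link.
edges = [(0, 1), (1, 2), (0, 2),   # module A
         (3, 4), (4, 5), (3, 5),   # module B
         (2, 3)]                   # lone inter-module connection
print(network_modularity(edges))   # ~0.36: clearly modular for 6 nodes

# The modularity-rewarding variant the abstract mentions could, in
# principle, shape fitness as  fitness + lam * Q  (lam is assumed here);
# the paper found this did not improve fitness.
```

Tracked per generation over the evolving population, a measure like this is what would reveal the rising-modularity trend described above.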
Related papers
- Modular Growth of Hierarchical Networks: Efficient, General, and Robust Curriculum Learning (arXiv, 2024-06-10)
We show that for a given classical, non-modular recurrent neural network (RNN), an equivalent modular network will perform better across multiple metrics.
We demonstrate that the inductive bias introduced by the modular topology is strong enough for the network to perform well even when the connectivity within modules is fixed (see the sketch below).
Our findings suggest that gradual modular growth of RNNs could provide advantages for learning increasingly complex tasks on evolutionary timescales.
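A minimal numpy sketch of the fixed modular topology idea, under invented assumptions (module count, sizes, input width, and inter-module sparsity are not from the paper): a binary mask keeps recurrent connectivity dense within modules and sparse between them, and can be held fixed while the weights train.

```python
import numpy as np

n_modules, module_size = 4, 8                  # assumed layout
n = n_modules * module_size
rng = np.random.default_rng(0)

# Dense within-module blocks on the diagonal of the mask...
mask = np.zeros((n, n))
for m in range(n_modules):
    s = slice(m * module_size, (m + 1) * module_size)
    mask[s, s] = 1.0
# ...plus a few random inter-module links.
mask = np.maximum(mask, (rng.random((n, n)) < 0.02).astype(float))

W_rec = rng.standard_normal((n, n)) * 0.1 * mask   # masked recurrent weights
W_in = rng.standard_normal((n, 3)) * 0.1           # 3 assumed input features

def rnn_step(h, x):
    """One vanilla RNN step restricted to the modular topology."""
    return np.tanh(W_rec @ h + W_in @ x)

h = rnn_step(np.zeros(n), np.ones(3))
```

Re-applying the mask after every update keeps the topology modular throughout training.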
- How neural networks learn to classify chaotic time series (arXiv, 2023-06-04)
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
- Generalization and Estimation Error Bounds for Model-based Neural Networks (arXiv, 2023-04-19)
We show that the generalization abilities of model-based networks for sparse recovery outperform those of regular ReLU networks.
We derive practical design rules that allow one to construct model-based networks with guaranteed high generalization.
- Neural Attentive Circuits (arXiv, 2022-10-14)
We introduce a general-purpose yet modular neural architecture called Neural Attentive Circuits (NACs).
NACs learn the parameterization and a sparse connectivity of neural modules without using domain knowledge (see the sketch below).
NACs achieve an 8x speedup at inference time while losing less than 3% performance.
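As a generic, hypothetical illustration of learned sparse inter-module connectivity (not the NACs architecture itself, whose details the abstract does not give; module count, widths, and the top-k rule are invented), each module mixes the states of the k modules its learned score matrix rates highest:

```python
import torch

class SparseModuleLayer(torch.nn.Module):
    """Modules whose pairwise wiring is a learned score matrix,
    sparsified to the top-k incoming links per module."""

    def __init__(self, n_modules: int = 4, dim: int = 16, topk: int = 2):
        super().__init__()
        self.mods = torch.nn.ModuleList(
            torch.nn.Linear(dim, dim) for _ in range(n_modules))
        self.scores = torch.nn.Parameter(torch.zeros(n_modules, n_modules))
        self.topk = topk

    def forward(self, states):              # states: (n_modules, batch, dim)
        w = torch.softmax(self.scores, dim=-1)   # learned connectivity
        vals, idx = w.topk(self.topk, dim=-1)    # keep k links per module
        out = []
        for m, module in enumerate(self.mods):
            mixed = sum(v * states[i] for v, i in zip(vals[m], idx[m]))
            out.append(torch.relu(module(mixed)))
        return torch.stack(out)

layer = SparseModuleLayer()
y = layer(torch.randn(4, 2, 16))             # -> (4, 2, 16)
```

The score matrix, like the module weights, trains end to end, so the wiring is discovered rather than hand-designed.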
- Learn to Communicate with Neural Calibration: Scalability and Generalization (arXiv, 2021-10-01)
We propose a scalable and generalizable neural calibration framework for future wireless system design.
The proposed neural calibration framework is applied to solve challenging resource management problems in massive multiple-input multiple-output (MIMO) systems.
- Online Training of Spiking Recurrent Neural Networks with Phase-Change Memory Synapses (arXiv, 2021-08-04)
Training spiking recurrent neural networks (RNNs) on dedicated neuromorphic hardware is still an open challenge.
We present a simulation framework of differential-architecture arrays based on an accurate and comprehensive Phase-Change Memory (PCM) device model.
We train a spiking RNN whose weights are emulated in the presented simulation framework, using a recently proposed e-prop learning rule.
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers (arXiv, 2020-10-15)
Most of the work on feed-forward networks combining top-down and bottom-up feedback is limited to classification problems; Neural Function Modules (NFM) aim to introduce this structural capability into deep learning more broadly.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
- Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks (arXiv, 2020-10-05)
Understanding if and how NNs are modular could provide insights into how to improve them.
Current inspection methods, however, fail to link modules to their functionality (see the sketch below).
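A minimal sketch of the weight-masking idea under assumed details (not the paper's exact method): freeze a trained layer, learn one logit per weight, and multiply the weights by the resulting soft mask so gradient descent can reveal which weights a given task actually uses.

```python
import torch

class MaskedLinear(torch.nn.Module):
    def __init__(self, linear: torch.nn.Linear):
        super().__init__()
        self.weight = linear.weight.detach()   # frozen trained weights
        self.bias = linear.bias.detach() if linear.bias is not None else None
        self.mask_logits = torch.nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)       # soft mask in (0, 1)
        return torch.nn.functional.linear(x, self.weight * mask, self.bias)

masked = MaskedLinear(torch.nn.Linear(16, 16))
y = masked(torch.randn(2, 16))
# Train by minimising task loss plus a sparsity penalty, e.g.
#   loss = task_loss + lam * torch.sigmoid(masked.mask_logits).mean()
# Weights kept (mask near 1) for one task but dropped for another
# delineate functional modules.
```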
- Modeling from Features: a Mean-field Framework for Over-parameterized Deep Neural Networks (arXiv, 2020-07-03)
This paper proposes a new mean-field framework for over-parameterized deep neural networks (DNNs).
In this framework, a DNN is represented by probability measures and functions over its features in the continuous limit.
We illustrate the framework via the standard DNN and the Residual Network (Res-Net) architectures.
- An Ode to an ODE (arXiv, 2020-06-19)
We present a new paradigm for Neural ODE algorithms, called ODEtoODE, in which the time-dependent parameters of the main flow evolve according to a matrix flow on the orthogonal group O(d).
This nested system of two flows provides stable and effective training and provably solves the vanishing/exploding gradient problem (a generic sketch follows).
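A hedged sketch of what such a nested flow can look like in general form (f and the generator Omega are left abstract; this is not necessarily the paper's exact parameterisation):

```latex
\frac{dx}{dt} = f\big(W(t)\,x(t)\big), \qquad
\frac{dW}{dt} = W(t)\,\Omega(t), \qquad \Omega(t)^{\top} = -\Omega(t).
```

With a skew-symmetric generator, \(\frac{d}{dt}(W^{\top}W) = \Omega^{\top}W^{\top}W + W^{\top}W\,\Omega = 0\) whenever \(W^{\top}W = I\), so a flow started on O(d) stays on O(d); orthogonal matrices preserve norms, which is the mechanism behind the gradient-stability claim.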
- A Modular Neural Network Based Deep Learning Approach for MIMO Signal Detection (arXiv, 2020-04-01)
Artificial neural network (ANN)-assisted multiple-input multiple-output (MIMO) signal detection can be modeled as ANN-assisted lossy vector quantization (VQ).
We propose a novel modular neural network based approach, termed MNNet, where the whole network is formed by a set of pre-defined ANN modules.
Our simulation results show that the MNNet approach largely improves the deep-learning capacity with near-optimal performance in various cases.
This list is automatically generated from the titles and abstracts of the papers on this site.