Modularity in NEAT Reinforcement Learning Networks
- URL: http://arxiv.org/abs/2205.06451v1
- Date: Fri, 13 May 2022 05:18:18 GMT
- Title: Modularity in NEAT Reinforcement Learning Networks
- Authors: Humphrey Munn, Marcus Gallagher
- Abstract summary: This paper shows that "NeuroEvolution of Augmenting Topologies" (NEAT) networks seem to rapidly increase in modularity over time.
Surprisingly, NEAT tends towards increasingly modular networks even when network fitness converges.
It was shown that the ideal level of network modularity in the explored parameter space is highly dependent on other network variables.
- Score: 4.9444321684311925
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modularity is essential to many well-performing structured systems, as it is
a useful means of managing complexity [8]. An analysis of modularity in neural
networks produced by machine learning algorithms can offer valuable insight
into the workings of such algorithms and how modularity can be leveraged to
improve performance. However, this property is often overlooked in the
neuroevolutionary literature, so the modular nature of many learning algorithms
is unknown. This property was assessed on the popular algorithm "NeuroEvolution
of Augmenting Topologies" (NEAT) for standard simulation benchmark control
problems due to NEAT's ability to optimise network topology. This paper shows
that NEAT networks seem to rapidly increase in modularity over time with the
rate and convergence dependent on the problem. Interestingly, NEAT tends
towards increasingly modular networks even when network fitness converges. It
was shown that the ideal level of network modularity in the explored parameter
space is highly dependent on other network variables, dispelling theories that
modularity has a straightforward relationship to network performance. This is
further proven in this paper by demonstrating that rewarding modularity
directly did not improve fitness.
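Since the abstract hinges on tracking how modular a NEAT genome's topology is over time, a minimal sketch may help make the quantity concrete. This is not the authors' code: it assumes Newman's modularity Q computed with NetworkX's greedy community detection over a hypothetical genome connection list, whereas the paper's exact metric and tooling are not specified in this summary.
```python
# Hypothetical sketch: quantify the modularity of a NEAT genome's topology.
# Assumes Newman's modularity Q via NetworkX greedy community detection.
import networkx as nx
from networkx.algorithms import community

def genome_modularity(connections):
    """connections: iterable of (src_node, dst_node, weight) tuples for the
    enabled links of a NEAT genome (hypothetical representation)."""
    G = nx.Graph()
    for src, dst, w in connections:
        # Use the absolute weight so strong inhibitory links still count as strong ties.
        G.add_edge(src, dst, weight=abs(w))
    parts = community.greedy_modularity_communities(G, weight="weight")
    return community.modularity(G, parts, weight="weight")

# Toy genome: two tightly connected clusters joined by one weak link.
links = [(0, 1, 0.8), (1, 2, -0.5), (0, 2, 0.7),   # cluster A
         (3, 4, 0.9), (4, 5, 0.6), (3, 5, -0.4),   # cluster B
         (2, 3, 0.1)]                               # weak bridge
print(genome_modularity(links))  # ~0.47: the two clusters are detected as modules
```
Under these assumptions, "rewarding modularity directly" would amount to adding such a Q score, scaled by some coefficient, to each genome's task fitness; the abstract reports that doing so did not improve fitness.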
Related papers
- Self-similarity Analysis in Deep Neural Networks [33.89368443432198]
Current research has found that some deep neural networks exhibit strong hierarchical self-similarity in feature representation or parameter distribution.
This paper proposes a complex network method based on the output features of hidden-layer neurons to investigate the self-similarity of feature networks constructed at different hidden layers.
arXiv Detail & Related papers (2025-07-23T09:01:53Z)
- Breaking Neural Network Scaling Laws with Modularity [8.482423139660153]
We show how the amount of training data required to generalize varies with the intrinsic dimensionality of a task's input.
We then develop a novel learning rule for modular networks to exploit this advantage.
arXiv Detail & Related papers (2024-09-09T16:43:09Z)
- Modular Growth of Hierarchical Networks: Efficient, General, and Robust Curriculum Learning [0.0]
We show that for a given classical, non-modular recurrent neural network (RNN), an equivalent modular network will perform better across multiple metrics.
We demonstrate that the inductive bias introduced by the modular topology is strong enough for the network to perform well even when the connectivity within modules is fixed.
Our findings suggest that gradual modular growth of RNNs could provide advantages for learning increasingly complex tasks on evolutionary timescales.
arXiv Detail & Related papers (2024-06-10T13:44:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Power-Enhanced Residual Network for Function Approximation and Physics-Informed Inverse Problems [0.0]
This paper introduces a novel neural network structure called the Power-Enhancing residual network.
It improves the network's ability to approximate both smooth and non-smooth functions in 2D and 3D settings.
Results emphasize the exceptional accuracy of the proposed Power-Enhancing residual network, particularly for non-smooth functions.
arXiv Detail & Related papers (2023-10-24T10:01:15Z)
- Efficient and Flexible Neural Network Training through Layer-wise Feedback Propagation [49.44309457870649]
We present Layer-wise Feedback Propagation (LFP), a novel training principle for neural network-like predictors.
LFP decomposes a reward to individual neurons based on their respective contributions to solving a given task.
Our method then implements a greedy approach reinforcing helpful parts of the network and weakening harmful ones.
arXiv Detail & Related papers (2023-08-23T10:48:28Z)
- ConCerNet: A Contrastive Learning Based Framework for Automated Conservation Law Discovery and Trustworthy Dynamical System Prediction [82.81767856234956]
This paper proposes a new learning framework named ConCerNet to improve the trustworthiness of the DNN based dynamics modeling.
We show that our method consistently outperforms the baseline neural networks in both coordinate error and conservation metrics.
arXiv Detail & Related papers (2023-02-11T21:07:30Z)
- Multi-agent Reinforcement Learning with Graph Q-Networks for Antenna Tuning [60.94661435297309]
The scale of mobile networks makes it challenging to optimize antenna parameters using manual intervention or hand-engineered strategies.
We propose a new multi-agent reinforcement learning algorithm to optimize mobile network configurations globally.
We empirically demonstrate the performance of the algorithm on an antenna tilt tuning problem and a joint tilt and power control problem in a simulated environment.
arXiv Detail & Related papers (2023-01-20T17:06:34Z)
- Learn to Communicate with Neural Calibration: Scalability and Generalization [10.775558382613077]
We propose a scalable and generalizable neural calibration framework for future wireless system design.
The proposed neural calibration framework is applied to solve challenging resource management problems in massive multiple-input multiple-output (MIMO) systems.
arXiv Detail & Related papers (2021-10-01T09:00:25Z)
- Redefining Neural Architecture Search of Heterogeneous Multi-Network Models by Characterizing Variation Operators and Model Components [71.03032589756434]
We investigate the effect of different variation operators in a complex domain, that of multi-network heterogeneous neural models.
We characterize both the variation operators, according to their effect on the complexity and performance of the model, and the models themselves, using diverse metrics that estimate the quality of their different components.
arXiv Detail & Related papers (2021-06-16T17:12:26Z)
- Neural Function Modules with Sparse Arguments: A Dynamic Approach to Integrating Information across Layers [84.57980167400513]
Neural Function Modules (NFM) aims to introduce the same structural capability into deep learning.
Most of the work in the context of feed-forward networks combining top-down and bottom-up feedback is limited to classification problems.
The key contribution of our work is to combine attention, sparsity, top-down and bottom-up feedback, in a flexible algorithm.
arXiv Detail & Related papers (2020-10-15T20:43:17Z)
- Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks [10.0444013205203]
Understanding if and how NNs are modular could provide insights into how to improve them.
Current inspection methods, however, fail to link modules to their functionality.
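(A minimal code sketch of this weight-mask idea appears after the related-papers list below.)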
arXiv Detail & Related papers (2020-10-05T15:04:11Z)
- Deep Multi-Task Learning for Cooperative NOMA: System Design and Principles [52.79089414630366]
We develop a novel deep cooperative NOMA scheme, drawing upon recent advances in deep learning (DL).
We develop a novel hybrid-cascaded deep neural network (DNN) architecture such that the entire system can be optimized in a holistic manner.
arXiv Detail & Related papers (2020-07-27T12:38:37Z)
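The "Are Neural Nets Modular? Inspecting Functional Modularity Through Differentiable Weight Masks" entry above links modules to functionality by training masks over a frozen network's weights on data for a single sub-task. Below is a minimal sketch of that general idea, assuming a sigmoid-relaxed per-weight mask with a sparsity penalty; the original work's exact mask parameterisation and training procedure may differ.
```python
# Hypothetical sketch of functional-modularity inspection via trainable weight
# masks: the trained weights are frozen and only per-weight mask logits are
# optimised on a single sub-task (plus a sparsity penalty), so the surviving
# weights indicate which "module" implements that sub-task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskedLinear(nn.Module):
    def __init__(self, linear: nn.Linear):
        super().__init__()
        # Frozen copies of the trained layer's parameters.
        self.weight = nn.Parameter(linear.weight.detach().clone(), requires_grad=False)
        self.bias = nn.Parameter(linear.bias.detach().clone(), requires_grad=False)
        # Trainable keep/drop logits, one per weight.
        self.mask_logits = nn.Parameter(torch.zeros_like(self.weight))

    def forward(self, x):
        mask = torch.sigmoid(self.mask_logits)  # soft relaxation of a binary mask
        return F.linear(x, self.weight * mask, self.bias)

    def sparsity_penalty(self):
        return torch.sigmoid(self.mask_logits).sum()

# Usage idea: wrap each layer of a trained network with MaskedLinear, then
# minimise  task_loss(sub_task_batch) + lam * sum(layer.sparsity_penalty())
# over the mask logits only; weights whose mask stays near 1 form that
# sub-task's module.
```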
This list is automatically generated from the titles and abstracts of the papers listed on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.