The Evolution of Learning Algorithms for Artificial Neural Networks
- URL: http://arxiv.org/abs/2512.01203v1
- Date: Mon, 01 Dec 2025 02:38:40 GMT
- Title: The Evolution of Learning Algorithms for Artificial Neural Networks
- Authors: Jonathan Baxter
- Abstract summary: We investigate a neural network model in which weights between computational nodes are genetically modified according to a local learning rule. We show how learning behaviour emerges as a distributed property of the entire network. Finally, the utility of genetic algorithms as a tool of discovery is discussed.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we investigate a neural network model in which weights between computational nodes are modified according to a local learning rule. To determine whether local learning rules are sufficient for learning, we encode the network architectures and learning dynamics genetically and then apply selection pressure to evolve networks capable of learning the four boolean functions of one variable. The successful networks are analysed and we show how learning behaviour emerges as a distributed property of the entire network. Finally, the utility of genetic algorithms as a tool of discovery is discussed.
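As a concrete illustration of this kind of setup, here is a minimal Python sketch, not the paper's actual encoding: a genome parameterizes a local, Hebbian-style weight update, and a genetic algorithm selects rules under which a single unit can learn all four boolean functions of one variable (constant 0, constant 1, identity, negation). The rule form, teacher signal, and GA details are illustrative assumptions.
```python
import random

# The four boolean functions of one variable: constant 0, constant 1,
# identity, and negation.
TASKS = [lambda x: 0, lambda x: 1, lambda x: x, lambda x: 1 - x]

def act(z):
    return 1 if z > 0.0 else 0

def learn_task(genome, task, steps=40):
    """Train one unit (weight w plus bias) with a local rule whose
    coefficients come from the genome, then return its accuracy."""
    a, b, c, d, eta = genome
    w, bias = 0.0, 0.0
    for _ in range(steps):
        x = random.randint(0, 1)
        t = task(x)                  # teacher signal, available locally
        y = act(w * x + bias)
        # Local update: uses only pre-synaptic x, post-synaptic y, teacher t.
        w += eta * (a * x * t + b * x * y + c * x + d)
        bias += eta * (a * t + b * y + c + d)  # bias as weight from input 1
    return sum(act(w * x + bias) == task(x) for x in (0, 1)) / 2.0

def fitness(genome):
    # A rule is fit if the *same* rule lets the unit learn all four tasks.
    return sum(learn_task(genome, t) for t in TASKS) / len(TASKS)

def evolve(pop_size=50, generations=100):
    pop = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [[g + random.gauss(0, 0.1) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print("best rule:", [round(g, 3) for g in best], "fitness:", fitness(best))
```
Note that fitness rewards the learning rule, not any fixed set of weights: each evaluation retrains the unit from scratch, which is what lets "learning to learn" emerge under selection.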
Related papers
- Shrinkage Initialization for Smooth Learning of Neural Networks [14.176838845627744]
An improved approach to neural learning is presented, which adopts a shrinkage approach to initialize the transformation of each layer of a network. It can be universally adapted to any network structure with random layers while attaining stable performance.
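The abstract gives no formula; one plausible reading, purely an assumption for illustration, is to shrink a random initial weight matrix toward a structured target such as the identity:
```python
import numpy as np

def shrinkage_init(fan_in, fan_out, alpha=0.3, rng=None):
    """Hypothetical shrinkage-style initializer: blend a random Gaussian
    matrix with a structured target (identity-like). alpha is the
    shrinkage intensity: 0 -> pure random init, 1 -> pure target."""
    rng = rng or np.random.default_rng()
    random_part = rng.normal(0.0, np.sqrt(2.0 / fan_in), (fan_out, fan_in))
    target = np.eye(fan_out, fan_in)   # assumed shrinkage target
    return (1.0 - alpha) * random_part + alpha * target

W = shrinkage_init(64, 64)   # one layer's initial transformation
```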
arXiv Detail & Related papers (2025-04-12T07:11:35Z)
- Feature Learning beyond the Lazy-Rich Dichotomy: Insights from Representational Geometry [7.517013801971377]
Integrating task-relevant information into neural representations is a fundamental ability of both biological and artificial intelligence systems. Recent theories have categorized learning into two regimes: the rich regime, where neural networks actively learn task-relevant features, and the lazy regime, where networks behave like random feature models. We introduce an analysis framework to study feature learning via the geometry of neural representations.
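For background on the dichotomy (this is the standard output-scaling trick from the lazy-training literature, not this paper's geometric framework): scaling the centered network output by a large factor alpha keeps the weights near initialization (lazy), while a small alpha forces feature learning (rich).
```python
import torch
import torch.nn as nn

def weight_drift(alpha, steps=300):
    """Train y = alpha * (f(x) - f0(x)) on a toy task. Large alpha gives
    near-linearized 'lazy' dynamics (weights barely move); small alpha
    forces 'rich' feature learning."""
    torch.manual_seed(0)
    model = nn.Sequential(nn.Linear(2, 256), nn.ReLU(), nn.Linear(256, 1))
    x = torch.randn(128, 2)
    y = x[:, :1] * x[:, 1:2]            # toy nonlinear target
    f0 = model(x).detach()              # center so the output starts at 0
    init = [p.detach().clone() for p in model.parameters()]
    opt = torch.optim.SGD(model.parameters(), lr=0.5)
    for _ in range(steps):
        loss = ((alpha * (model(x) - f0) - y) ** 2).mean() / alpha ** 2
        opt.zero_grad(); loss.backward(); opt.step()
    return float(sum((p - p0).norm() for p, p0 in zip(model.parameters(), init)))

for a in (0.1, 1.0, 10.0):
    print(f"alpha={a}: weight drift={weight_drift(a):.3f}")
```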
arXiv Detail & Related papers (2025-03-23T15:39:56Z)
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
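A minimal sketch of the underlying idea, under the assumption that "computational graph of parameters" means one node per neuron and one directed edge per weight (the paper's actual featurization and equivariant layers are more involved):
```python
import numpy as np

def mlp_to_graph(weights, biases):
    """Encode an MLP as a graph: one node per neuron (bias as the node
    feature), one directed edge per weight (the weight as edge feature)."""
    sizes = [weights[0].shape[1]] + [W.shape[0] for W in weights]
    offsets = np.cumsum([0] + sizes)             # node-id range per layer
    node_feat = np.concatenate([np.zeros(sizes[0])] + list(biases))
    edges, edge_feat = [], []
    for layer, W in enumerate(weights):
        for i in range(W.shape[1]):              # source neuron
            for j in range(W.shape[0]):          # target neuron
                edges.append((offsets[layer] + i, offsets[layer + 1] + j))
                edge_feat.append(W[j, i])
    return np.array(edges), np.array(edge_feat), node_feat

rng = np.random.default_rng(0)
Ws = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]   # a 3-4-2 MLP
bs = [rng.normal(size=4), rng.normal(size=2)]
edges, e_feat, n_feat = mlp_to_graph(Ws, bs)
print(edges.shape, e_feat.shape, n_feat.shape)   # (20, 2) (20,) (9,)
```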
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- When Deep Learning Meets Polyhedral Theory: A Survey [5.59187625600025]
In the past decade, deep learning became the prevalent methodology for predictive modeling thanks to the remarkable accuracy of deep neural networks. Meanwhile, the structure of neural networks converged back to simpler piecewise linear functions.
arXiv Detail & Related papers (2023-04-29T11:46:53Z)
- How and what to learn: The modes of machine learning [7.085027463060304]
We propose a new approach, namely weight pathway analysis (WPA), to study the mechanism of multilayer neural networks.
WPA shows that a neural network stores and utilizes information in a "holographic" way, that is, the network encodes all training samples in a coherent structure.
It is found that hidden-layer neurons self-organize into different classes in the later stages of the learning process.
arXiv Detail & Related papers (2022-02-28T14:39:06Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning Structures for Deep Neural Networks [99.8331363309895]
We propose to adopt the efficient coding principle, rooted in information theory and developed in computational neuroscience.
We show that sparse coding can effectively maximize the entropy of the output signals.
Our experiments on a public image classification dataset demonstrate that using the structure learned from scratch by our proposed algorithm, one can achieve a classification accuracy comparable to the best expert-designed structure.
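For background on the sparse-coding ingredient (a generic ISTA solver, not this paper's structure-learning algorithm), codes Z for data X under a dictionary D can be obtained by minimizing a reconstruction term plus an L1 penalty:
```python
import numpy as np

def ista(X, D, lam=0.1, iters=200):
    """Generic ISTA for sparse coding: find codes Z minimizing
    0.5 * ||X - D @ Z||_F^2 + lam * ||Z||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    Z = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        Z = Z - D.T @ (D @ Z - X) / L    # gradient step on the quadratic
        Z = np.sign(Z) * np.maximum(np.abs(Z) - lam / L, 0.0)  # soft-threshold
    return Z

rng = np.random.default_rng(0)
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
X = rng.normal(size=(32, 10))
Z = ista(X, D)
print("avg nonzeros per code:", (np.abs(Z) > 1e-8).sum(axis=0).mean())
```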
arXiv Detail & Related papers (2021-05-27T12:27:24Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective that represents a network as a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
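A toy differentiable version of this idea (illustrative only; the paper's formulation over a complete graph differs in detail): give every edge of a small DAG a learnable scalar gate so the wiring is trained jointly with the ordinary weights.
```python
import torch
import torch.nn as nn

class GatedDAG(nn.Module):
    """Nodes in a complete DAG; each edge i -> j (i < j) carries a learnable
    scalar whose sigmoid scales the connection, so the wiring is optimized
    by gradient descent together with the ordinary weights."""
    def __init__(self, n_nodes=5, dim=16):
        super().__init__()
        self.n = n_nodes
        self.ops = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_nodes))
        self.gate = nn.Parameter(torch.zeros(n_nodes, n_nodes))  # edge logits

    def forward(self, x):
        h = [self.ops[0](x)]
        for j in range(1, self.n):
            # Sum over all predecessors, weighted by differentiable gates.
            agg = sum(torch.sigmoid(self.gate[i, j]) * h[i] for i in range(j))
            h.append(torch.relu(self.ops[j](agg)))
        return h[-1]

net = GatedDAG()
out = net(torch.randn(8, 16))   # gates receive gradients like any parameter
```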
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Neural networks adapting to datasets: learning network size and topology [77.34726150561087]
We introduce a flexible setup that allows a neural network to learn both its size and topology during gradient-based training.
The resulting network has the structure of a graph tailored to the particular learning task and dataset.
arXiv Detail & Related papers (2020-06-22T12:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.