Neuronal diversity can improve machine learning for physics and beyond
- URL: http://arxiv.org/abs/2204.04348v3
- Date: Thu, 31 Aug 2023 00:23:22 GMT
- Title: Neuronal diversity can improve machine learning for physics and beyond
- Authors: Anshul Choudhary, Anil Radhakrishnan, John F. Lindner, Sudeshna Sinha, William L. Ditto
- Abstract summary: We construct neural networks from neurons that learn their own activation functions.
Sub-networks instantiate the neurons, which meta-learn especially efficient sets of nonlinear responses.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Diversity conveys advantages in nature, yet homogeneous neurons typically
comprise the layers of artificial neural networks. Here we construct neural
networks from neurons that learn their own activation functions, quickly
diversify, and subsequently outperform their homogeneous counterparts on image
classification and nonlinear regression tasks. Sub-networks instantiate the
neurons, which meta-learn especially efficient sets of nonlinear responses.
Examples include conventional neural networks classifying digits and
forecasting a van der Pol oscillator, and physics-informed Hamiltonian neural
networks learning Hénon-Heiles stellar orbits and the swing of a video-recorded
pendulum clock. Such learned diversity provides examples of
dynamical systems selecting diversity over uniformity and elucidates the role
of diversity in natural and artificial systems.
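The construction invites a compact illustration. Below is a minimal PyTorch sketch, not the authors' code: each neuron's activation function is itself a small trainable sub-network applied elementwise, so the neurons of a layer can diversify their nonlinear responses under ordinary gradient descent. The names ActivationSubNet and DiverseLayer, the sub-network width, and the toy usage are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ActivationSubNet(nn.Module):
    """A small sub-network serving as one neuron's learnable
    activation function, applied elementwise to its pre-activation."""
    def __init__(self, hidden: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(1, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch,) pre-activations of a single neuron
        return self.net(z.unsqueeze(-1)).squeeze(-1)

class DiverseLayer(nn.Module):
    """A fully connected layer whose neurons each own a distinct
    learned activation, so they can diversify during training."""
    def __init__(self, in_features: int, out_features: int):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.activations = nn.ModuleList(
            ActivationSubNet() for _ in range(out_features)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.linear(x)                      # (batch, out_features)
        cols = [act(z[:, j]) for j, act in enumerate(self.activations)]
        return torch.stack(cols, dim=1)

# The activation parameters are ordinary weights, so the nonlinear
# responses diversify under standard gradient descent.
model = nn.Sequential(DiverseLayer(2, 16), DiverseLayer(16, 1))
y = model(torch.randn(32, 2))                   # (32, 1)
```

In the paper the sub-networks meta-learn especially efficient sets of responses; this sketch only makes each neuron's response independently trainable, which is the simplest route to the same kind of diversification.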
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Expressivity of Neural Networks with Random Weights and Learned Biases [44.02417750529102]
Recent work has pushed the bounds of universal approximation by showing that arbitrary functions can be learned by tuning only small subsets of parameters.
We provide theoretical and numerical evidence demonstrating that feedforward neural networks with fixed random weights can be trained to perform multiple tasks by learning biases only.
Our results are relevant to neuroscience, where they demonstrate the potential for behaviourally relevant changes in dynamics without modifying synaptic weights.
arXiv Detail & Related papers (2024-07-01T04:25:49Z)
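The biases-only result above is easy to reproduce in miniature. A hedged PyTorch sketch, assuming a standard feedforward network and a toy regression task: the weights stay frozen at their random initialization and only the bias vectors receive gradients.

```python
import torch
import torch.nn as nn

# Freeze all weights at their random initialization and train only
# the biases, in the spirit of the expressivity result above.
model = nn.Sequential(
    nn.Linear(10, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, 1),
)
for name, p in model.named_parameters():
    p.requires_grad = name.endswith("bias")   # biases trainable, weights fixed

opt = torch.optim.Adam(
    [p for p in model.parameters() if p.requires_grad], lr=1e-2
)

x = torch.randn(512, 10)                      # toy regression data
y = torch.sin(x.sum(dim=1, keepdim=True))     # arbitrary smooth target
for step in range(200):
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()
print(loss.item())                            # decreases despite fixed weights
```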
- Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z)
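One way to probe the nonlinearity dependence reported above is to train two networks identical except for the activation function and compare their hidden representations. A sketch under that assumption, using linear centered kernel alignment (CKA) as the similarity score; the XOR-like task and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

def linear_cka(X: torch.Tensor, Y: torch.Tensor) -> torch.Tensor:
    # Linear centered kernel alignment between two (n, d) feature maps.
    X = X - X.mean(dim=0)
    Y = Y - Y.mean(dim=0)
    return (X.T @ Y).norm() ** 2 / ((X.T @ X).norm() * (Y.T @ Y).norm())

def hidden_features(nonlin: nn.Module, x, y, steps: int = 300):
    # Train a small classifier, then return its hidden-layer activations.
    net = nn.Sequential(nn.Linear(2, 64), nonlin, nn.Linear(64, 2))
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(net(x), y).backward()
        opt.step()
    return net[1](net[0](x)).detach()

torch.manual_seed(0)
x = torch.randn(256, 2)
y = (x[:, 0] * x[:, 1] > 0).long()          # XOR-like labels
h_relu = hidden_features(nn.ReLU(), x, y)
h_tanh = hidden_features(nn.Tanh(), x, y)
print(linear_cka(h_relu, h_tanh))           # < 1: features depend on nonlinearity
```

A CKA value well below 1 indicates that the two activation functions induce substantially different internal representations of the same data.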
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
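The parameter-graph idea above can be made concrete for a small MLP: one node per neuron with the bias as the node feature, and one directed edge per weight with the weight as the edge feature. A minimal sketch of such an encoding follows; mlp_to_graph is an illustrative name, and a graph neural network (not shown) would consume the output. Because permuting hidden neurons relabels graph nodes without changing the function, processing this graph equivariantly is what motivates the GNN approach.

```python
import torch
import torch.nn as nn

def mlp_to_graph(mlp: nn.Sequential):
    """Encode an MLP as a graph: one node per neuron (bias as the node
    feature) and one directed edge per weight (weight as the edge feature)."""
    linears = [m for m in mlp if isinstance(m, nn.Linear)]
    sizes = [linears[0].in_features] + [m.out_features for m in linears]
    offsets = [0]
    for s in sizes[:-1]:
        offsets.append(offsets[-1] + s)       # first node index of each layer
    node_feat = torch.zeros(sum(sizes))
    src, dst, edge_feat = [], [], []
    for k, lin in enumerate(linears):
        node_feat[offsets[k+1]: offsets[k+1] + sizes[k+1]] = lin.bias.detach()
        for j in range(lin.out_features):     # target neuron
            for i in range(lin.in_features):  # source neuron
                src.append(offsets[k] + i)
                dst.append(offsets[k+1] + j)
                edge_feat.append(lin.weight[j, i].item())
    edge_index = torch.tensor([src, dst])     # (2, num_edges)
    return node_feat, edge_index, torch.tensor(edge_feat)

mlp = nn.Sequential(nn.Linear(3, 4), nn.ReLU(), nn.Linear(4, 2))
nodes, edge_index, edge_feat = mlp_to_graph(mlp)
print(nodes.shape, edge_index.shape, edge_feat.shape)  # 9 nodes, 20 edges
```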
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted into an activation function that is usually shared among all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
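A hedged sketch of the neuro-centric idea above: the connection weights stay fixed and random, while each hidden neuron owns a few activation parameters (a gain, a shift, and a shape that blends tanh with sine) tuned by a simple evolution strategy. The parameterization, task, and ES variant are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np

# Fixed random weights; each hidden neuron owns three "neuro-centric"
# activation parameters (gain a, shift b, shape s) tuned by a simple
# evolution strategy instead of backpropagation.
rng = np.random.default_rng(0)
N = 16
W_in = rng.normal(size=(N, 1))            # fixed random weights
W_out = rng.normal(size=(1, N))

def forward(params, x):
    # params: (N, 3) holding per-neuron gain a, shift b, shape s
    a, b, s = params[:, :1], params[:, 1:2], params[:, 2:3]
    z = W_in @ x                          # (N, batch)
    h = s * np.tanh(a * z + b) + (1 - s) * np.sin(a * z + b)
    return W_out @ h                      # (1, batch)

x = np.linspace(-2, 2, 64)[None, :]
y = np.sin(3 * x) * x                     # toy regression target

def loss(params):
    return np.mean((forward(params, x) - y) ** 2)

# OpenAI-ES-style finite-difference gradient estimate over the
# neuro-centric parameters only.
params = rng.normal(size=(N, 3)) * 0.5
sigma, lr, pop = 0.1, 0.2, 64
for gen in range(300):
    eps = rng.normal(size=(pop, N, 3))
    losses = np.array([loss(params + sigma * e) for e in eps])
    advantage = (losses - losses.mean()) / (losses.std() + 1e-8)
    grad = (advantage[:, None, None] * eps).mean(axis=0) / sigma
    params -= lr * grad                   # descend the estimated gradient
print(loss(params))
```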
- Dive into the Power of Neuronal Heterogeneity [8.6837371869842]
We show the challenges faced by backpropagation-based methods in optimizing Spiking Neural Networks (SNNs) and achieve more robust optimization of heterogeneous neurons in random networks using an Evolutionary Strategy (ES).
We find that membrane time constants play a crucial role in neural heterogeneity, and their distribution is similar to that observed in biological experiments.
arXiv Detail & Related papers (2023-05-19T07:32:29Z)
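Because the entry above centers on membrane time constants, a small simulation helps fix ideas. A NumPy sketch assuming standard leaky integrate-and-fire dynamics, with each neuron's tau drawn from a long-tailed lognormal distribution; all parameter values are illustrative.

```python
import numpy as np

# A leaky integrate-and-fire population with heterogeneous membrane
# time constants; the lognormal spread (an assumption here) mimics the
# long-tailed distributions reported in biological experiments.
rng = np.random.default_rng(0)
N, T, dt = 100, 200, 1.0                        # neurons, steps, step (ms)
tau = rng.lognormal(mean=np.log(20.0), sigma=0.5, size=N)  # per-neuron tau (ms)
v = np.zeros(N)                                 # membrane potentials
v_th, v_reset = 1.0, 0.0
spikes = np.zeros((T, N))
for t in range(T):
    I = rng.normal(loc=1.2, scale=0.5, size=N)  # noisy input drive
    v += (dt / tau) * (I - v)                   # leaky integration toward I
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = v_reset                          # reset spiking neurons
rates = spikes.mean(axis=0)                     # firing rate per neuron
print(rates.min(), rates.max())                 # heterogeneity shows in the spread
```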
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
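Analyzing neuron diversity, as the entry above does, requires a concrete measure. One simple choice, sketched below with a hypothetical neuron_diversity helper, scores a hidden layer by the mean absolute cosine similarity between its neurons' activation patterns over a batch; the paper's exact metric may differ.

```python
import torch
import torch.nn as nn

def neuron_diversity(h: torch.Tensor) -> torch.Tensor:
    """Diversity of hidden neurons: 1 minus the mean absolute cosine
    similarity between the activation patterns of neuron pairs.
    h: (batch, neurons) hidden activations."""
    h = h - h.mean(dim=0)                        # center each neuron
    h = nn.functional.normalize(h.T, dim=1)      # (neurons, batch), unit rows
    sim = h @ h.T                                # pairwise cosine similarities
    n = sim.shape[0]
    off_diag = sim[~torch.eye(n, dtype=torch.bool)]
    return 1 - off_diag.abs().mean()

# Usage: score the hidden layer of a small network on a batch.
net = nn.Sequential(nn.Linear(10, 32), nn.Tanh())
x = torch.randn(128, 10)
print(neuron_diversity(net(x)))
```

Tracking this quantity during training, or adding it to the loss as a regularizer, is one way to study how diversity relates to the model's predictions.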
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to using techniques such as genetic algorithms to obtain the best-suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.