ElegansNet: a brief scientific report and initial experiments
- URL: http://arxiv.org/abs/2304.13538v1
- Date: Thu, 6 Apr 2023 13:51:04 GMT
- Title: ElegansNet: a brief scientific report and initial experiments
- Authors: Francesco Bardozzo, Andrea Terlizzi, Pietro Liò, Roberto Tagliaferri
- Abstract summary: ElegansNet is a neural network that mimics real-world neuronal network circuitry.
It generates improved deep learning systems with a topology similar to natural networks.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This research report introduces ElegansNet, a neural network that mimics
real-world neuronal network circuitry, with the goal of better understanding
the interplay between connectome topology and deep learning systems. The
proposed approach utilizes the powerful representational capabilities of living
beings' neuronal circuitry to design and generate improved deep learning
systems with a topology similar to natural networks. The Caenorhabditis elegans
connectome is used as a reference due to its completeness, reasonable size, and
functional neuron classes annotations. It is demonstrated that the connectome
of simple organisms exhibits specific functional relationships between neurons,
and once transformed into learnable tensor networks and integrated into modern
architectures, it offers bio-plausible structures that efficiently solve
complex tasks. The performance of the models is demonstrated against randomly
wired networks and compared to artificial networks ranked on global benchmarks.
In the first case, ElegansNet outperforms randomly wired networks.
Interestingly, only randomly wired networks built on the Watts-Strogatz
small-world property reach performance close to that of ElegansNet. When compared to
state-of-the-art artificial neural networks, such as transformers or
attention-based autoencoders, ElegansNet outperforms well-known deep learning
and traditional models in both supervised image classification tasks and
unsupervised handwritten digit reconstruction, achieving top-1 accuracies of
99.99% on CIFAR-10 and 99.84% on MNIST Unsup on the validation sets.
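The report itself does not include code, but the core idea it describes (take a connectome graph, orient it into an acyclic wiring, and let each node become a small trainable block inside a modern architecture) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the random stand-in adjacency matrix, the block sizes, and the summed aggregation are placeholders rather than the authors' pipeline, and the Watts-Strogatz graph merely stands in for the small-world baseline the abstract compares against.

```python
# Illustrative sketch only: turn a (hypothetical) connectome adjacency matrix
# into a DAG-wired stack of small blocks, in the spirit of randomly-wired /
# bio-wired architectures. Not the authors' implementation.
import numpy as np
import networkx as nx
import torch
import torch.nn as nn

def wiring_graph_from_adjacency(adj: np.ndarray) -> nx.DiGraph:
    """Binarize a weighted adjacency matrix and orient every edge from the
    lower to the higher node index, so the wiring graph is acyclic."""
    g = nx.DiGraph()
    n = adj.shape[0]
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(n):
            if i != j and adj[i, j] > 0:
                u, v = (i, j) if i < j else (j, i)
                g.add_edge(u, v)
    return g

class WiredNet(nn.Module):
    """Each graph node is a small ReLU block; a node sums the outputs of its
    predecessors (source nodes read the raw input), and the sink nodes feed
    a linear readout."""
    def __init__(self, graph: nx.DiGraph, dim: int = 64, num_classes: int = 10):
        super().__init__()
        self.graph = graph
        self.order = list(nx.topological_sort(graph))
        self.blocks = nn.ModuleDict(
            {str(v): nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for v in graph.nodes}
        )
        self.readout = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        outputs = {}
        for v in self.order:
            preds = list(self.graph.predecessors(v))
            h = x if not preds else torch.stack([outputs[p] for p in preds]).sum(0)
            outputs[v] = self.blocks[str(v)](h)
        sinks = [v for v in self.graph.nodes if self.graph.out_degree(v) == 0]
        return self.readout(torch.stack([outputs[v] for v in sinks]).sum(0))

# Hypothetical usage: a connectome-derived wiring vs. a Watts-Strogatz baseline.
adj = (np.random.rand(16, 16) > 0.8).astype(float)    # stand-in for a real connectome matrix
bio_net = WiredNet(wiring_graph_from_adjacency(adj))
ws = nx.watts_strogatz_graph(16, k=4, p=0.25, seed=0)  # small-world baseline wiring
ws_net = WiredNet(wiring_graph_from_adjacency(nx.to_numpy_array(ws)))
print(bio_net(torch.randn(8, 64)).shape)                # torch.Size([8, 10])
```

Under this reading, the only difference between the bio-wired model and the baseline is the wiring graph itself, which is what the abstract's comparison against randomly wired networks isolates.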
Related papers
- Hybrid deep additive neural networks [0.0]
We introduce novel deep neural networks that incorporate the idea of additive regression.
Our neural networks share architectural similarities with Kolmogorov-Arnold networks but are based on simpler yet flexible activation and basis functions.
We derive their universal approximation properties and demonstrate their effectiveness through simulation studies and a real-data application.
arXiv Detail & Related papers (2024-11-14T04:26:47Z) - Coding schemes in neural networks learning classification tasks [52.22978725954347]
We investigate fully-connected, wide neural networks learning classification tasks.
We show that the networks acquire strong, data-dependent features.
Surprisingly, the nature of the internal representations depends crucially on the neuronal nonlinearity.
arXiv Detail & Related papers (2024-06-24T14:50:05Z) - Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z) - Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Functional Connectome: Approximating Brain Networks with Artificial Neural Networks [1.952097552284465]
We show that trained deep neural networks are able to capture the computations performed by synthetic biological networks with high accuracy.
We show that trained deep neural networks are able to perform zero-shot generalisation in novel environments.
Our study reveals a novel and promising direction in systems neuroscience, and can be expanded upon with a multitude of downstream applications.
arXiv Detail & Related papers (2022-11-23T13:12:13Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Creating Powerful and Interpretable Models with Regression Networks [2.2049183478692584]
We propose a novel architecture, Regression Networks, which combines the power of neural networks with the understandability of regression analysis.
We demonstrate that the models exceed the state-of-the-art performance of interpretable models on several benchmark datasets.
arXiv Detail & Related papers (2021-07-30T03:37:00Z) - Structure and Performance of Fully Connected Neural Networks: Emerging Complex Network Properties [0.8484871864277639]
Complex Network (CN) techniques are proposed to analyze the structure and performance of fully connected neural networks.
We build a dataset with 4 thousand models and their respective CN properties.
Our findings suggest that CN properties play a critical role in the performance of fully connected neural networks.
arXiv Detail & Related papers (2021-07-29T14:53:52Z) - Firefly Neural Architecture Descent: a General Approach for Growing Neural Networks [50.684661759340145]
Firefly neural architecture descent is a general framework for progressively and dynamically growing neural networks.
We show that firefly descent can flexibly grow networks both wider and deeper, and can be applied to learn accurate but resource-efficient neural architectures.
In particular, it learns networks that are smaller in size but have higher average accuracy than those learned by the state-of-the-art methods.
arXiv Detail & Related papers (2021-02-17T04:47:18Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks (a minimal sketch of the learnable-edge idea follows this list).
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
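As a rough sketch of the edge-level idea described in the last entry above (learnable parameters on the edges of a complete graph, trained end-to-end with the ordinary weights), the following toy module is illustrative only; the sigmoid gating, the node count, and the block design are assumptions, not that paper's implementation.

```python
# Illustrative sketch (not the paper's code): a stage whose nodes form a
# complete DAG; a full logit matrix is kept for simplicity, but only the
# upper-triangular entries (j < i) are used as edge gates. Each node
# aggregates its predecessors with sigmoid-gated edge weights, so the
# connectivity is learned by backpropagation along with the block weights.
import torch
import torch.nn as nn

class LearnableWiringStage(nn.Module):
    def __init__(self, num_nodes: int = 4, dim: int = 32):
        super().__init__()
        self.num_nodes = num_nodes
        self.edge_logits = nn.Parameter(torch.zeros(num_nodes, num_nodes))
        self.blocks = nn.ModuleList(
            [nn.Sequential(nn.Linear(dim, dim), nn.ReLU()) for _ in range(num_nodes)]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.edge_logits)   # edge "magnitudes" in (0, 1)
        states = []
        for i in range(self.num_nodes):
            if i == 0:
                h = x                              # first node reads the stage input
            else:
                h = sum(gates[j, i] * states[j] for j in range(i))
            states.append(self.blocks[i](h))
        return states[-1]

stage = LearnableWiringStage()
print(stage(torch.randn(8, 32)).shape)             # torch.Size([8, 32])
```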
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.