Naming Schema for a Human Brain-Scale Neural Network
- URL: http://arxiv.org/abs/2109.10951v1
- Date: Wed, 22 Sep 2021 18:14:47 GMT
- Title: Naming Schema for a Human Brain-Scale Neural Network
- Authors: Morgan Schaefer, Lauren Michelin, Jeremy Kepner
- Abstract summary: Groups of artificial neurons are able to be specifically labeled in small regions for future study.
Deep neural networks have become increasingly large and sparse, allowing for the storage of large-scale neural networks with decreased costs of storage and computation.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks have become increasingly large and sparse,
allowing large-scale neural networks to be stored with reduced storage and
computation costs. Storing a neural network with as many connections as the
human brain is possible with current versions of the high-performance Apache
Accumulo database and the Distributed Dimensional Data Model (D4M) software.
Neural networks of such large scale may be of particular interest to scientists
within the human brain Connectome community. To aid research on and
understanding of artificial neural networks that parallel existing neural
networks like the brain, a naming schema can be developed to label groups of
neurons in the artificial network that parallel those in the brain. Groups of
artificial neurons can then be specifically labeled in small regions for future
study.
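To make the idea concrete, the sketch below labels each artificial neuron with a hierarchical key and stores synapses in a sparse associative array keyed by those names, so a labeled region can be pulled out with a prefix scan. This is a minimal Python sketch under assumed conventions: the hierarchy levels, the "|" delimiter, and the dictionary standing in for the Accumulo/D4M associative array are hypothetical illustrations, not the schema from the paper.

    # Minimal sketch of a hierarchical naming schema for artificial neurons.
    # The hierarchy levels (hemisphere / lobe / region / index) and the
    # "|"-separated key format are hypothetical examples, not the paper's schema.

    def neuron_name(hemisphere, lobe, region, index):
        """Build a hierarchical string key for one artificial neuron."""
        return f"{hemisphere}|{lobe}|{region}|n{index:09d}"

    def region_prefix(hemisphere, lobe, region):
        """Prefix shared by all neurons in one labeled region."""
        return f"{hemisphere}|{lobe}|{region}|"

    # Sparse synapse table: (pre-neuron, post-neuron) -> weight. A plain dict
    # stands in for the distributed D4M/Accumulo associative array.
    synapses = {
        (neuron_name("L", "temporal", "hippocampus", 42),
         neuron_name("L", "temporal", "entorhinal", 7)): 0.83,
        (neuron_name("R", "frontal", "motor", 3),
         neuron_name("L", "temporal", "hippocampus", 42)): 0.17,
    }

    # Select every synapse leaving one labeled region for later study.
    prefix = region_prefix("L", "temporal", "hippocampus")
    outgoing = {k: w for k, w in synapses.items() if k[0].startswith(prefix)}
    print(outgoing)  # the single hippocampus -> entorhinal connection

In a key-ordered store such as Accumulo, the same prefix query becomes a contiguous range scan, which is why a hierarchical naming schema composes well with that storage layout.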
Related papers
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method enables continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
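For context, the principal-subspace claim in the entry above echoes classical Hebbian results such as Oja's rule; the sketch below is the textbook single-neuron Oja update, which converges to the first principal component, not the cited paper's spiking-network method.

    import numpy as np

    # Textbook Oja's rule: a Hebbian update plus a decay term whose weight
    # vector converges to the first principal component of the inputs. It
    # illustrates the principle only; the cited paper applies Hebbian and
    # anti-Hebbian learning to lateral connections in spiking networks.

    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 2)) * np.array([3.0, 0.5])  # variance 9 vs 0.25

    w = rng.normal(size=2)
    eta = 0.01
    for x in X:
        y = w @ x                   # Hebbian post-synaptic response
        w += eta * y * (x - y * w)  # Oja's rule: Hebb term minus norm decay

    print(w / np.linalg.norm(w))  # close to [+-1, 0], the top principal axis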
- Towards a Foundation Model for Brain Age Prediction using coVariance Neural Networks [102.75954614946258]
Increasing brain age with respect to chronological age can reflect increased vulnerability to neurodegeneration and cognitive decline.
NeuroVNN is pre-trained as a regression model on a healthy population to predict chronological age.
NeuroVNN adds anatomical interpretability to brain age and has a 'scale-free' characteristic that allows its transfer to datasets curated according to any arbitrary brain atlas.
arXiv Detail & Related papers (2024-02-12T14:46:31Z)
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
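To make "neuro-centric parameters" concrete, a minimal sketch follows in which every neuron carries its own activation parameters rather than one nonlinearity shared by the whole network; the per-neuron slope-and-shift parameterization is an invented example, not the paper's.

    import numpy as np

    # Sketch of neuro-centric parameters: every hidden neuron carries its own
    # activation parameters (here a per-neuron slope and shift inside tanh)
    # instead of a single nonlinearity shared by all neurons. The specific
    # parameterization is an illustrative choice, not the paper's.

    rng = np.random.default_rng(1)
    n_in, n_out = 3, 5
    W = rng.normal(scale=0.5, size=(n_in, n_out))
    slope = rng.uniform(0.5, 2.0, size=n_out)  # one activation slope per neuron
    shift = rng.normal(scale=0.1, size=n_out)  # one activation shift per neuron

    def diverse_layer(x):
        # Each output column is squashed by its own neuron-specific tanh.
        return np.tanh(slope * (x @ W) + shift)

    print(diverse_layer(rng.normal(size=(4, n_in))).shape)  # (4, 5)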
- Connected Hidden Neurons (CHNNet): An Artificial Neural Network for Rapid Convergence [0.6218519716921521]
We propose a more robust model of artificial neural networks in which hidden neurons residing in the same hidden layer are interconnected, leading to rapid convergence.
In an experimental study of the proposed model in deep networks, we demonstrate a noticeable increase in convergence rate compared to the conventional feed-forward neural network.
arXiv Detail & Related papers (2023-05-17T14:00:38Z)
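The general idea of interconnecting neurons within one hidden layer can be sketched as an extra intra-layer weight matrix applied to the layer's own activations; the wiring below is a plausible reading of that pattern, not the exact CHNNet formulation.

    import numpy as np

    # Sketch of a hidden layer whose neurons also receive each other's
    # activations through an intra-layer weight matrix H (zero diagonal, so
    # no neuron feeds itself). A generic illustration of intra-layer
    # connectivity, not the exact CHNNet formulation.

    rng = np.random.default_rng(2)
    n_in, n_hidden = 4, 8
    W = rng.normal(scale=0.5, size=(n_in, n_hidden))      # input -> hidden
    H = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden <-> hidden
    np.fill_diagonal(H, 0.0)
    b = np.zeros(n_hidden)

    def hidden_forward(x):
        a = np.tanh(x @ W + b)             # ordinary feed-forward pass
        return np.tanh(x @ W + a @ H + b)  # second pass adds lateral input

    print(hidden_forward(rng.normal(size=(2, n_in))).shape)  # (2, 8)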
- Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks [20.99799416963467]
In the human brain, neuronal diversity is an enabling factor for all kinds of biological intelligent behaviors.
In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron.
arXiv Detail & Related papers (2023-01-23T02:23:45Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Deep Reinforcement Learning Guided Graph Neural Networks for Brain Network Analysis [61.53545734991802]
We propose a novel brain network representation framework, namely BN-GNN, which searches for the optimal GNN architecture for each brain network.
Our proposed BN-GNN improves the performance of traditional GNNs on different brain network analysis tasks.
arXiv Detail & Related papers (2022-03-18T07:05:27Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference-processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
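The quadratic integrate-and-fire (QIF) model named in the title above is standard: dv/dt = v^2 + I, with a spike and reset when the membrane potential reaches a peak. The integer (fixed-point) Euler discretization below sketches how such a neuron can be computed without floating point; the scale factor, thresholds, and time step are illustrative choices, not POPPINS parameters.

    # Generic quadratic integrate-and-fire (QIF) neuron in integer arithmetic.
    # Dynamics (dimensionless): dv/dt = v**2 + I, with a spike and reset at a
    # peak. SCALE, thresholds, and the time step below are illustrative
    # fixed-point choices, not parameters of the POPPINS processor.

    SCALE = 256            # fixed-point scale: v_int = round(v * SCALE)
    V_PEAK = 10 * SCALE    # spike threshold
    V_RESET = -2 * SCALE   # post-spike reset potential

    def qif_step(v, i_in, dt_shift=4):
        """One integer Euler step with time step dt = 2**-dt_shift."""
        v = v + (((v * v) // SCALE + i_in) >> dt_shift)
        if v >= V_PEAK:
            return V_RESET, True  # spike emitted, membrane reset
        return v, False

    # Drive one neuron with a constant supra-threshold current, count spikes.
    v, spikes = V_RESET, 0
    for _ in range(2000):
        v, fired = qif_step(v, i_in=2 * SCALE)  # I = 2 in real units
        spikes += fired
    print(spikes)  # a steady spike train under constant drive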
- Neural Networks, Artificial Intelligence and the Computational Brain [0.0]
This study explores the concept of ANNs as a simulator of the biological neuron.
It also explores why brain-like intelligence is needed and how it differs from the computational framework.
arXiv Detail & Related papers (2020-12-25T05:56:41Z)