A Concept-Value Network as a Brain Model
- URL: http://arxiv.org/abs/1904.04579v6
- Date: Thu, 26 Sep 2024 10:53:16 GMT
- Title: A Concept-Value Network as a Brain Model
- Authors: Kieran Greer
- Abstract summary: This paper suggests a statistical framework for describing the relations between the physical and conceptual entities of a brain-like model.
The paper suggests that features may be the electrical wiring, although chemical connections are also possible.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper suggests a statistical framework for describing the relations between the physical and conceptual entities of a brain-like model. Features and concept instances are put into context, where the paper suggests that features may be the electrical wiring, although chemical connections are also possible. With this idea, the actual length of the connection is important, because it is related to firing rates and neuron synchronization, but the signal type is less important. The paper then suggests that concepts are neuron groups that link feature sets and concept instances are determined by chemical signals from those groups. Therefore, features become the static horizontal framework of the neural system and concepts are vertically interconnected combinations of these. With regards to functionality, the neuron is then considered to be functional and the more horizontal memory structures can even be glial. This would also suggest that features can be distributed entities and not concentrated to a single area. Another aspect could be signal 'breaks' that compartmentalise a pattern and may help with neural binding.
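Read literally, the abstract describes a two-level structure: a static "horizontal" layer of features (the wiring) and "vertical" concepts that group feature sets, with a concept instance produced when enough of a group's features fire together. The toy sketch below is only one possible reading of that structure, not the paper's implementation; every class, field, and threshold is invented for illustration.

```python
# Toy sketch of the concept-value idea in the abstract: features form a
# static "horizontal" layer, concepts are groups that link feature sets,
# and a concept instance appears when enough of a group's features are
# active. All names and thresholds here are illustrative assumptions;
# the paper does not define a concrete API.
from dataclasses import dataclass, field


@dataclass
class Feature:
    name: str
    length: float        # connection length, which the paper ties to firing rate / synchronization
    active: bool = False


@dataclass
class Concept:
    name: str
    feature_names: set = field(default_factory=set)  # the feature set this group links

    def instance(self, features: dict, threshold: float = 0.75):
        """Return a concept instance (a value) if enough linked features are active."""
        linked = [features[n] for n in self.feature_names if n in features]
        if not linked:
            return None
        ratio = sum(f.active for f in linked) / len(linked)
        return {"concept": self.name, "strength": ratio} if ratio >= threshold else None


# Horizontal framework: distributed features, not tied to one area.
features = {n: Feature(n, length=l) for n, l in [("edge", 1.0), ("red", 2.5), ("round", 1.5)]}
# Vertical combination: a concept built from a subset of the features.
apple = Concept("apple", {"red", "round"})

for n in ("red", "round"):
    features[n].active = True
print(apple.instance(features))   # -> {'concept': 'apple', 'strength': 1.0}
```

The length field only records the abstract's point that connection length (via firing rates and synchronization) matters more than signal type; nothing in the sketch depends on it.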
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
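The entry names Kuramoto oscillators but does not give AKOrN's update rule; for orientation, here is a minimal sketch of the classical Kuramoto phase dynamics the name refers to (the standard model, not the authors' formulation).

```python
# Classical Kuramoto phase dynamics (a reference point for "Kuramoto
# oscillatory neurons"; AKOrN's actual formulation is not given here).
# dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)
import numpy as np

def kuramoto_step(theta, omega, K=1.0, dt=0.01):
    """One Euler step for N coupled phase oscillators."""
    N = theta.size
    coupling = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)  # sum_j sin(theta_j - theta_i)
    return theta + dt * (omega + (K / N) * coupling)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 16)   # initial phases
omega = rng.normal(0.0, 0.5, 16)        # natural frequencies
for _ in range(1000):
    theta = kuramoto_step(theta, omega, K=2.0)
# Order parameter r in [0, 1]: r -> 1 means the oscillators synchronize.
print(abs(np.exp(1j * theta).mean()))
```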
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Brain functions emerge as thermal equilibrium states of the connectome [0.3453002745786199]
A fundamental paradigm in neuroscience is that cognitive functions are shaped by the brain's structural organization.
We propose a framework in which the functional states of a structural connectome emerge as thermal equilibrium states of an algebraic quantum system.
We apply this framework to the connectome of the nematode *Caenorhabditis elegans*.
arXiv Detail & Related papers (2024-08-26T12:35:16Z)
- Information Processing by Neuron Populations in the Central Nervous System: Mathematical Structure of Data and Operations [0.0]
In the intricate architecture of the mammalian central nervous system, neurons form populations.
These neuron populations' precise encoding and operations have yet to be discovered.
This work illuminates the potential of matrix embeddings in advancing our understanding in fields like cognitive science and AI.
arXiv Detail & Related papers (2023-09-05T15:52:45Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
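The summary describes cross-frequency coupling only in words; below is a small, self-contained toy signal showing what theta-gamma phase-amplitude coupling means, with every frequency and modulation value invented and no connection to the paper's actual model.

```python
# Toy illustration of theta-gamma phase-amplitude coupling (not the
# paper's model): a slow theta rhythm whose phase modulates the
# amplitude of a faster gamma rhythm. All parameters are invented.
import numpy as np

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 2.0, 1.0 / fs)               # 2 s of signal
f_theta, f_gamma, depth = 6.0, 40.0, 0.8      # theta Hz, gamma Hz, modulation depth

theta_phase = 2 * np.pi * f_theta * t
gamma_env = 1.0 + depth * np.cos(theta_phase)            # gamma amplitude follows theta phase
signal = np.cos(theta_phase) + gamma_env * np.cos(2 * np.pi * f_gamma * t)

# Crude coupling check: gamma amplitude binned by theta phase should be
# non-uniform when coupling is present.
bins = np.digitize(theta_phase % (2 * np.pi), np.linspace(0, 2 * np.pi, 9)) - 1
amp = np.abs(gamma_env * np.cos(2 * np.pi * f_gamma * t))
mean_amp = np.array([amp[bins == b].mean() for b in range(8)])
print(mean_amp.round(2))   # peaks near theta phase 0, dips near pi
```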
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
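The entry names the neuron model (integer quadratic integrate-and-fire) but gives no hardware details; the sketch below is a generic fixed-point QIF update of the kind such a processor might run, with all constants invented.

```python
# Generic integer quadratic integrate-and-fire (QIF) neuron update, of
# the kind a digital spiking processor might implement. The continuous
# model is roughly dv/dt = a*(v - v_rest)*(v - v_crit) + I; here it is
# discretized with integer-only arithmetic. All constants are invented.

def qif_step(v, i_in, v_rest=-10, v_crit=10, v_thresh=100, v_reset=-20, scale=64):
    """One integer time step; returns (new_membrane_value, spiked)."""
    dv = ((v - v_rest) * (v - v_crit)) // scale + i_in   # quadratic drive, integer division
    v = v + dv
    if v >= v_thresh:                                    # threshold crossing -> spike and reset
        return v_reset, True
    return v, False

v, spikes = -10, 0
for step in range(200):
    v, fired = qif_step(v, i_in=3)
    spikes += fired
print(spikes)   # suprathreshold input gives repetitive spiking
```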
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Bootstrapping Concept Formation in Small Neural Networks [2.580765958706854]
We argue that concepts are first formed as closed representations, which are then consolidated by relating them to each other.
We present a model system (agent) with a small neural network that uses realistic learning rules and receives only feedback from the environment in which the agent performs virtual actions.
arXiv Detail & Related papers (2021-10-26T12:58:27Z)
- Detecting Modularity in Deep Neural Networks [8.967870619902211]
We consider the problem of assessing the modularity exhibited by a partitioning of a network's neurons.
We propose two proxies for this: importance, which reflects how crucial sets of neurons are to network performance; and coherence, which reflects how consistently their neurons associate with features of the inputs.
We show that these partitionings, even ones based only on weights, reveal groups of neurons that are important and coherent.
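The two proxies are described only in words here; the sketch below is a toy rendering of them (ablation drop for importance, best input-feature correlation for coherence) on an invented two-layer network, and is not the paper's actual definition of either proxy.

```python
# Toy versions of the two proxies described above, with the network and
# data invented; the paper's exact definitions are not reproduced here.
# "Importance": performance drop when a candidate neuron group is ablated.
# "Coherence": how consistently the group's units track input features,
# scored here as each unit's best absolute correlation with an input.
import numpy as np

rng = np.random.default_rng(1)
W1, W2 = rng.normal(size=(8, 16)), rng.normal(size=(16, 1))   # tiny invented 2-layer net
X = rng.normal(size=(200, 8))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def forward(X, mask=None):
    h = np.maximum(X @ W1, 0.0)        # hidden activations (ReLU)
    if mask is not None:
        h = h * mask                   # ablate a neuron group
    return h, (h @ W2).ravel()

def accuracy(scores):
    return ((scores > scores.mean()) == y).mean()

group = np.array([2, 5, 7, 11])        # one hypothetical partition member
mask = np.ones(16)
mask[group] = 0.0

h, full = forward(X)
_, ablated = forward(X, mask)
importance = accuracy(full) - accuracy(ablated)   # drop when the group is removed
coherence = np.mean([np.abs(np.corrcoef(h[:, j], X.T)[0, 1:]).max() for j in group])
print(round(importance, 3), round(coherence, 3))
```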
arXiv Detail & Related papers (2021-10-13T20:33:30Z)
- Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z)
- Condition Integration Memory Network: An Interpretation of the Meaning of the Neuronal Design [10.421465303670638]
This document introduces a hypothetical framework for the functional nature of primitive neural networks.
It analyzes the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes in the world.
It achieves this without participating in an algorithmic structure.
arXiv Detail & Related papers (2021-05-21T05:59:27Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
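For the last entry (Compositional Explanations of Neurons), here is a minimal sketch of the kind of mask-overlap (IoU) scoring such a procedure can use; the activations, concepts, and threshold are invented, and the search over candidate formulas is omitted.

```python
# Minimal sketch of scoring a compositional concept against a neuron:
# binarize the neuron's activations, compose concept masks with logical
# operators, and rank compositions by IoU with the neuron's mask. The
# inputs, concepts, and threshold are invented; the full formula search
# is omitted.
import numpy as np

rng = np.random.default_rng(2)
acts = rng.normal(size=500)                         # invented neuron activations over inputs
neuron_mask = acts > np.quantile(acts, 0.95)        # top-activation mask

concepts = {                                        # invented binary concept annotations
    "water": rng.random(500) < 0.2,
    "blue":  rng.random(500) < 0.3,
    "river": neuron_mask & (rng.random(500) < 0.8), # correlated concept, for illustration
}

def iou(a, b):
    return (a & b).sum() / max((a | b).sum(), 1)

candidates = {
    "water": concepts["water"],
    "water OR river": concepts["water"] | concepts["river"],
    "river AND NOT blue": concepts["river"] & ~concepts["blue"],
}
for name, mask in sorted(candidates.items(), key=lambda kv: -iou(neuron_mask, kv[1])):
    print(f"{name}: IoU = {iou(neuron_mask, mask):.2f}")
```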