Grid-SD2E: A General Grid-Feedback in a System for Cognitive Learning
- URL: http://arxiv.org/abs/2304.01844v3
- Date: Sun, 10 Dec 2023 08:12:22 GMT
- Title: Grid-SD2E: A General Grid-Feedback in a System for Cognitive Learning
- Authors: Jingyi Feng and Chenming Zhang
- Abstract summary: This study is inspired in part by grid cells to create a more general and robust grid module.
We construct an interactive and self-reinforcing cognitive system together with Bayesian reasoning.
The smallest computing unit is extracted, which is analogous to a single neuron in the brain.
- Score: 0.5221459608786241
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Comprehending how the brain interacts with the external world through
generated neural data is crucial for determining its working mechanism,
treating brain diseases, and understanding intelligence. Although many
theoretical models have been proposed, they have thus far been difficult to
integrate and develop. In this study, we drew partial inspiration from grid cells to create a more general and robust grid module and to construct an interactive and self-reinforcing cognitive system with Bayesian reasoning, an approach called space-division and exploration-exploitation with grid-feedback (Grid-SD2E). Here, a grid module can be used as an interaction medium between
the outside world and a system, as well as a self-reinforcement medium within
the system. The space-division and exploration-exploitation (SD2E) component receives the 0/1 signals of a grid through its space-division (SD) module. The system
described in this paper is also a theoretical model derived from experiments conducted by other researchers and from our own experience with neural decoding. Herein,
we analyse the rationality of the system based on the existing theories in both
neuroscience and cognitive science, and attempt to propose special and general
rules to explain the different interactions between people and between people
and the external world. Moreover, based on this framework, the smallest computing unit is extracted, which is analogous to a single neuron in the brain.
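The abstract gives no algorithmic detail, so the following Python sketch is only an assumption-laden illustration of how the three ingredients it names (a grid-style space-division step producing 0/1 signals, a Bayesian update, and an exploration-exploitation choice) might fit together; the class names, the Beta-Bernoulli model, and all parameters are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(42)

class GridSpaceDivision:
    """Illustrative space-division (SD) step: map a continuous observation
    onto a binary (0/1) occupancy pattern over a fixed grid (assumed form)."""
    def __init__(self, n_cells=10, low=0.0, high=1.0):
        self.edges = np.linspace(low, high, n_cells + 1)
        self.n_cells = n_cells

    def encode(self, x):
        cell = int(np.clip(np.digitize(x, self.edges) - 1, 0, self.n_cells - 1))
        signal = np.zeros(self.n_cells, dtype=int)
        signal[cell] = 1                      # 0/1 grid signal for this input
        return signal, cell

class ExplorationExploitation:
    """Illustrative exploration-exploitation (2E) step using a Beta-Bernoulli
    Bayesian update per grid cell (Thompson sampling as a stand-in)."""
    def __init__(self, n_cells):
        self.alpha = np.ones(n_cells)         # prior successes per cell
        self.beta = np.ones(n_cells)          # prior failures per cell

    def choose(self):
        return int(np.argmax(rng.beta(self.alpha, self.beta)))

    def update(self, cell, reward):
        self.alpha[cell] += reward            # Bayesian posterior update
        self.beta[cell] += 1 - reward

sd = GridSpaceDivision()
ee = ExplorationExploitation(sd.n_cells)
for _ in range(100):
    target_cell = ee.choose()                 # act (exploit or explore)
    observation = rng.random()                # feedback from the "world"
    signal, cell = sd.encode(observation)     # grid feedback as a 0/1 signal
    ee.update(target_cell, reward=int(cell == target_cell))
```

Thompson sampling is used here only as a convenient stand-in for "exploration-exploitation together with Bayesian reasoning"; the paper's actual mechanism may differ.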
Related papers
- The brain versus AI: World-model-based versatile circuit computation underlying diverse functions in the neocortex and cerebellum [0.0]
We identify similarities and convergent evolution between the brain and AI.
We propose a new theory that integrates established neuroscience theories.
Our systematic approach, insights, and theory promise groundbreaking advances in understanding the brain.
arXiv Detail & Related papers (2024-11-25T04:05:43Z)
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons (ANs) with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aimed at capturing the dynamic interplay between structural and effective networks.
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Interpretable Graph Neural Networks for Connectome-Based Brain Disorder Analysis [31.281194583900998]
We propose an interpretable framework to analyze disorder-specific Regions of Interest (ROIs) and prominent connections.
The proposed framework consists of two modules: a brain-network-oriented backbone model for disease prediction and a globally shared explanation generator.
arXiv Detail & Related papers (2022-06-30T08:02:05Z)
- Neuromorphic Artificial Intelligence Systems [58.1806704582023]
Modern AI systems, based on von Neumann architecture and classical neural networks, have a number of fundamental limitations in comparison with the brain.
This article discusses such limitations and the ways they can be mitigated.
It presents an overview of currently available neuromorphic AI projects in which these limitations are overcome.
arXiv Detail & Related papers (2022-05-25T20:16:05Z)
- From internal models toward metacognitive AI [0.0]
In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
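For orientation only: if each generative-inverse pair $i$ carries a normalized responsibility signal $r_i$, the entropy mentioned above would most naturally be the Shannon entropy below; the exact definition used in the cited paper may differ.

```latex
H(r) = -\sum_{i} r_i \log r_i, \qquad r_i \ge 0, \quad \sum_{i} r_i = 1
```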
arXiv Detail & Related papers (2021-09-27T05:00:56Z)
- The whole brain architecture approach: Accelerating the development of artificial general intelligence by referring to the brain [1.637145148171519]
It is difficult for an individual to design a software program that corresponds to the entire brain.
The whole-brain architecture approach divides the brain-inspired AGI development process into the task of designing the brain reference architecture.
This study proposes the Structure-constrained Interface Decomposition (SCID) method, which is a hypothesis-building method for creating a hypothetical component diagram.
arXiv Detail & Related papers (2021-03-06T04:58:12Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article deals with aspects of modeling commonsense reasoning, focusing on domains such as interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the language-ready brain the tools for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Brain-inspired self-organization with cellular neuromorphic computing for multimodal unsupervised learning [0.0]
We propose a brain-inspired neural system based on the reentry theory using Self-Organizing Maps and Hebbian-like learning.
We show the gain of the so-called hardware plasticity induced by the ReSOM, where the system's topology is not fixed by the user but is learned through the system's experience via self-organization (see the sketch after this entry).
arXiv Detail & Related papers (2020-04-11T21:02:45Z)
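As an illustration of the two ingredients named in the entry above (Self-Organizing Maps and Hebbian-like learning), here is a minimal Python sketch of one SOM update per modality plus a Hebbian-like association between the two maps; it is not the authors' ReSOM, and every name and parameter is assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def som_update(weights, x, lr=0.1, sigma=1.0):
    """One Kohonen-style SOM step: find the best-matching unit (BMU)
    and pull neighbouring units' weights toward the input x."""
    n_units, _ = weights.shape
    bmu = int(np.argmin(np.linalg.norm(weights - x, axis=1)))   # winner index
    grid = np.arange(n_units)                                   # 1-D map layout
    neighbourhood = np.exp(-((grid - bmu) ** 2) / (2 * sigma ** 2))
    weights += lr * neighbourhood[:, None] * (x - weights)      # move toward x
    return bmu

def hebbian_associate(assoc, act_a, act_b, lr=0.05):
    """Hebbian-like coupling between the activities of two maps:
    strengthen connections between co-active units."""
    assoc += lr * np.outer(act_a, act_b)
    return assoc

# Toy multimodal example: two modalities, one SOM per modality,
# plus a Hebbian association matrix linking them.
som_a = rng.random((16, 3))    # e.g. a 3-D "visual" feature per sample
som_b = rng.random((16, 2))    # e.g. a 2-D "auditory" feature per sample
assoc = np.zeros((16, 16))

for _ in range(200):
    xa, xb = rng.random(3), rng.random(2)        # paired multimodal sample
    ia = som_update(som_a, xa)
    ib = som_update(som_b, xb)
    act_a = np.eye(16)[ia]                        # one-hot BMU activity
    act_b = np.eye(16)[ib]
    assoc = hebbian_associate(assoc, act_a, act_b)
```

Here the association matrix stands in for a coupling between modalities that is learned rather than fixed by the user; the actual ReSOM targets cellular neuromorphic hardware and learns its topology through self-organization, as described in the entry above.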