A Theory of Natural Intelligence
- URL: http://arxiv.org/abs/2205.00002v1
- Date: Fri, 22 Apr 2022 10:27:52 GMT
- Title: A Theory of Natural Intelligence
- Authors: Christoph von der Malsburg, Thilo Stadelmann, Benjamin F. Grewe
- Abstract summary: In contrast to current AI technology, natural intelligence is far superior in terms of learning speed, generalization capabilities, autonomy and creativity.
How are these strengths achieved, and by what means are ideas and imagination produced in natural neural networks?
- Score: 3.9901365062418312
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Introduction: In contrast to current AI technology, natural intelligence --
the kind of autonomous intelligence realized in the brains of animals and humans
to attain, in their natural environment, goals defined by a repertoire of innate
behavioral schemata -- is far superior in terms of learning speed,
generalization capabilities, autonomy and creativity. How are these strengths
achieved, and by what means are ideas and imagination produced in natural neural networks?
Methods: Reviewing the literature, we put forward the argument that both our
natural environment and the brain are of low complexity, that is, they require
very little information for their generation and are consequently both highly
structured. We further argue that the structures of the brain and the natural
environment are closely related.
Results: We propose that the structural regularity of the brain takes the
form of net fragments (self-organized network patterns) and that these serve as
the powerful inductive bias that enables the brain to learn quickly, generalize
from few examples and bridge the gap between abstractly defined general goals
and concrete situations.
Conclusions: Our results have an important bearing on open problems in
artificial neural network research.
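The abstract is conceptual and specifies no algorithm, but the "net fragments" proposal draws on the broader idea of Hebbian self-organization: correlated, low-complexity input statistics shape random connectivity into recurring local structure. The Python toy below is a minimal illustrative sketch of that general mechanism, not the authors' model; the ring size, bump width, and learning rate are hypothetical choices made only for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: 60 neurons on a ring with random lateral weights.
N = 60
W = rng.random((N, N)) * 0.1
np.fill_diagonal(W, 0.0)

def bump(center, width=5):
    """Localized activity pattern: neighbouring neurons co-activate."""
    idx = np.arange(N)
    d = np.minimum(np.abs(idx - center), N - np.abs(idx - center))
    return np.exp(-(d / width) ** 2)

eta = 0.05
for _ in range(2000):
    x = bump(rng.integers(N))          # structured, low-complexity input
    W += eta * np.outer(x, x)          # Hebbian co-activation update
    np.fill_diagonal(W, 0.0)
    W /= W.sum(axis=1, keepdims=True)  # normalization keeps weights bounded

# After learning, each neuron's strongest connections point to its
# co-active neighbours: locally structured connectivity has self-organized.
strongest = np.argsort(W[0])[-5:]
print("neuron 0 connects most strongly to:", sorted(strongest))
```

Running the sketch, neuron 0 ends up most strongly connected to its ring neighbours, i.e. a locally structured pattern emerges from unstructured initial weights purely through co-activation statistics; the paper's net fragments are a far richer notion, but this is the kind of input-driven structural bias the abstract alludes to.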
Related papers
- Evolution imposes an inductive bias that alters and accelerates learning dynamics [49.1574468325115]
We investigate the effect of evolutionary optimization on the learning dynamics of neural networks. We combined natural selection and online learning algorithms to produce a method for evolutionarily conditioning artificial neural networks. Results suggest that evolution constitutes an inductive bias that tunes neural systems to enable rapid learning.
arXiv Detail & Related papers (2025-05-15T18:50:57Z) - Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [58.58177409853298]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z) - Nature's Insight: A Novel Framework and Comprehensive Analysis of Agentic Reasoning Through the Lens of Neuroscience [11.174550573411008]
We propose a novel neuroscience-inspired framework for agentic reasoning. We apply this framework to systematically classify and analyze existing AI reasoning methods. We also propose new neural-inspired reasoning methods, analogous to chain-of-thought prompting.
arXiv Detail & Related papers (2025-05-07T14:25:46Z) - Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial-neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Psychology of Artificial Intelligence: Epistemological Markers of the Cognitive Analysis of Neural Networks [0.0]
The psychology of artificial intelligence, as predicted by Asimov (1950), aims to study this probing- and explainability-sensitive aspect of AI.
A prerequisite for examining the latter is to clarify some milestones regarding the cognitive status we can attribute to its phenomenology.
arXiv Detail & Related papers (2024-07-04T12:53:05Z) - A Review of Findings from Neuroscience and Cognitive Psychology as
Possible Inspiration for the Path to Artificial General Intelligence [0.0]
This review aims to contribute to the quest for artificial general intelligence by examining neuroscience and cognitive psychology methods.
Despite the impressive advancements achieved by deep learning models, they still have shortcomings in abstract reasoning and causal understanding.
arXiv Detail & Related papers (2024-01-03T09:46:36Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian
Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free-energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Towards the Neuroevolution of Low-level Artificial General Intelligence [5.2611228017034435]
We argue that the search for Artificial General Intelligence (AGI) should start from a much lower level than human-level intelligence.
Our hypothesis is that learning occurs through sensory feedback when an agent acts in an environment.
We evaluate a method to evolve a biologically-inspired artificial neural network that learns from environment reactions.
arXiv Detail & Related papers (2022-07-27T15:30:50Z) - Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge
Representation and Reasoning [11.048601659933249]
How neural networks in the human brain represent commonsense knowledge is an important research topic in neuroscience, cognitive science, psychology, and artificial intelligence.
This work investigates how population encoding and spike-timing-dependent plasticity (STDP) mechanisms can be integrated into the learning of spiking neural networks.
The neuron populations of different communities together constitute the entire commonsense knowledge graph, forming a giant graph spiking neural network.
arXiv Detail & Related papers (2022-07-11T05:22:38Z) - An Introductory Review of Spiking Neural Network and Artificial Neural
Network: From Biological Intelligence to Artificial Intelligence [4.697611383288171]
Spiking neural networks with biological interpretability are gradually receiving wide attention.
This review hopes to attract different researchers and advance the development of brain-inspired intelligence and artificial intelligence.
arXiv Detail & Related papers (2022-04-09T09:34:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.