Augmenting learning in neuro-embodied systems through neurobiological first principles
- URL: http://arxiv.org/abs/2407.04525v5
- Date: Mon, 03 Nov 2025 13:54:00 GMT
- Title: Augmenting learning in neuro-embodied systems through neurobiological first principles
- Authors: Alejandro Rodriguez-Garcia, Anindya Ghosh, Jie Mei, Srikanth Ramaswamy
- Abstract summary: We describe recent bioinspired models, learning rules, and architectures for augmenting artificial neural networks. We propose a framework for augmenting ANNs, which has the potential to bridge the gap between neuroscience and AI. We show how integrating biophysical principles into task-driven spiking neural networks and neuromorphic systems provides scalable solutions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent progress in artificial intelligence (AI) has been driven by insights from physics and neuroscience, particularly through the development of artificial neural networks (ANNs) capable of complex cognitive tasks such as vision and language processing. Despite these advances, they struggle with continual learning, adaptable knowledge transfer, robustness, and resource efficiency -- capabilities that biological systems handle seamlessly. Specifically, neuromorphic systems and artificial neural networks often overlook two key biophysical properties of neural circuits: neuronal diversity and cell-specific neuromodulation. These mechanisms, essential for regulating dynamic learning across brain scales, allow neuromodulators to introduce degeneracy in biological neural networks, ensuring stability and adaptability under changing conditions. In this article, we summarize recent bioinspired models, learning rules, and architectures, and propose a framework for augmenting ANNs, which has the potential to bridge the gap between neuroscience and AI through neurobiological first principles. Our proposed dual-framework approach leverages spiking neural networks to emulate diverse spiking behaviors and dendritic compartmental dynamics, thereby simulating the morphological and functional diversity of neuronal computations. Finally, we outline how integrating these biophysical principles into task-driven spiking neural networks and neuromorphic systems provides scalable solutions for continual learning, adaptability, robustness, and resource-efficiency. Additionally, this approach will not only provide insights into how emergent behaviors arise in neural networks but also catalyze the development of more efficient, reliable, and intelligent neuromorphic systems and robotic agents.
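The two biophysical properties the abstract highlights, neuronal diversity and neuromodulation, can be illustrated with a minimal sketch. The following is our own hypothetical example, not the authors' code: a population of leaky integrate-and-fire (LIF) neurons with per-neuron membrane time constants (diversity) and a single scalar gain standing in for neuromodulatory control of input drive.

```python
import numpy as np

# Hypothetical sketch (not the paper's implementation): LIF neurons with
# diverse time constants and a scalar neuromodulatory gain on the input.
rng = np.random.default_rng(0)

n_neurons = 8
tau = rng.uniform(5.0, 30.0, n_neurons)   # diverse membrane time constants (ms)
v_thresh, v_reset, dt = 1.0, 0.0, 1.0     # threshold, reset value, step (ms)

def step(v, inputs, gain):
    """One Euler step of LIF dynamics; `gain` models neuromodulation."""
    v = v + dt / tau * (-v + gain * inputs)
    spikes = v >= v_thresh
    v = np.where(spikes, v_reset, v)      # reset neurons that fired
    return v, spikes

v = np.zeros(n_neurons)
total_spikes = 0
for t in range(100):
    v, spikes = step(v, inputs=np.full(n_neurons, 1.5), gain=1.2)
    total_spikes += int(spikes.sum())
print(total_spikes)
```

Because each neuron integrates at its own rate, the same constant drive yields staggered firing across the population; raising or lowering `gain` shifts the whole population's excitability, loosely mirroring neuromodulatory state changes.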
Related papers
- Guiding Sparse Neural Networks with Neurobiological Principles to Elicit Biologically Plausible Representations [1.0787328610467803]
We introduce a biologically inspired learning rule that naturally integrates neurobiological principles. Our model enhances robustness against adversarial attacks and demonstrates superior generalization. Preliminary results suggest that this approach could extend from feature-specific to task-specific encoding.
arXiv Detail & Related papers (2026-03-03T18:27:37Z) - Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence [55.07411490538404]
We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors.
arXiv Detail & Related papers (2025-11-13T09:28:41Z) - Evolution imposes an inductive bias that alters and accelerates learning dynamics [49.1574468325115]
We investigate the effect of evolutionary optimization on the learning dynamics of neural networks. We combined natural selection and online learning algorithms to produce a method for evolutionarily conditioning artificial neural networks. Results suggest evolution constitutes an inductive bias that tunes neural systems to enable rapid learning.
arXiv Detail & Related papers (2025-05-15T18:50:57Z) - Neuroplasticity in Artificial Intelligence -- An Overview and Inspirations on Drop In & Out Learning [42.086960710257564]
We explore how neurogenesis, neuroapoptosis, and neuroplasticity can inspire future AI advances. We introduce the concept of "drop-in" for neurogenesis and revisit "dropout" and structural pruning for neuroapoptosis. We conclude by advocating for greater research efforts in this interdisciplinary domain.
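The "dropout" versus "drop-in" contrast can be made concrete with a small sketch. This is our own illustrative reading of the summary, with invented names and sizes, not the paper's method: dropout silences existing units (neuroapoptosis), while drop-in appends freshly initialized ones (neurogenesis).

```python
import numpy as np

# Hypothetical illustration (our naming, not the paper's code):
# "dropout" zeroes existing hidden units; "drop_in" adds new ones.
rng = np.random.default_rng(1)

def dropout(weights, p=0.25):
    """Zero a random fraction p of hidden units (columns) in expectation."""
    mask = rng.random(weights.shape[1]) >= p
    return weights * mask

def drop_in(weights, n_new=2, scale=0.1):
    """Append n_new freshly initialized hidden units (columns)."""
    new_cols = rng.normal(0.0, scale, (weights.shape[0], n_new))
    return np.concatenate([weights, new_cols], axis=1)

w = rng.normal(size=(4, 6))   # 4 inputs -> 6 hidden units
w = dropout(w, p=0.5)         # "neuroapoptosis": silence about half
w = drop_in(w, n_new=3)       # "neurogenesis": grow 3 new units
print(w.shape)  # (4, 9)
```

In a real training loop one would apply such operations between epochs and let optimization repurpose the surviving and newly added capacity.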
arXiv Detail & Related papers (2025-03-27T12:09:04Z) - Improving the adaptive and continuous learning capabilities of artificial neural networks: Lessons from multi-neuromodulatory dynamics [43.35924697803789]
Biological organisms excel in acquiring, transferring, and retaining knowledge while adapting to dynamic environments.
This study explores how neuromodulation, a fundamental feature of biological learning systems, can help address challenges such as catastrophic forgetting.
By integrating multi-scale neuromodulation, we aim to bridge the gap between biological learning and artificial systems.
arXiv Detail & Related papers (2025-01-12T10:10:01Z) - Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representation of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Research Advances and New Paradigms for Biology-inspired Spiking Neural Networks [8.315801422499861]
Spiking neural networks (SNNs) are gaining popularity in the computational simulation and artificial intelligence fields.
This paper explores the historical development of SNN and concludes that these two fields are intersecting and merging rapidly.
arXiv Detail & Related papers (2024-08-26T03:37:48Z) - Astrocyte-Enabled Advancements in Spiking Neural Networks for Large Language Modeling [7.863029550014263]
Astrocyte-Modulated Spiking Neural Network (AstroSNN) exhibits exceptional performance in tasks involving memory retention and natural language generation.
AstroSNN shows low latency, high throughput, and reduced memory usage in practical applications.
arXiv Detail & Related papers (2023-12-12T06:56:31Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
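Two common coding schemes that a heterogeneous SNN might mix can be sketched as follows. The encoders, window length, and numbers here are our own illustrative choices, not the paper's: rate coding maps intensity to spike probability, while latency coding makes stronger inputs spike earlier.

```python
import numpy as np

# Hypothetical sketch of two coding schemes a heterogeneous SNN could
# combine (illustrative, not the paper's encoders).
T = 10  # time steps in the coding window

def rate_code(x, seed=3):
    """Rate coding: emit a spike with probability x at each step."""
    rng = np.random.default_rng(seed)
    return (rng.random(T) < x).astype(int)

def latency_code(x):
    """Latency coding: a single spike, earlier for stronger input x in [0, 1]."""
    train = np.zeros(T, dtype=int)
    t = int(round((1.0 - x) * (T - 1)))  # x=1 -> step 0, x=0 -> step T-1
    train[t] = 1
    return train

strong = latency_code(0.9)
weak = latency_code(0.2)
print(strong.argmax() < weak.argmax())  # stronger input fires earlier
```

A holistically designed architecture, in the summary's sense, would assign each layer or population whichever scheme best suits its role rather than imposing one uniform code.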
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
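The idea of neuro-centric parameters, as opposed to one shared activation function, can be sketched in a few lines. This is our own hypothetical rendering of the summary, not the paper's parameterization: each neuron carries its own slope and bias for a parametric activation, which an evolutionary search could then tune.

```python
import numpy as np

# Hypothetical sketch (our parameterization, not the paper's): each
# neuron owns the parameters of its activation instead of sharing one.
rng = np.random.default_rng(2)

n = 5
slope = rng.uniform(0.5, 2.0, n)    # per-neuron "neuro-centric" parameters
bias = rng.uniform(-0.5, 0.5, n)

def diverse_act(x):
    """Per-neuron parametric tanh: unit i computes tanh(slope[i]*x + bias[i])."""
    return np.tanh(slope * x + bias)

y = diverse_act(np.ones(n))
print(np.unique(y).size)  # identical inputs, distinct per-neuron outputs
```

With a shared activation, identical inputs would produce identical outputs; per-neuron parameters break that symmetry, which is the functional diversity the summary refers to.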
arXiv Detail & Related papers (2023-05-25T11:33:04Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Towards NeuroAI: Introducing Neuronal Diversity into Artificial Neural Networks [20.99799416963467]
In the human brain, neuronal diversity is an enabling factor for all kinds of biological intelligent behaviors.
In this Primer, we first discuss the preliminaries of biological neuronal diversity and the characteristics of information transmission and processing in a biological neuron.
arXiv Detail & Related papers (2023-01-23T02:23:45Z) - Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - Towards efficient end-to-end speech recognition with biologically-inspired neural networks [10.457580011403289]
We introduce neural connectivity concepts emulating the axo-somatic and the axo-axonic synapses.
We demonstrate, for the first time, that a biologically realistic implementation of a large-scale ASR model can yield competitive performance levels.
arXiv Detail & Related papers (2021-10-04T21:24:10Z) - A brain basis of dynamical intelligence for AI and computational neuroscience [0.0]
More brain-like capacities may demand new theories, models, and methods for designing artificial learning systems.
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
arXiv Detail & Related papers (2021-05-15T19:49:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.