Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence
- URL: http://arxiv.org/abs/2511.10119v2
- Date: Fri, 14 Nov 2025 06:21:06 GMT
- Title: Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence
- Authors: Borui Cai, Yao Zhao
- Abstract summary: We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors.
- Score: 55.07411490538404
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). Unlike existing foundation models (FMs), which specialize in pattern learning within specific domains such as language, vision, or time series, IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors. Vision, language, and other cognitive abilities are manifestations of intelligent behavior; learning from this broad range of behaviors enables the system to internalize the general principles of intelligence. Based on the fact that intelligent behaviors emerge from the collective dynamics of biological neural systems, IFM consists of two core components: a novel network architecture, termed the state neural network, which captures neuron-like dynamic processes, and a new learning objective, neuron output prediction, which trains the system to predict neuronal outputs from collective dynamics. The state neural network emulates the temporal dynamics of biological neurons, allowing the system to store, integrate, and process information over time, while the neuron output prediction objective provides a unified computational principle for learning these structural dynamics from intelligent behaviors. Together, these innovations establish a biologically grounded and computationally scalable foundation for building systems capable of generalization, reasoning, and adaptive learning across domains, representing a step toward truly AGI.
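The abstract's two core components can be illustrated with a minimal sketch. Note that the paper does not publish equations or code in this listing, so everything below is a hypothetical interpretation: the class name `StateNeuralNetwork`, the leaky-state update, and the masked prediction loss are all assumptions, chosen only to make the two ideas concrete (neurons with persistent temporal state, and a training signal that predicts some neurons' outputs from the collective dynamics of the rest).

```python
import numpy as np

rng = np.random.default_rng(0)

class StateNeuralNetwork:
    """Hypothetical sketch of a 'state neural network': each neuron
    carries a persistent internal state that decays and integrates
    recurrent drive plus external input over time."""

    def __init__(self, n_neurons, decay=0.9):
        self.W = rng.normal(0.0, 1.0 / np.sqrt(n_neurons),
                            (n_neurons, n_neurons))  # recurrent weights
        self.decay = decay
        self.state = np.zeros(n_neurons)

    def step(self, x):
        # Leaky integration: old state decays, recurrent and external
        # input accumulate, so information persists across time steps.
        self.state = (self.decay * self.state
                      + self.W @ np.tanh(self.state) + x)
        return np.tanh(self.state)  # neuron outputs


def neuron_output_prediction_loss(net, inputs, mask, readout):
    """Assumed form of the 'neuron output prediction' objective:
    predict the outputs of masked neurons from the visible neurons'
    collective dynamics, scored by mean squared error."""
    loss = 0.0
    for x in inputs:
        out = net.step(x)
        pred = readout @ (out * ~mask)   # use visible neurons only
        loss += np.mean((pred[mask] - out[mask]) ** 2)
    return loss / len(inputs)
```

In this reading, minimizing the loss forces the network to encode each neuron's behavior in the dynamics of the others, which is one plausible way a "unified computational principle" for learning structural dynamics could be operationalized.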
Related papers
- The brain-AI convergence: Predictive and generative world models for general-purpose computation [0.0]
Recent advances in AI systems with attention-based transformers offer a potential window into how the neocortex and cerebellum give rise to diverse functions. We identify shared computational mechanisms in the attention-based neocortex and the non-attentional cerebellum.
arXiv Detail & Related papers (2025-12-02T05:03:14Z) - Mind Meets Space: Rethinking Agentic Spatial Intelligence from a Neuroscience-inspired Perspective [53.556348738917166]
Recent advances in agentic AI have led to systems capable of autonomous task execution and language-based reasoning. Human spatial intelligence, rooted in integrated multisensory perception, spatial memory, and cognitive maps, enables flexible, context-aware decision-making in unstructured environments.
arXiv Detail & Related papers (2025-09-11T05:23:22Z) - Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [78.61382193420914]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of the Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z) - Shifting Attention to You: Personalized Brain-Inspired AI Models [3.0128071072792366]
We show that integrating human behavioral insights and millisecond-scale neural data within a fine-tuned CLIP-based model more than doubles behavioral performance compared to the unmodified CLIP baseline. Our work establishes a novel, interpretable framework for designing adaptive AI systems, with broad implications for neuroscience, personalized medicine, and human-computer interaction.
arXiv Detail & Related papers (2025-02-07T04:55:31Z) - Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the AN sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z) - Augmenting learning in neuro-embodied systems through neurobiological first principles [42.810158068175646]
We describe recent bioinspired models, learning rules, and architectures for augmenting artificial neural networks. We propose a framework for augmenting ANNs, which has the potential to bridge the gap between neuroscience and AI. We show how integrating biophysical principles into task-driven spiking neural networks and neuromorphic systems provides scalable solutions.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that instantiates the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - A brain basis of dynamical intelligence for AI and computational neuroscience [0.0]
More brain-like capacities may demand new theories, models, and methods for designing artificial learning systems.
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
arXiv Detail & Related papers (2021-05-15T19:49:32Z) - A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how it processes information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.