A New Strategy for Artificial Intelligence: Training Foundation Models Directly on Human Brain Data
- URL: http://arxiv.org/abs/2601.12053v1
- Date: Sat, 17 Jan 2026 13:38:51 GMT
- Title: A New Strategy for Artificial Intelligence: Training Foundation Models Directly on Human Brain Data
- Authors: Maël Donoso
- Abstract summary: We explore a new strategy for artificial intelligence: moving beyond surface-level statistical regularities by training foundation models directly on human brain data. In this paper, we classify the current limitations of foundation models, as well as the promising brain regions and cognitive processes that could be leveraged to address them. We also discuss the potential implications for agents, artificial general intelligence, and artificial superintelligence, as well as the ethical, social, and technical challenges and opportunities.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While foundation models have achieved remarkable results across a diversity of domains, they still rely on human-generated data, such as text, as a fundamental source of knowledge. However, this data is ultimately the product of human brains, the filtered projection of a deeper neural complexity. In this paper, we explore a new strategy for artificial intelligence: moving beyond surface-level statistical regularities by training foundation models directly on human brain data. We hypothesize that neuroimaging data could open a window into elements of human cognition that are not accessible through observable actions, and argue that this additional knowledge could be used, alongside classical training data, to overcome some of the current limitations of foundation models. While previous research has demonstrated the possibility to train classical machine learning or deep learning models on neural patterns, this path remains largely unexplored for high-level cognitive functions. Here, we classify the current limitations of foundation models, as well as the promising brain regions and cognitive processes that could be leveraged to address them, along four levels: perception, valuation, execution, and integration. Then, we propose two methods that could be implemented to prioritize the use of limited neuroimaging data for strategically chosen, high-value steps in foundation model training: reinforcement learning from human brain (RLHB) and chain of thought from human brain (CoTHB). We also discuss the potential implications for agents, artificial general intelligence, and artificial superintelligence, as well as the ethical, social, and technical challenges and opportunities. We argue that brain-trained foundation models could represent a realistic and effective middle ground between continuing to scale current architectures and exploring alternative, neuroscience-inspired solutions.
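The RLHB proposal in the abstract can be pictured as RLHF with the human preference label replaced by a valuation signal decoded from neuroimaging data. The sketch below is purely illustrative, not the paper's method: all data is synthetic, and the feature dimensionality, the hidden preference direction, and the `brain_reward` helper are assumptions introduced for the example.

```python
# Hedged sketch of the RLHB idea: fit a reward model on neuroimaging
# features recorded while a person evaluates model outputs, instead of
# on explicit preference labels as in standard RLHF. Synthetic data only.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "valuation" data: 64-dim brain features per evaluated output,
# with a hidden linear direction standing in for neural valuation.
n_samples, n_features = 200, 64
true_w = rng.normal(size=n_features)
X = rng.normal(size=(n_samples, n_features))   # brain features
y = (X @ true_w > 0).astype(float)             # preferred (1) vs. not (0)

# Reward model: logistic regression trained by plain gradient descent.
w = np.zeros(n_features)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))
    w -= lr * X.T @ (p - y) / n_samples

def brain_reward(features):
    """Scalar reward in (0, 1) predicted from brain features."""
    return 1.0 / (1.0 + np.exp(-(features @ w)))

# This learned reward could then stand in for the human-preference reward
# inside a standard RLHF-style policy-optimization loop.
acc = np.mean((brain_reward(X) > 0.5) == y)
print(f"reward-model training accuracy: {acc:.2f}")
```

The design choice mirrors RLHF's two-stage structure: the scarce, expensive signal (here, neuroimaging sessions) trains a cheap proxy reward model, which is then queried freely during policy optimization.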
Related papers
- Intelligence Foundation Model: A New Perspective to Approach Artificial General Intelligence [55.07411490538404]
We propose a new perspective for approaching artificial general intelligence (AGI) through an intelligence foundation model (IFM). IFM aims to acquire the underlying mechanisms of intelligence by learning directly from diverse intelligent behaviors.
arXiv Detail & Related papers (2025-11-13T09:28:41Z) - Artificial intelligence as a surrogate brain: Bridging neural dynamical models and data [9.300290334520481]
Recent breakthroughs in artificial intelligence (AI) are reshaping the way we construct computational counterparts of the brain, giving rise to a new class of "surrogate brains". We introduce a unified framework for constructing an AI-based surrogate brain that integrates forward modeling, inverse problem solving, and model evaluation. We highlight that the learned surrogate brain serves as a simulation platform for dynamical systems analysis, virtual perturbation, and model-guided neurostimulation.
arXiv Detail & Related papers (2025-10-11T18:23:10Z) - Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [78.61382193420914]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of the Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z) - Shifting Attention to You: Personalized Brain-Inspired AI Models [3.0128071072792366]
We show that integrating human behavioral insights and millisecond-scale neural data within a fine-tuned CLIP-based model more than doubles behavioral performance compared to the unmodified CLIP baseline. Our work establishes a novel, interpretable framework for designing adaptive AI systems, with broad implications for neuroscience, personalized medicine, and human-computer interaction.
arXiv Detail & Related papers (2025-02-07T04:55:31Z) - Augmenting learning in neuro-embodied systems through neurobiological first principles [42.810158068175646]
We describe recent bioinspired models, learning rules, and architectures for augmenting artificial neural networks. We propose a framework for augmenting ANNs, which has the potential to bridge the gap between neuroscience and AI. We show how integrating biophysical principles into task-driven spiking neural networks and neuromorphic systems provides scalable solutions.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Improving deep learning with prior knowledge and cognitive models: A survey on enhancing explainability, adversarial robustness and zero-shot learning [0.0]
We review current and emerging knowledge-informed and brain-inspired cognitive systems for realizing adversarial defenses.
Brain-inspired cognition methods use computational models that mimic the human mind to enhance intelligent behavior in artificial agents and autonomous robots.
arXiv Detail & Related papers (2024-03-11T18:11:00Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently achieves continual learning in spiking neural networks with nearly zero forgetting.
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Achieving More Human Brain-Like Vision via Human EEG Representational Alignment [3.860467813810253]
We present 'Re(presentational)Al(ignment)net', a vision model aligned with human brain activity based on non-invasive EEG. Our innovative image-to-brain multi-layer encoding framework advances human neural alignment by optimizing multiple model layers.
arXiv Detail & Related papers (2024-01-30T18:18:41Z) - A Review of Findings from Neuroscience and Cognitive Psychology as Possible Inspiration for the Path to Artificial General Intelligence [0.0]
This review aims to contribute to the quest for artificial general intelligence by examining neuroscience and cognitive psychology methods.
Despite the impressive advancements achieved by deep learning models, they still have shortcomings in abstract reasoning and causal understanding.
arXiv Detail & Related papers (2024-01-03T09:46:36Z) - A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that instantiates the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z) - Brain-inspired Computational Intelligence via Predictive Coding [73.42407863671565]
Predictive coding (PC) has shown promising properties that make it potentially valuable for the machine learning community. PC-like algorithms are starting to be present in multiple sub-fields of machine learning and AI at large.
arXiv Detail & Related papers (2023-08-15T16:37:16Z) - NeuroCERIL: Robotic Imitation Learning via Hierarchical Cause-Effect Reasoning in Programmable Attractor Neural Networks [2.0646127669654826]
We present NeuroCERIL, a brain-inspired neurocognitive architecture that uses a novel hypothetico-deductive reasoning procedure.
We show that NeuroCERIL can learn various procedural skills in a simulated robotic imitation learning domain.
We conclude that NeuroCERIL is a viable neural model of human-like imitation learning.
arXiv Detail & Related papers (2022-11-11T19:56:11Z) - WenLan 2.0: Make AI Imagine via a Multimodal Foundation Model [74.4875156387271]
We develop a novel foundation model pre-trained with huge multimodal (visual and textual) data.
We show that state-of-the-art results can be obtained on a wide range of downstream tasks.
arXiv Detail & Related papers (2021-10-27T12:25:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.