Thousand-Brains Systems: Sensorimotor Intelligence for Rapid, Robust Learning and Inference
- URL: http://arxiv.org/abs/2507.04494v1
- Date: Sun, 06 Jul 2025 18:11:07 GMT
- Title: Thousand-Brains Systems: Sensorimotor Intelligence for Rapid, Robust Learning and Inference
- Authors: Niels Leadholm, Viviane Clay, Scott Knudstrup, Hojae Lee, Jeff Hawkins
- Abstract summary: Current AI systems achieve impressive performance on many tasks, yet they lack core attributes of biological intelligence. Neuroscience theory suggests that mammals evolved flexible intelligence through the replication of a semi-independent, sensorimotor module. We present Monty, the first implementation of a thousand-brains system.
- Score: 0.8288727568301834
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Current AI systems achieve impressive performance on many tasks, yet they lack core attributes of biological intelligence, including rapid, continual learning, representations grounded in sensorimotor interactions, and structured knowledge that enables efficient generalization. Neuroscience theory suggests that mammals evolved flexible intelligence through the replication of a semi-independent, sensorimotor module, a functional unit known as a cortical column. To address the disparity between biological and artificial intelligence, thousand-brains systems were proposed as a means of mirroring the architecture of cortical columns and their interactions. In the current work, we evaluate the unique properties of Monty, the first implementation of a thousand-brains system. We focus on 3D object perception, and in particular, the combined task of object recognition and pose estimation. Utilizing the YCB dataset of household objects, we first assess Monty's use of sensorimotor learning to build structured representations, finding that these enable robust generalization. These representations include an emphasis on classifying objects by their global shape, as well as a natural ability to detect object symmetries. We then explore Monty's use of model-free and model-based policies to enable rapid inference by supporting principled movements. We find that such policies complement Monty's modular architecture, a design that can accommodate communication between modules to further accelerate inference speed via a novel `voting' algorithm. Finally, we examine Monty's use of associative, Hebbian-like binding to enable rapid, continual, and computationally efficient learning, properties that compare favorably to current deep learning architectures. While Monty is still in a nascent stage of development, these findings support thousand-brains systems as a powerful and promising new approach to AI.
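The abstract describes a "voting" algorithm in which semi-independent modules share evidence to accelerate inference. The following is a minimal, hypothetical sketch of that idea, not Monty's actual implementation: it assumes each module holds an evidence score per candidate object and that voting simply pools peer evidence into each module's own beliefs. All names and the update rule are illustrative assumptions.

```python
import numpy as np

def vote(evidence_per_module):
    """Pool votes across modules.

    evidence_per_module: (n_modules, n_objects) array, where entry [m, o]
    is module m's accumulated evidence for candidate object o.
    Each module adds the mean evidence of its peers to its own beliefs.
    """
    evidence = np.asarray(evidence_per_module, dtype=float)
    n_modules = evidence.shape[0]
    updated = np.empty_like(evidence)
    for m in range(n_modules):
        peers = np.delete(evidence, m, axis=0)         # every module except m
        updated[m] = evidence[m] + peers.mean(axis=0)  # incorporate peer votes
    return updated

# Toy example: three modules, two candidate objects.
beliefs = [[2.0, 0.5],   # module 0 strongly favors object 0
           [1.5, 1.0],   # module 1 weakly favors object 0
           [0.5, 0.6]]   # module 2 is ambiguous on its own
pooled = vote(beliefs)
# After voting, even the ambiguous module's beliefs favor object 0.
assert all(row[0] > row[1] for row in pooled)
```

The point of the sketch is the claimed speed-up: a module whose local evidence is ambiguous (module 2 above) can converge on the correct object after a single round of voting, rather than requiring further sensorimotor exploration.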
Related papers
- Thinking Beyond Tokens: From Brain-Inspired Intelligence to Cognitive Foundations for Artificial General Intelligence and its Societal Impact [31.63205881016299]
This paper offers a cross-disciplinary synthesis of artificial intelligence, cognitive neuroscience, psychology, generative models, and agent-based systems. We analyze the architectural and cognitive foundations of general intelligence, highlighting the role of modular reasoning, persistent memory, and multi-agent coordination. We identify key scientific, technical, and ethical challenges on the path to Artificial General Intelligence.
arXiv Detail & Related papers (2025-07-01T16:52:25Z) - Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems [133.45145180645537]
The advent of large language models (LLMs) has catalyzed a transformative shift in artificial intelligence. As these agents increasingly drive AI research and practical applications, their design, evaluation, and continuous improvement present intricate, multifaceted challenges. This survey provides a comprehensive overview, framing intelligent agents within a modular, brain-inspired architecture.
arXiv Detail & Related papers (2025-03-31T18:00:29Z) - Large Language Model Agent: A Survey on Methodology, Applications and Challenges [88.3032929492409]
Large Language Model (LLM) agents, with goal-driven behaviors and dynamic adaptation capabilities, potentially represent a critical pathway toward artificial general intelligence. This survey systematically deconstructs LLM agent systems through a methodology-centered taxonomy. Our work provides a unified architectural perspective, examining how agents are constructed, how they collaborate, and how they evolve over time.
arXiv Detail & Related papers (2025-03-27T12:50:17Z) - The Thousand Brains Project: A New Paradigm for Sensorimotor Intelligence [0.5032786223328559]
We outline the Thousand Brains Project, an ongoing research effort to develop an alternative, complementary form of AI. We present an early version of a thousand-brains system, a sensorimotor agent that is uniquely suited to quickly learn a wide range of tasks. We outline the key principles motivating the design of thousand-brains systems and provide details about the implementation of Monty, our first instantiation of such a system.
arXiv Detail & Related papers (2024-12-24T11:32:37Z) - SciAgents: Automating scientific discovery through multi-agent intelligent graph reasoning [0.0]
A key challenge in artificial intelligence is the creation of systems capable of autonomously advancing scientific understanding.
We present SciAgents, an approach that leverages three core concepts.
The framework autonomously generates and refines research hypotheses, elucidating underlying mechanisms, design principles, and unexpected material properties.
Our case studies demonstrate scalable capabilities to combine generative AI, ontological representations, and multi-agent modeling, harnessing a 'swarm of intelligence' similar to biological systems.
arXiv Detail & Related papers (2024-09-09T12:25:10Z) - Incremental procedural and sensorimotor learning in cognitive humanoid robots [52.77024349608834]
This work presents a cognitive agent that can learn procedures incrementally.
We show the cognitive functions required in each substage and how adding new functions helps address tasks previously unsolved by the agent.
Results show that this approach is capable of solving complex tasks incrementally.
arXiv Detail & Related papers (2023-04-30T22:51:31Z) - Attention: Marginal Probability is All You Need? [0.0]
We propose an alternative Bayesian foundation for attentional mechanisms.
We show how this unifies different attentional architectures in machine learning.
We hope this work will guide more sophisticated intuitions into the key properties of attention architectures.
arXiv Detail & Related papers (2023-04-07T14:38:39Z) - Modular Deep Learning [120.36599591042908]
Transfer learning has recently become the dominant paradigm of machine learning.
It remains unclear how to develop models that specialise towards multiple tasks without incurring negative interference.
Modular deep learning has emerged as a promising solution to these challenges.
arXiv Detail & Related papers (2023-02-22T18:11:25Z) - Is a Modular Architecture Enough? [80.32451720642209]
We provide a thorough assessment of common modular architectures, through the lens of simple and known modular data distributions.
We highlight the benefits of modularity and sparsity and reveal insights on the challenges faced while optimizing modular systems.
arXiv Detail & Related papers (2022-06-06T16:12:06Z) - Fast and Slow Learning of Recurrent Independent Mechanisms [80.38910637873066]
We propose a training framework in which the pieces of knowledge an agent needs and its reward function are stationary and can be re-used across tasks.
An attention mechanism dynamically selects which modules can be adapted to the current task.
We find that meta-learning the modular aspects of the proposed system greatly helps in achieving faster adaptation in a reinforcement learning setup.
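The summary above describes an attention mechanism that dynamically selects which modules adapt to the current task. A minimal, hypothetical sketch of such a selector follows; it assumes each module exposes a learned key vector, scores keys against a task query with scaled dot-product attention, and keeps only the top-k modules active. The function name, the use of module keys, and the top-k rule are illustrative assumptions, not the paper's exact mechanism.

```python
import numpy as np

def select_modules(module_keys, task_query, top_k=2):
    """Softly score modules against a task, then activate the top-k.

    module_keys: (n_modules, d) array of per-module key vectors.
    task_query:  (d,) vector describing the current task.
    Returns the sorted indices of active modules and the attention weights.
    """
    keys = np.asarray(module_keys, dtype=float)
    query = np.asarray(task_query, dtype=float)
    scores = keys @ query / np.sqrt(len(query))   # scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax over modules
    active = np.argsort(weights)[::-1][:top_k]    # indices of top-k modules
    return sorted(active.tolist()), weights

# Toy example: module keys specialized for two task features.
keys = [[1.0, 0.0],   # module 0: responds to feature A
        [0.0, 1.0],   # module 1: responds to feature B
        [1.0, 1.0]]   # module 2: responds to both
active, weights = select_modules(keys, task_query=[1.0, 0.0])
# Modules 0 and 2 match the query; module 1 stays inactive.
assert active == [0, 2]
```

Restricting adaptation to the selected modules is what makes the "fast" learning fast: only a small, relevant subset of parameters is updated per task, which the paper finds reduces interference between tasks.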
arXiv Detail & Related papers (2021-05-18T17:50:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.