What Neuroscience Can Teach AI About Learning in Continuously Changing Environments
- URL: http://arxiv.org/abs/2507.02103v1
- Date: Wed, 02 Jul 2025 19:30:57 GMT
- Title: What Neuroscience Can Teach AI About Learning in Continuously Changing Environments
- Authors: Daniel Durstewitz, Bruno Averbeck, Georgia Koppe
- Abstract summary: Animals constantly adapt to the ever-changing contingencies in their environments. Can AI learn from neuroscience? This Perspective explores this question, integrating the literature on continual and in-context learning in AI with the neuroscience of learning on behavioral tasks with shifting rules, reward probabilities, or outcomes.
- Score: 6.3111399332851414
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern AI models, such as large language models, are usually trained once on a huge corpus of data, potentially fine-tuned for a specific task, and then deployed with fixed parameters. Their training is costly, slow, and gradual, requiring billions of repetitions. In stark contrast, animals continuously adapt to the ever-changing contingencies in their environments. This is particularly important for social species, where behavioral policies and reward outcomes may frequently change in interaction with peers. The underlying computational processes are often marked by rapid shifts in an animal's behaviour and rather sudden transitions in neuronal population activity. Such computational capacities are of growing importance for AI systems operating in the real world, like those guiding robots or autonomous vehicles, or for agentic AI interacting with humans online. Can AI learn from neuroscience? This Perspective explores this question, integrating the literature on continual and in-context learning in AI with the neuroscience of learning on behavioral tasks with shifting rules, reward probabilities, or outcomes. We will outline an agenda for how specifically insights from neuroscience may inform current developments in AI in this area, and - vice versa - what neuroscience may learn from AI, contributing to the evolving field of NeuroAI.
Related papers
- Neural Brain: A Neuroscience-inspired Framework for Embodied Agents [58.58177409853298]
Current AI systems, such as large language models, remain disembodied, unable to physically engage with the world. At the core of this challenge lies the concept of the Neural Brain, a central intelligence system designed to drive embodied agents with human-like adaptability. This paper introduces a unified framework for the Neural Brain of embodied agents, addressing two fundamental challenges.
arXiv Detail & Related papers (2025-05-12T15:05:34Z)
- Semi-parametric Memory Consolidation: Towards Brain-like Deep Continual Learning [59.35015431695172]
We propose a novel biomimetic continual learning framework that integrates semi-parametric memory and the wake-sleep consolidation mechanism. For the first time, our method enables deep neural networks to retain high performance on novel tasks while maintaining prior knowledge in challenging real-world continual learning scenarios.
arXiv Detail & Related papers (2025-04-20T19:53:13Z)
- NeuroChat: A Neuroadaptive AI Chatbot for Customizing Learning Experiences [20.413397262021064]
NeuroChat is a proof-of-concept neuroadaptive AI tutor that integrates real-time EEG-based engagement tracking with generative AI. Results indicate that NeuroChat enhances cognitive and subjective engagement but does not show an immediate effect on learning outcomes.
arXiv Detail & Related papers (2025-03-10T17:57:20Z)
- Artificial Intelligence without Restriction Surpassing Human Intelligence with Probability One: Theoretical Insight into Secrets of the Brain with AI Twins of the Brain [20.956507640605093]
One fundamental question which would affect human sustainability remains open: Will artificial intelligence (AI) evolve to surpass human intelligence in the future? This paper shows that, in theory, new AI twins built with cellular-level AI techniques for neuroscience could approximate the brain and its functioning systems. This paper indirectly proves the validity of the conjecture made by Frank Rosenblatt 70 years ago about the potential capabilities of AI.
arXiv Detail & Related papers (2024-12-04T13:17:44Z)
- NeuroAI for AI Safety [2.0243003325958253]
Humans are the only known agents capable of general intelligence. Neuroscience may hold important keys to technical AI safety that are currently underexplored and underutilized. We highlight and critically evaluate several paths toward AI safety inspired by neuroscience.
arXiv Detail & Related papers (2024-11-27T17:18:51Z)
- Bridging Neuroscience and AI: Environmental Enrichment as a Model for Forward Knowledge Transfer [0.0]
We suggest that environmental enrichment (EE) can be used as a biological model for studying forward transfer. EE refers to animal studies that enhance cognitive, social, motor, and sensory stimulation. We discuss how artificial neural networks (ANNs) can be used to predict neural changes after enriched experiences.
arXiv Detail & Related papers (2024-05-12T14:33:50Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently solves continual learning tasks for spiking neural networks with nearly zero forgetting.
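The principal-subspace claim above has a classic concrete instance: Oja's Hebbian rule, under which a single linear neuron's weight vector converges to the top principal eigenvector of its inputs. Below is a minimal numpy sketch of that textbook rule (an illustrative assumption, not the authors' spiking implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 2-D data with one dominant principal direction.
X = rng.normal(size=(5000, 2)) @ np.array([[2.0, 1.5], [0.0, 0.5]])

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for x in X:
    y = w @ x                   # neuron output (Hebbian pre * post)
    w += eta * y * (x - y * w)  # Oja's rule: Hebbian term with weight decay

# w converges (up to sign) to the top eigenvector of the data covariance.
top = np.linalg.eigh(np.cov(X.T))[1][:, -1]
print(abs(w @ top))
```

The decay term `- y * w` is what keeps the weight norm bounded; anti-Hebbian interactions between neurons (as in the paper's lateral connections) generalize this from one component to a whole subspace.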
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Generative AI-based closed-loop fMRI system [0.41942958779358674]
DecNefGAN is a novel framework that combines a generative adversarial system and a neural reinforcement model.
It can contribute to elucidating how the human brain responds to and counteracts the potential influence of generative AI.
arXiv Detail & Related papers (2024-01-30T04:40:49Z)
- World Models and Predictive Coding for Cognitive and Developmental Robotics: Frontiers and Challenges [51.92834011423463]
We focus on the two concepts of world models and predictive coding.
In neuroscience, predictive coding proposes that the brain continuously predicts its inputs and adapts to model its own dynamics and control behavior in its environment.
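The predictive-coding idea summarized above (continuously predicting inputs and updating from the prediction error) can be sketched in a few lines. This scalar toy example is an illustrative assumption, not taken from the paper; it shows an internal estimate re-adapting after a sudden environmental shift:

```python
import numpy as np

rng = np.random.default_rng(1)

mu = 0.0       # internal prediction of the sensory input
lr = 0.1       # gain on the prediction error
errors = []
signal = 0.0
for t in range(2000):
    if t == 1000:
        signal = 5.0              # sudden environmental shift
    x = signal + rng.normal(scale=0.5)
    err = x - mu                  # prediction error
    mu += lr * err                # error-driven update of the prediction
    errors.append(abs(err))

# The error spikes at the shift and then decays as mu re-adapts.
print(round(mu, 2))
```

The same error-minimization loop, stacked hierarchically over latent states, is the core of predictive-coding models of cortex.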
arXiv Detail & Related papers (2023-01-14T06:38:14Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
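As a rough illustration of the weight-variability idea summarized above (the noise scale and the regression task are assumptions, not the paper's setup), transient Gaussian noise can perturb the weights used to compute the gradient while the clean weights receive the update:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear-regression task.
X = rng.normal(size=(200, 10))
true_w = np.ones(10)
y = X @ true_w

w = np.zeros(10)
lr, noise_scale = 0.01, 0.05
for _ in range(1000):
    n = noise_scale * rng.normal(size=10)
    w_noisy = w + n                         # transient "neural variability"
    grad = X.T @ (X @ w_noisy - y) / len(X) # gradient at the perturbed weights
    w -= lr * grad                          # clean weights take the update

print(np.round(np.mean((w - true_w) ** 2), 4))
```

Averaged over the noise, this penalizes sharp minima, which is one intuition for why such variability can reduce overfitting and memorization.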
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Future Trends for Human-AI Collaboration: A Comprehensive Taxonomy of AI/AGI Using Multiple Intelligences and Learning Styles [95.58955174499371]
We describe various aspects of multiple human intelligences and learning styles, which may impact a variety of AI problem domains.
Future AI systems will be able not only to communicate with human users and each other, but also to efficiently exchange knowledge and wisdom.
arXiv Detail & Related papers (2020-08-07T21:00:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.