Cognition without neurons: modelling anticipation in a basal reservoir computer
- URL: http://arxiv.org/abs/2505.02114v1
- Date: Sun, 04 May 2025 13:53:45 GMT
- Title: Cognition without neurons: modelling anticipation in a basal reservoir computer
- Authors: Polyphony Bruna, Linnéa Gyllingberg
- Abstract summary: We present a minimal, biologically inspired reservoir model that demonstrates simple temporal anticipation without neurons, spikes, or trained readouts. Results show that simple homeodynamic regulation can support unsupervised prediction, suggesting a pathway to memory and anticipation in basal organisms.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How do non-neural organisms, such as the slime mould \textit{Physarum polycephalum}, anticipate periodic events in their environment? We present a minimal, biologically inspired reservoir model that demonstrates simple temporal anticipation without neurons, spikes, or trained readouts. The model consists of a spatially embedded hexagonal network in which nodes regulate their energy through local, allostatic adaptation. Input perturbations shape energy dynamics over time, allowing the system to internalize temporal regularities into its structure. After being exposed to a periodic input signal, the model spontaneously re-enacts those dynamics even in the absence of further input -- a form of unsupervised temporal pattern completion. This behaviour emerges from internal homeodynamic regulation, without supervised learning or symbolic processing. Our results show that simple homeodynamic regulation can support unsupervised prediction, suggesting a pathway to memory and anticipation in basal organisms.
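The abstract's mechanism can be sketched in code. The following is a minimal illustrative toy, not the authors' model: a ring of nodes stands in for the paper's hexagonal lattice, each node diffuses energy to its neighbours and is pulled toward a homeostatic set point, and a slowly drifting per-node gain plays the role of allostatic adaptation that stores the periodic input's structure in the network. All parameter values and the update rule are assumptions for illustration.

```python
import numpy as np

def simulate(steps, period, n_nodes=12, setpoint=1.0,
             alpha=0.1, beta=0.05, drive=0.5):
    """Drive node 0 with a periodic pulse; return the energy trace over time.

    Ring topology is a simplified stand-in for the paper's hexagonal lattice.
    """
    energy = np.full(n_nodes, setpoint)   # per-node energy state
    gain = np.ones(n_nodes)               # slowly adapting gain (allostatic variable)
    trace = []
    for t in range(steps):
        # Periodic external perturbation at node 0.
        inp = np.zeros(n_nodes)
        if t % period == 0:
            inp[0] = drive
        # Diffusion of energy to ring neighbours (discrete Laplacian).
        lap = np.roll(energy, 1) + np.roll(energy, -1) - 2 * energy
        # Homeostatic pull toward the set point, scaled by the adapted gain.
        energy = energy + beta * lap + alpha * gain * (setpoint - energy) + inp
        # Allostatic adaptation: gains drift with recent deviation, slowly
        # internalising the input's temporal structure into the network.
        gain += 0.01 * (energy - setpoint)
        trace.append(energy.copy())
    return np.array(trace)

trace = simulate(steps=200, period=20)
print(trace.shape)  # → (200, 12)
```

After the driven phase, one would probe such a model by setting `drive=0` and checking whether the energy dynamics continue to echo the training period; the paper reports exactly this kind of unsupervised temporal pattern completion in its (richer) model.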
Related papers
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
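The underdamped Langevin dynamics mentioned in the summary can be simulated with a standard Euler-Maruyama step. The potential gradient, constants, and dimensions below are placeholder assumptions for illustration, not the paper's learned model:

```python
import numpy as np

def langevin_step(z, v, dt=0.01, gamma=0.5, temp=0.1, grad_U=lambda z: z):
    """One Euler-Maruyama step of the underdamped Langevin equation:
    dz = v dt;  dv = (-grad U(z) - gamma v) dt + sqrt(2 gamma T dt) * noise.
    Default grad_U is a harmonic potential, an arbitrary illustrative choice.
    """
    noise = np.sqrt(2 * gamma * temp * dt) * np.random.randn(*np.shape(v))
    v_new = v + dt * (-grad_U(z) - gamma * v) + noise
    z_new = z + dt * v_new
    return z_new, v_new

# Roll the dynamics forward from rest in a 2-D latent space.
z, v = np.zeros(2), np.zeros(2)
for _ in range(500):
    z, v = langevin_step(z, v)
print(z)
```

In LangevinFlow the potential is learned rather than fixed, and the step sits inside a sequential VAE; this sketch only shows the physical prior itself.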
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- Enhancing Revivals Via Projective Measurements in a Quantum Scarred System [51.3422222472898]
Quantum many-body scarred systems exhibit atypical dynamical behavior, evading thermalization and featuring periodic state revivals. We investigate the impact of projective measurements on the dynamics in the scar subspace for the paradigmatic PXP model. We identify a measurement-induced phase resynchronization, countering the natural dephasing of quantum scars, as the key mechanism underlying this phenomenon.
arXiv Detail & Related papers (2025-03-28T17:03:14Z)
- Meta-Representational Predictive Coding: Biomimetic Self-Supervised Learning [51.22185316175418]
We present a new form of predictive coding that we call meta-representational predictive coding (MPC). MPC sidesteps the need for learning a generative model of sensory input by learning to predict representations of sensory input across parallel streams.
arXiv Detail & Related papers (2025-03-22T22:13:14Z)
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of allostasis to the control of internal representations. In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Confidence Regulation Neurons in Language Models [91.90337752432075]
This study investigates the mechanisms by which large language models represent and regulate uncertainty in next-token predictions.
Entropy neurons are characterized by an unusually high weight norm and influence the final layer normalization (LayerNorm) scale to effectively scale down the logits.
Token frequency neurons, which we describe here for the first time, boost or suppress each token's logit proportionally to its log frequency, thereby shifting the output distribution towards or away from the unigram distribution.
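The token frequency mechanism described above can be demonstrated numerically: adding a multiple of each token's log unigram frequency to the logits shifts the softmax output toward the unigram distribution. The vocabulary, frequencies, and logits below are invented for illustration:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax."""
    e = np.exp(x - x.max())
    return e / e.sum()

unigram = np.array([0.7, 0.2, 0.1])   # assumed corpus token frequencies
logits = np.array([0.0, 1.0, 2.0])    # hypothetical next-token logits

base = softmax(logits)
# Boosting each logit by a multiple of its log frequency pulls probability
# mass toward frequent tokens; a negative multiple would push it away.
boosted = softmax(logits + 2.0 * np.log(unigram))
print(base, boosted)
```

Here the most frequent token gains probability mass in `boosted` relative to `base`, which is the direction-of-shift effect the summary attributes to these neurons.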
arXiv Detail & Related papers (2024-06-24T01:31:03Z)
- Persistent learning signals and working memory without continuous attractors [6.135577623169029]
We show that quasi-periodic attractors can support learning arbitrarily long temporal relationships.
Our theory has broad implications for the design of artificial learning systems.
arXiv Detail & Related papers (2023-08-24T06:12:41Z)
- Latent Equilibrium: A unified learning theory for arbitrarily fast computation with arbitrarily slow neurons [0.7340017786387767]
We introduce Latent Equilibrium, a new framework for inference and learning in networks of slow components.
We derive disentangled neuron and synapse dynamics from a prospective energy function.
We show how our principle can be applied to detailed models of cortical microcircuitry.
arXiv Detail & Related papers (2021-10-27T16:15:55Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
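The dynamic-equilibrium idea in the MPATH summary can be sketched with a single leaky neuron whose firing threshold adapts upward after each spike and relaxes otherwise, so firing activity settles toward a stable rate. The dynamics and constants here are illustrative assumptions, not the published model:

```python
def run(inputs, tau_v=0.9, tau_th=0.99, th0=1.0):
    """Leaky neuron with a homeostatically adapting threshold.

    Returns a 0/1 spike train. The threshold rises after spikes and
    decays back toward th0 otherwise, regulating activity.
    """
    v, th, spikes = 0.0, th0, []
    for x in inputs:
        v = tau_v * v + x                 # leaky membrane potential
        fired = v > th
        if fired:
            v = 0.0                       # reset after a spike
        # Homeostasis: threshold target is raised when the neuron fires.
        th = tau_th * th + (1 - tau_th) * (th0 + (2.0 if fired else 0.0))
        spikes.append(int(fired))
    return spikes

spikes = run([0.5] * 200)
print(sum(spikes))
```

With constant drive, the adapting threshold keeps the neuron firing at an intermediate, regulated rate rather than saturating, which is the qualitative behaviour the summary describes.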
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- The principles of adaptation in organisms and machines II: Thermodynamics of the Bayesian brain [0.0]
The article reviews how organisms learn and recognize the world through the dynamics of neural networks from the perspective of Bayesian inference.
We then introduce a thermodynamic view on this process based on the laws for the entropy of neural activity.
arXiv Detail & Related papers (2020-06-23T16:57:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information shown (including all listed details) and is not responsible for any consequences of its use.