On Computational Modeling of Sleep-Wake Cycle
- URL: http://arxiv.org/abs/2404.05484v2
- Date: Fri, 17 May 2024 17:18:28 GMT
- Title: On Computational Modeling of Sleep-Wake Cycle
- Authors: Xin Li
- Abstract summary: Neuroscience treats sleep and wake as default and perturbation modes of the brain.
It is hypothesized that the brain self-organizes neural activities without environmental inputs.
This paper presents a new computational model of the sleep-wake cycle for learning and memory.
- Score: 5.234742752529437
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Why do mammals need to sleep? Neuroscience treats sleep and wake as default and perturbation modes of the brain. It is hypothesized that the brain self-organizes neural activities without environmental inputs. This paper presents a new computational model of the sleep-wake cycle (SWC) for learning and memory. During the sleep mode, memory consolidation by the thalamocortical system is abstracted by a disentangling operator that maps context-dependent representations (CDR) to context-independent representations (CIR) for generalization. Such a disentangling operator can be mathematically formalized as an integral transform that integrates out the context variable from the CDR. During the wake mode, memory formation by the hippocampal-neocortical system is abstracted by an entangling operator from CIR to CDR, where the context is introduced by physical motion. When designed as an inductive bias, the entangled CDR linearizes the problem of unsupervised learning for sensory memory by direct-fit. The concatenation of the disentangling and entangling operators forms a disentangling-entangling cycle (DEC), the building block for sensorimotor learning. We also discuss how DEC and SWC relate to the perception-action cycle (PAC) for internal model learning and to perceptual control theory for the ecological origin of natural languages.
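The disentangling operator as an integral transform can be made concrete with a toy example. Below is a minimal numerical sketch, assuming context enters as a circular shift (a stand-in for physical motion) and using the Fourier magnitude spectrum as an integral transform that removes the shift; the function names and the shift model are illustrative assumptions, not the paper's formalism.

```python
import numpy as np

rng = np.random.default_rng(0)

def entangle(cir, context):
    # Entangling operator (wake mode): physical motion introduces a
    # context variable, modeled here as a circular shift of the code.
    return np.roll(cir, context)

def disentangle(cdr):
    # Disentangling operator (sleep mode): an integral transform that
    # integrates out the context. The Fourier magnitude spectrum is
    # invariant to circular shifts, so the shift context disappears.
    return np.abs(np.fft.fft(cdr))

cir = rng.normal(size=16)          # context-independent representation
cdr_a = entangle(cir, 3)           # same content, two different contexts
cdr_b = entangle(cir, 11)

# Both context-dependent representations map back to one CIR: the
# disentangling-entangling cycle (DEC) closes.
assert np.allclose(disentangle(cdr_a), disentangle(cdr_b))
```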
Related papers
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
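As a hedged illustration of that numerical-cognition setting, the sketch below places a Gaussian bump of activity on a ring of neurons at a position proportional to the represented number; the ring size and bump width are arbitrary choices for illustration, not the paper's parameters.

```python
import numpy as np

def bump_representation(number, n_neurons=64, width=3.0, max_number=10):
    # A bump of activity on a ring of neurons used as a spatial
    # numerical representation: the bump's location encodes the number.
    center = number / max_number * n_neurons
    idx = np.arange(n_neurons)
    # Circular distance, so the representation lives on a ring.
    dist = np.minimum(np.abs(idx - center), n_neurons - np.abs(idx - center))
    return np.exp(-dist**2 / (2 * width**2))

rates = bump_representation(4)   # activity profile encoding "4"
print(int(np.argmax(rates)))     # decoded bump location
```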
arXiv Detail & Related papers (2025-03-20T12:28:08Z)
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z)
- An Overlooked Role of Context-Sensitive Dendrites [2.225268436173329]
We show that context-sensitive two-point neurons (CS-TPNs) flexibly integrate contextual input (C) moment-by-moment with the feedforward (FF) current at the soma.
This enables the propagation of more coherent signals (bursts), making learning faster with fewer neurons.
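A minimal sketch of the two-point idea, assuming a multiplicative apical gate; the specific transfer function is an assumption for illustration, not the paper's circuit model.

```python
import numpy as np

def cs_tpn(ff, c):
    # Two-point integration: the context input C at the apical site
    # multiplicatively modulates the feedforward (FF) somatic drive,
    # so coherent C and FF yield amplified (burst-like) output while
    # conflicting context leaves output near the FF baseline.
    return ff * (1.0 + np.maximum(0.0, np.tanh(c)))

print(cs_tpn(1.0, 1.0))   # coherent context -> amplified output
print(cs_tpn(1.0, -1.0))  # conflicting context -> baseline output
```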
arXiv Detail & Related papers (2024-08-20T17:18:54Z)
- CREIMBO: Cross-Regional Ensemble Interactions in Multi-view Brain Observations [3.3713037259290255]
Current analysis methods often fail to harness the richness of such data.
CREIMBO identifies the hidden composition of per-session neural ensembles through graph-driven dictionary learning.
We demonstrate CREIMBO's ability to recover true components in synthetic data.
arXiv Detail & Related papers (2024-05-27T17:48:32Z)
- Interpretable Spatio-Temporal Embedding for Brain Structural-Effective Network with Ordinary Differential Equation [56.34634121544929]
In this study, we first construct the brain-effective network via the dynamic causal model.
We then introduce an interpretable graph learning framework termed Spatio-Temporal Embedding ODE (STE-ODE).
This framework incorporates specifically designed directed node embedding layers, aiming at capturing the dynamic interplay between structural and effective networks.
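A hedged sketch of the directed-dynamics idea behind such a framework: node states evolving under a directed effective-connectivity matrix, integrated with simple Euler steps. The matrix values and the integration scheme are illustrative assumptions, not the STE-ODE architecture.

```python
import numpy as np

# Directed effective connectivity between three regions: A[i, j] is
# the influence of region j on region i; the negative diagonal gives
# each region self-decay toward rest.
A = np.array([[-0.5,  0.3,  0.0],
              [ 0.0, -0.5,  0.4],
              [ 0.2,  0.0, -0.5]])

x = np.array([1.0, 0.0, 0.0])   # initial node states
dt = 0.01
for _ in range(500):            # Euler integration of dx/dt = A @ x
    x = x + dt * (A @ x)
print(x)
```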
arXiv Detail & Related papers (2024-05-21T20:37:07Z)
- Inferring Inference [7.11780383076327]
We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
arXiv Detail & Related papers (2023-10-04T22:12:11Z)
- Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z)
- Cross-Frequency Coupling Increases Memory Capacity in Oscillatory Neural Networks [69.42260428921436]
Cross-frequency coupling (CFC) is associated with information integration across populations of neurons.
We construct a model of CFC which predicts a computational role for observed $\theta$-$\gamma$ oscillatory circuits in the hippocampus and cortex.
We show that the presence of CFC increases the memory capacity of a population of neurons connected by plastic synapses.
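For intuition about the phase-amplitude coupling such a model targets, the sketch below builds a signal whose gamma amplitude follows the theta phase; the 6 Hz and 60 Hz frequencies are assumed illustrative values, not the paper's parameters.

```python
import numpy as np

fs = 1000.0                           # sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)     # ~6 Hz theta rhythm
gamma = np.sin(2 * np.pi * 60 * t)    # ~60 Hz gamma rhythm

# Cross-frequency (phase-amplitude) coupling: the gamma envelope is
# largest at the theta peak and vanishes at the theta trough.
coupled = 0.5 * (1 + theta) * gamma
```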
arXiv Detail & Related papers (2022-04-05T17:13:36Z)
- A probabilistic latent variable model for detecting structure in binary data [0.6767885381740952]
We introduce a novel, probabilistic binary latent variable model to detect noisy or approximate repeats of patterns in sparse binary data.
The model's capability is demonstrated by extracting structure in recordings from retinal neurons.
We apply our model to spiking responses recorded in retinal ganglion cells during stimulation with a movie.
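A hedged generative sketch of the setting: sparse binary data built as a noisy OR of a few repeating binary motifs, the kind of structure such a latent variable model is meant to recover. The sizes and rates below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_cells, n_motifs = 200, 30, 3

motifs = rng.random((n_motifs, n_cells)) < 0.2   # sparse binary patterns
z = rng.random((n_samples, n_motifs)) < 0.1      # latent motif occurrences
noise = rng.random((n_samples, n_cells)) < 0.02  # spontaneous spikes

# Noisy OR: a cell fires if any active motif includes it, or by noise.
data = (z.astype(int) @ motifs.astype(int) > 0) | noise
```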
arXiv Detail & Related papers (2022-01-26T18:37:35Z)
- Dynamic Inference with Neural Interpreters [72.90231306252007]
We present Neural Interpreters, an architecture that factorizes inference in a self-attention network as a system of modules.
Inputs to the model are routed through a sequence of functions in a way that is learned end-to-end.
We show that Neural Interpreters perform on par with the vision transformer while using fewer parameters, and transfer to a new task in a sample-efficient manner.
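A hedged sketch of the routing idea: inputs are scored against a set of function signatures, and a softmax over the scores decides how much each function processes the input. The scoring and mixing rules here are simplifications for illustration, not the Neural Interpreters architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_funcs = 8, 3

signatures = rng.normal(size=(n_funcs, d))         # one per function
weights = [rng.normal(size=(d, d)) for _ in range(n_funcs)]

def route(x):
    # Soft routing: compatibility scores between the input and each
    # function's signature become mixing weights over the functions.
    scores = signatures @ x
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return sum(p[i] * (weights[i] @ x) for i in range(n_funcs))

y = route(rng.normal(size=d))
```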
arXiv Detail & Related papers (2021-10-12T23:22:45Z)
- From internal models toward metacognitive AI [0.0]
In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
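The entropy criterion can be stated in a few lines; the sketch below assumes responsibility signals are a softmax over per-pair fit scores, which is an illustrative choice rather than the paper's definition.

```python
import numpy as np

def responsibility_entropy(fit_scores):
    # Responsibility signals: how well each generative-inverse model
    # pair captures the external world, normalized across pairs.
    p = np.exp(fit_scores - np.max(fit_scores))
    p /= p.sum()
    # Low entropy: one pair clearly dominates (high responsibility);
    # high entropy: no pair stands out.
    return -np.sum(p * np.log(p + 1e-12))

print(responsibility_entropy(np.array([4.0, 0.1, 0.2])))  # low
print(responsibility_entropy(np.array([1.0, 1.0, 1.0])))  # high (ln 3)
```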
arXiv Detail & Related papers (2021-09-27T05:00:56Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
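A minimal SCM sketch of mechanisms plus exogenous noise, with an intervention do(X = 1) that replaces one mechanism while keeping the rest; the linear mechanisms are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Exogenous sources of random variation.
u_x = rng.normal(size=n)
u_y = rng.normal(size=n)

# Mechanisms: X := U_X,  Y := 2 X + U_Y.
x = u_x
y = 2.0 * x + u_y

# Intervention do(X = 1): replace X's mechanism while keeping Y's
# mechanism and the same exogenous noise.
x_do = np.ones(n)
y_do = 2.0 * x_do + u_y
print(y.mean(), y_do.mean())   # observational vs interventional mean
```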
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.