A Unified Cortical Circuit Model with Divisive Normalization and Self-Excitation for Robust Representation and Memory Maintenance
- URL: http://arxiv.org/abs/2508.12702v1
- Date: Mon, 18 Aug 2025 08:00:24 GMT
- Title: A Unified Cortical Circuit Model with Divisive Normalization and Self-Excitation for Robust Representation and Memory Maintenance
- Authors: Jie Su, Weiwei Wang, Zhaotian Gu, Dahui Wang, Tianyi Qian
- Abstract summary: We introduce a recurrent neural circuit that combines divisive normalization with self-excitation to achieve robust encoding. We demonstrate the model's versatility in two canonical tasks. This work establishes a unified mathematical framework that bridges noise suppression, working memory, and approximate Bayesian inference.
- Score: 2.705743343600661
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Robust information representation and its persistent maintenance are fundamental for higher cognitive functions. Existing models employ distinct neural mechanisms to separately address noise-resistant processing or information maintenance, yet a unified framework integrating both operations remains elusive -- a critical gap in understanding cortical computation. Here, we introduce a recurrent neural circuit that combines divisive normalization with self-excitation to achieve both robust encoding and stable retention of normalized inputs. Mathematical analysis shows that, for suitable parameter regimes, the system forms a continuous attractor with two key properties: (1) input-proportional stabilization during stimulus presentation; and (2) self-sustained memory states persisting after stimulus offset. We demonstrate the model's versatility in two canonical tasks: (a) noise-robust encoding in a random-dot kinematogram (RDK) paradigm; and (b) approximate Bayesian belief updating in a probabilistic Wisconsin Card Sorting Test (pWCST). This work establishes a unified mathematical framework that bridges noise suppression, working memory, and approximate Bayesian inference within a single cortical microcircuit, offering fresh insights into the brain's canonical computation and guiding the design of biologically plausible artificial neural architectures.
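The abstract does not reproduce the model's equations, but the interplay of the two mechanisms can be illustrated with a minimal rate model. The sketch below is an assumption, not the paper's actual formulation: each unit's drive (input plus self-excitation, `alpha * r`) is divided by the pooled population activity, and with these particular parameters the post-offset dynamics rescale the population uniformly, so the memory state converges to the input normalized to unit sum.

```python
import numpy as np

def simulate_circuit(I, alpha=2.0, sigma=1.0, tau=1.0, dt=0.05, steps=4000):
    """Euler-integrate an illustrative rate network combining divisive
    normalization with self-excitation (hypothetical form, not the
    paper's exact equations):

        tau * dr_i/dt = -r_i + (I_i + alpha * r_i) / (sigma^2 + sum_j r_j)

    Returns the steady state during the stimulus and the memory state
    after stimulus offset (I set to zero).
    """
    r = np.full_like(I, 0.01, dtype=float)
    for _ in range(steps):  # stimulus on: input-proportional stabilization
        r += dt / tau * (-r + (I + alpha * r) / (sigma**2 + r.sum()))
    r_stim = r.copy()
    for _ in range(steps):  # stimulus off: self-sustained memory state
        r += dt / tau * (-r + alpha * r / (sigma**2 + r.sum()))
    return r_stim, r

I = np.array([5.0, 3.0, 2.0])
r_on, r_mem = simulate_circuit(I)
# After offset the total activity settles at alpha - sigma^2 = 1, and the
# ratios r_i / sum(r) are preserved, so r_mem is roughly [0.5, 0.3, 0.2]:
# the circuit retains the normalized input, mirroring the abstract's two
# claimed properties (input-proportional stabilization and persistence).
```

In this toy form the fixed points after offset satisfy `sigma**2 + sum(r) == alpha`, a one-dimensional continuum of states, which is one simple way a continuous attractor can store a normalized pattern after the stimulus is removed.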
Related papers
- Distilling Formal Logic into Neural Spaces: A Kernel Alignment Approach for Signal Temporal Logic [6.419602857618508]
We introduce a framework for learning continuous neural representations of formal specifications. We distill a symbolic robustness kernel into a Transformer encoder. The encoder produces embeddings in a single forward pass, effectively mimicking the kernel's logic at a fraction of its computational cost.
arXiv Detail & Related papers (2026-03-05T14:08:25Z) - Knob: A Physics-Inspired Gating Interface for Interpretable and Controllable Neural Dynamics [7.965536008626047]
Knob is a framework that connects deep learning with classical control theory. Our framework allows operators to tune "stability" and "sensitivity" through familiar physical analogues.
arXiv Detail & Related papers (2026-02-26T07:25:22Z) - On the Mechanism and Dynamics of Modular Addition: Fourier Features, Lottery Ticket, and Grokking [49.1352577985191]
We present a comprehensive analysis of how two-layer neural networks learn features to solve the modular addition task. Our work provides a full mechanistic interpretation of the learned model and a theoretical explanation of its training dynamics.
arXiv Detail & Related papers (2026-02-18T20:25:13Z) - Noise & pattern: identity-anchored Tikhonov regularization for robust structural anomaly detection [58.535473924035365]
Anomaly detection plays a pivotal role in automated industrial inspection, aiming to identify subtle or rare defects in otherwise uniform visual patterns. We tackle structural anomaly detection using a self-supervised autoencoder that learns to repair corrupted inputs. We introduce a corruption model that injects artificial disruptions into training images to mimic structural defects.
arXiv Detail & Related papers (2025-11-10T15:48:50Z) - A Neuroscience-Inspired Dual-Process Model of Compositional Generalization [12.494200165412186]
We propose Mirage, a neuro-inspired dual-process model. It combines a fast, intuitive "System 1" (a meta-trained Transformer) with a deliberate, rule-based "System 2" engine. Mirage achieves >99% accuracy on all splits of the SCAN benchmark in a task-agnostic setting.
arXiv Detail & Related papers (2025-07-25T01:02:07Z) - Cycle-Consistent Helmholtz Machine: Goal-Seeded Simulation via Inverted Inference [5.234742752529437]
We introduce the Cycle-Consistent Helmholtz Machine (C²HM). C²HM reframes inference as a goal-seeded, asymmetric process grounded in structured internal priors. By offering a biologically inspired alternative to classical amortized inference, C²HM reconceives generative modeling as intentional simulation.
arXiv Detail & Related papers (2025-07-03T17:24:27Z) - Feature Integration Spaces: Joint Training Reveals Dual Encoding in Neural Network Representations [0.0]
Current sparse autoencoder (SAE) approaches to neural network interpretability assume that activations can be decomposed through linear superposition into sparse, interpretable features. We propose that neural networks encode information in two complementary spaces compressed into the same substrate: feature identity and feature integration. Joint training achieves a 41.3% reconstruction improvement and a 51.6% reduction in KL divergence errors.
arXiv Detail & Related papers (2025-06-30T21:26:58Z) - Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of allostasis to the control of internal representations. In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - A Novel Framework for Learning Stochastic Representations for Sequence Generation and Recognition [0.0]
The ability to generate and recognize sequential data is fundamental for autonomous systems operating in dynamic environments. We propose a novel Recurrent Neural Network with Parametric Biases (RNNPB). Our approach provides a framework for modeling temporal patterns and advances the development of robust systems in artificial intelligence and robotics.
arXiv Detail & Related papers (2024-12-30T07:27:50Z) - Discrete, compositional, and symbolic representations through attractor dynamics [51.20712945239422]
We introduce a novel neural systems model that integrates attractor dynamics with symbolic representations to model cognitive processes akin to the probabilistic language of thought (PLoT).
Our model segments the continuous representational space into discrete basins, with attractor states corresponding to symbolic sequences that reflect the semanticity and compositionality characteristic of symbolic systems, learned through unsupervised learning rather than relying on pre-defined primitives.
This approach establishes a unified framework that integrates symbolic and sub-symbolic processing through neural dynamics, a neuroplausible substrate with proven expressivity in AI, offering a more comprehensive model that mirrors the complex duality of cognitive operations.
arXiv Detail & Related papers (2023-10-03T05:40:56Z) - Hybrid Predictive Coding: Inferring, Fast and Slow [62.997667081978825]
We propose a hybrid predictive coding network that combines both iterative and amortized inference in a principled manner.
We demonstrate that our model is inherently sensitive to its uncertainty and adaptively balances iterative and amortized inference to obtain accurate beliefs at minimal computational expense.
arXiv Detail & Related papers (2022-04-05T12:52:45Z) - Any-to-Many Voice Conversion with Location-Relative Sequence-to-Sequence Modeling [61.351967629600594]
This paper proposes an any-to-many location-relative, sequence-to-sequence (seq2seq), non-parallel voice conversion approach.
In this approach, we combine a bottle-neck feature extractor (BNE) with a seq2seq synthesis module.
Objective and subjective evaluations show that the proposed any-to-many approach has superior voice conversion performance in terms of both naturalness and speaker similarity.
arXiv Detail & Related papers (2020-09-06T13:01:06Z) - Learning Noise-Aware Encoder-Decoder from Noisy Labels by Alternating Back-Propagation for Saliency Detection [54.98042023365694]
We propose a noise-aware encoder-decoder framework to disentangle a clean saliency predictor from noisy training examples.
The proposed model consists of two sub-models parameterized by neural networks.
arXiv Detail & Related papers (2020-07-23T18:47:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.