Freezing chaos without synaptic plasticity
- URL: http://arxiv.org/abs/2503.08069v1
- Date: Tue, 11 Mar 2025 06:04:22 GMT
- Title: Freezing chaos without synaptic plasticity
- Authors: Weizhong Huang, Haiping Huang
- Abstract summary: A strong chaotic fluctuation may be harmful to information processing. Here, we introduce another distinct way without synaptic plasticity. The gradient dynamics are also useful for computational tasks such as recalling or predicting external time-dependent stimuli.
- Score: 2.5690340428649328
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Chaos is ubiquitous in high-dimensional neural dynamics. A strong chaotic fluctuation may be harmful to information processing. A traditional way to mitigate this issue is to introduce Hebbian plasticity, which can stabilize the dynamics. Here, we introduce another distinct way without synaptic plasticity. An Onsager reaction term due to the feedback of the neuron itself is added to the vanilla recurrent dynamics, making the driving force a gradient form. The original unstable fixed points supporting the chaotic fluctuation can then be approached by further decreasing the kinetic energy of the dynamics. We show that this freezing effect also holds in more biologically realistic networks, such as those composed of excitatory and inhibitory neurons. The gradient dynamics are also useful for computational tasks such as recalling or predicting external time-dependent stimuli.
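As a rough numerical illustration of the mechanism sketched in the abstract, the Python snippet below contrasts a standard chaotic rate network with gradient dynamics that descend the kinetic energy of the driving force. This is one plausible reading of the abstract, not the paper's actual equations: the rate model dx/dt = -x + J tanh(x), the energy E(x) = ||dx/dt||^2 / 2, and all parameter values are assumptions made for illustration.

```python
import numpy as np

# Minimal sketch (not the paper's exact equations): compare the vanilla
# rate dynamics dx/dt = F(x) = -x + J @ tanh(x) with gradient descent on the
# kinetic energy E(x) = 0.5 * ||F(x)||^2. The self-feedback entering through
# the diagonal of the Jacobian of F plays the role of an Onsager-like
# reaction term; all parameter values below are illustrative.

rng = np.random.default_rng(0)
N, g = 200, 1.5                      # g > 1: vanilla network is chaotic
dt, steps = 0.05, 4000
J = g * rng.standard_normal((N, N)) / np.sqrt(N)

def F(x):
    """Driving force of the vanilla recurrent rate model."""
    return -x + J @ np.tanh(x)

def grad_E(x):
    """Gradient of the kinetic energy E(x) = 0.5 * ||F(x)||^2."""
    jac = -np.eye(N) + J * (1.0 - np.tanh(x) ** 2)   # dF_i/dx_j
    return jac.T @ F(x)

x_vanilla = rng.standard_normal(N)
x_frozen = x_vanilla.copy()
for _ in range(steps):
    x_vanilla += dt * F(x_vanilla)      # chaotic fluctuations persist
    x_frozen -= dt * grad_E(x_frozen)   # kinetic energy is driven down

print("|F(x)| vanilla :", np.linalg.norm(F(x_vanilla)))   # typically stays O(1)
print("|F(x)| gradient:", np.linalg.norm(F(x_frozen)))    # shrinks towards a fixed point
```

In this reading, minimizing the kinetic energy drives the state towards the (originally unstable) fixed points where F(x) = 0, which is the freezing effect the abstract describes.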
Related papers
- Allostatic Control of Persistent States in Spiking Neural Networks for perception and computation [79.16635054977068]
We introduce a novel model for updating perceptual beliefs about the environment by extending the concept of Allostasis to the control of internal representations.
In this paper, we focus on an application in numerical cognition, where a bump of activity in an attractor network is used as a spatial numerical representation.
arXiv Detail & Related papers (2025-03-20T12:28:08Z) - Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
It has long been known in both neuroscience and AI that "binding" between neurons leads to a form of competitive learning. We introduce Artificial Kuramoto Oscillatory Neurons, which can be combined with arbitrary connectivity designs such as fully connected, convolutional, or attentive mechanisms. We show that this idea provides performance improvements across a wide spectrum of tasks such as unsupervised object discovery, adversarial robustness, uncertainty quantification, and reasoning.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - An optimization-based equilibrium measure describes non-equilibrium steady state dynamics: application to edge of chaos [2.5690340428649328]
Understanding neural dynamics is a central topic in machine learning, non-linear physics and neuroscience.
The dynamics are non-linear and, in particular, non-gradient, i.e., the driving force cannot be written as the gradient of a potential.
arXiv Detail & Related papers (2024-01-18T14:25:32Z) - Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct experiments with autonomous lane-keeping through a photorealistic autonomous driving simulator to evaluate their performance under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - Smooth Exact Gradient Descent Learning in Spiking Neural Networks [0.0]
We demonstrate exact gradient descent based on continuously changing spiking dynamics. These are generated by neuron models whose spikes vanish and appear at the end of a trial, where they cannot influence subsequent dynamics.
arXiv Detail & Related papers (2023-09-25T20:51:00Z) - Theory of coupled neuronal-synaptic dynamics [3.626013617212667]
In neural circuits, synaptic strengths influence neuronal activity by shaping network dynamics.
We study a recurrent-network model in which neuronal units and synaptic couplings are interacting dynamic variables.
We show that adding Hebbian plasticity slows activity in chaotic networks and can induce chaos.
arXiv Detail & Related papers (2023-02-17T16:42:59Z) - Dynamics with autoregressive neural quantum states: application to critical quench dynamics [41.94295877935867]
We present an alternative general scheme that enables one to capture long-time dynamics of quantum systems in a stable fashion.
We apply the scheme to time-dependent quench dynamics by investigating the Kibble-Zurek mechanism in the two-dimensional quantum Ising model.
arXiv Detail & Related papers (2022-09-07T15:50:00Z) - Reminiscence of classical chaos in driven transmons [117.851325578242]
We show that even off-resonant drives can cause strong modifications to the structure of the transmon spectrum, rendering a large part of it chaotic.
These results lead to a photon-number threshold characterizing the appearance of chaos-induced quantum demolition effects.
arXiv Detail & Related papers (2022-07-19T16:04:46Z) - Self-organized criticality in neural networks [0.0]
We show that the learning dynamics of neural networks are generically attracted towards a self-organized critical state.
Our results support the claim that the universe might be a neural network.
arXiv Detail & Related papers (2021-07-07T18:00:03Z) - Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z) - Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.