Heterogeneous Time Constants Improve Stability in Equilibrium Propagation
- URL: http://arxiv.org/abs/2603.03402v1
- Date: Tue, 03 Mar 2026 14:35:38 GMT
- Title: Heterogeneous Time Constants Improve Stability in Equilibrium Propagation
- Authors: Yoshimasa Kubo, Suhani Pragnesh Modi, Smit Patel
- Abstract summary: We introduce heterogeneous time steps (HTS) for equilibrium propagation. We show that HTS improves training stability while maintaining competitive task performance. These results suggest that incorporating heterogeneous temporal dynamics enhances both the biological realism and robustness of equilibrium propagation.
- Score: 0.669087470775851
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Equilibrium propagation (EP) is a biologically plausible alternative to backpropagation for training neural networks. However, existing EP models use a uniform scalar time step dt, whereas the biological quantity it corresponds to, the membrane time constant, is heterogeneous across neurons. Here, we introduce heterogeneous time steps (HTS) for EP by assigning neuron-specific time constants drawn from biologically motivated distributions. We show that HTS improves training stability while maintaining competitive task performance. These results suggest that incorporating heterogeneous temporal dynamics enhances both the biological realism and robustness of equilibrium propagation.
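To make the mechanism concrete, here is a minimal NumPy sketch of an EP relaxation phase with per-neuron time steps. The layer sizes, the tanh rate function, and the log-normal distribution for dt are illustrative assumptions, not details taken from the paper, which only specifies "biologically motivated distributions".

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 784, 128, 10
W_in = rng.normal(0, 0.05, (n_hidden, n_in))
W_rec = rng.normal(0, 0.05, (n_hidden, n_hidden))
W_out = rng.normal(0, 0.05, (n_out, n_hidden))

# Neuron-specific time steps: one dt per hidden neuron, drawn here
# from a log-normal distribution (an illustrative choice).
dt = rng.lognormal(np.log(0.05), 0.3, n_hidden)

def relax(x, beta=0.0, target=None, steps=100):
    """Relax the hidden state toward an equilibrium of leaky rate
    dynamics. Standard EP uses one scalar dt; HTS gives each neuron
    its own step size."""
    s = np.zeros(n_hidden)
    for _ in range(steps):
        grad = -s + W_in @ x + W_rec @ np.tanh(s)
        if beta != 0.0:                       # nudged (second) phase
            err = target - W_out @ np.tanh(s)
            grad += beta * (W_out.T @ err)
        s += dt * grad                        # per-neuron integration step
    return s
```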
Related papers
- Multi-Scale Temporal Homeostasis Enables Efficient and Robust Neural Networks [0.0]
Multi-Scale Temporal Homeostasis (MSTH) is a framework that integrates ultra-fast (5 ms), fast (2 s), medium (5 min), and slow (1 hr) regulation into artificial networks. MSTH consistently improves accuracy, eliminates catastrophic failures, and enhances recovery from perturbations.
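A toy sketch of the idea, assuming each regulation loop is an exponential moving average of activity driving a per-neuron gain; the loop structure and the mapping of the four timescales onto 1 ms simulation steps are my reading of the abstract, not the paper's implementation.

```python
import numpy as np

# Time constants for the four loops named above, in 1 ms steps:
# ultra-fast ~5 ms, fast ~2 s, medium ~5 min, slow ~1 hr.
TAUS = {"ultra_fast": 5, "fast": 2_000, "medium": 300_000, "slow": 3_600_000}

class MultiScaleHomeostasis:
    """Toy multi-timescale gain control: each loop keeps an exponential
    moving average of activity and nudges a per-neuron gain toward a
    target rate at its own speed."""

    def __init__(self, n_neurons, target_rate=0.1):
        self.target = target_rate
        self.gain = np.ones(n_neurons)
        self.ema = {name: np.full(n_neurons, target_rate) for name in TAUS}

    def step(self, activity):
        for name, tau in TAUS.items():
            a = 1.0 / tau                      # loop-specific update rate
            self.ema[name] += a * (activity - self.ema[name])
            # Slower loops apply proportionally gentler gain corrections.
            self.gain *= 1.0 + a * (self.target - self.ema[name])
        return self.gain
```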
arXiv Detail & Related papers (2026-01-30T13:33:29Z)
- How Information Evolves: Stability-Driven Assembly and the Emergence of a Natural Genetic Algorithm [1.2691047660244335]
We present Stability-Driven Assembly (SDA), a framework in which hallmark assembly combined with persistence biases populations toward longer-lived motifs. We apply SDA/GA to chemical symbol space using SMILES fragments with recombination, mutation, and a stability function. The results motivate an evolutionary-ladder hypothesis in which persistence-driven selection precedes genetic replication.
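A hedged sketch of the selection loop, with a deliberately crude stand-in for the stability function: the paper's function scores the persistence of assembled chemical structures, whereas here repeated fragments simply count as "stable" so the selection pressure is visible.

```python
import random

random.seed(0)
FRAGMENTS = ["C", "O", "N", "c1ccccc1", "C(=O)O"]   # toy SMILES fragments

def stability(genome):
    # Stand-in persistence score: genomes dominated by recurring
    # motifs read as "longer-lived".
    return len(genome) - len(set(genome))

def mutate(genome):
    g = list(genome)
    g[random.randrange(len(g))] = random.choice(FRAGMENTS)
    return g

def recombine(a, b):
    cut = random.randrange(1, len(a))                # one-point crossover
    return a[:cut] + b[cut:]

def evolve(pop_size=50, length=8, generations=100):
    pop = [[random.choice(FRAGMENTS) for _ in range(length)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=stability, reverse=True)        # persistence bias
        survivors = pop[: pop_size // 2]
        pop = survivors + [
            mutate(recombine(random.choice(survivors), random.choice(survivors)))
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=stability)
```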
arXiv Detail & Related papers (2026-01-22T15:47:48Z)
- Sleep-Based Homeostatic Regularization for Stabilizing Spike-Timing-Dependent Plasticity in Recurrent Spiking Neural Networks [14.487258585834374]
Spike-timing-dependent plasticity (STDP) provides a biologically plausible learning mechanism for spiking neural networks (SNNs). We propose a neuromorphic regularization scheme inspired by the synaptic homeostasis hypothesis: periodic offline phases during which external inputs are suppressed, synaptic weights decay toward a homeostatic baseline, and spontaneous activity enables memory consolidation.
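A minimal sketch of the offline phase as described above, assuming a simple rate network; the decay rate, noise scale, step count, and the weak Hebbian replay term are illustrative assumptions, and the wake-phase STDP updates are omitted.

```python
import numpy as np

def sleep_phase(W, baseline, decay=0.1, steps=100, noise=0.01, rng=None):
    """Offline ("sleep") phase: external input is suppressed, weights
    decay toward a homeostatic baseline, and spontaneous noise-driven
    activity stands in for consolidation."""
    rng = rng or np.random.default_rng(0)
    for _ in range(steps):
        W += decay * (baseline - W)                       # homeostatic decay
        spontaneous = rng.normal(0.0, noise, W.shape[1])  # no external input
        r = np.tanh(W @ spontaneous)                      # spontaneous activity
        W += 1e-3 * np.outer(r, spontaneous)              # weak Hebbian replay (assumption)
    return W
```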
arXiv Detail & Related papers (2026-01-13T11:17:30Z)
- Unleashing Temporal Capacity of Spiking Neural Networks through Spatiotemporal Separation [67.69345363409835]
Spiking Neural Networks (SNNs) are considered naturally suited for temporal processing, with membrane potential propagation widely regarded as the core temporal modeling mechanism. We design Non-Stateful (NS) models that progressively remove membrane propagation to isolate its stage-wise role. Experiments reveal a counterintuitive phenomenon: moderate removal in shallow layers improves performance, while excessive removal causes collapse.
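A sketch of the ablation, assuming a standard LIF layer in PyTorch with a flag that disables membrane propagation; this is a generic reconstruction of the idea, not the authors' code, and it covers the forward pass only (training an SNN would also need surrogate gradients).

```python
import torch

class LIFLayer(torch.nn.Module):
    """LIF layer whose membrane propagation can be switched off,
    mimicking the Non-Stateful (NS) ablation: with stateful=False
    the membrane carries no information between time steps."""

    def __init__(self, n_in, n_out, tau=2.0, stateful=True):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out)
        self.decay = 1.0 - 1.0 / tau       # membrane leak per step
        self.stateful = stateful

    def forward(self, x_seq):              # x_seq: (T, batch, n_in)
        v = torch.zeros(x_seq.shape[1], self.fc.out_features)
        out = []
        for x in x_seq:
            carry = self.decay * v if self.stateful else 0.0
            v = carry + self.fc(x)
            s = (v >= 1.0).float()         # spike when threshold crossed
            v = v * (1.0 - s)              # reset fired neurons
            out.append(s)
        return torch.stack(out)            # (T, batch, n_out)
```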
arXiv Detail & Related papers (2025-12-05T07:05:53Z)
- HetSyn: Versatile Timescale Integration in Spiking Neural Networks via Heterogeneous Synapses [3.744763853474646]
Spiking Neural Networks (SNNs) offer a biologically plausible and energy-efficient framework for temporal information processing. We introduce HetSyn, a framework that models synaptic heterogeneity with synapse-specific time constants. We demonstrate that HetSynLIF improves the performance of SNNs across a variety of tasks.
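A NumPy sketch of synapse-specific time constants; the log-normal distribution and the simple low-pass filtering form are assumptions, since the abstract only specifies that each synapse carries its own time constant.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 64, 32

W = rng.normal(0.0, 0.1, (n_post, n_pre))
# The key idea: a time constant per *synapse* rather than per neuron.
tau_syn = rng.lognormal(np.log(5.0), 0.5, (n_post, n_pre))
decay = np.exp(-1.0 / tau_syn)             # per-synapse decay per step

def synapse_step(trace, pre_spikes):
    """Each synapse low-pass filters its presynaptic spike train with
    its own time constant; the postsynaptic current sums the weighted
    per-synapse traces."""
    trace = decay * trace + pre_spikes[None, :]   # broadcast over rows
    current = np.sum(W * trace, axis=1)           # one current per postsyn neuron
    return current, trace

trace = np.zeros((n_post, n_pre))
current, trace = synapse_step(trace, (rng.random(n_pre) < 0.1).astype(float))
```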
arXiv Detail & Related papers (2025-08-01T10:19:56Z)
- Consistent Sampling and Simulation: Molecular Dynamics with Energy-Based Diffusion Models [50.77646970127369]
We propose an energy-based diffusion model with a Fokker-Planck-derived regularization term to enforce consistency. We demonstrate our approach by sampling and simulating multiple biomolecular systems, including fast-folding proteins.
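Schematically, the consistency being enforced is that the model's marginals obey the Fokker-Planck equation of the forward diffusion; the notation below is generic, not necessarily the paper's.

```latex
% Generic forward diffusion and its Fokker--Planck equation:
%   dx_t = f(x_t, t)\,dt + g(t)\,dW_t
\partial_t p_t(x)
  = -\nabla \cdot \big( f(x,t)\, p_t(x) \big)
  + \tfrac{1}{2}\, g(t)^2 \, \Delta p_t(x)
% With an energy-based model p_t(x) \propto e^{-E_\theta(x,t)}, the
% regularizer penalizes the residual of this PDE evaluated on E_\theta.
```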
arXiv Detail & Related papers (2025-06-20T16:38:29Z)
- Backpropagation through space, time, and the brain [2.10686639478348]
We introduce General Latent Equilibrium (GLE), a computational framework for fully local spatio-temporal credit assignment in physical, dynamical networks of neurons. In particular, GLE exploits the morphology of dendritic trees to enable more complex information storage and processing in single neurons.
arXiv Detail & Related papers (2024-03-25T16:57:02Z)
- Continuous Time Continuous Space Homeostatic Reinforcement Learning (CTCS-HRRL): Towards Biological Self-Autonomous Agent [0.12068041242343093]
Homeostasis is a process by which living beings maintain their internal balance.
The Homeostatic Regulated Reinforcement Learning (HRRL) framework attempts to explain this learned homeostatic behaviour.
In this work, we extend HRRL to a continuous-time, continuous-space environment and validate the resulting CTCS-HRRL framework.
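A minimal sketch of the homeostatic reward used in HRRL-style models, where reward is the reduction of a drive function measuring distance from an internal setpoint; the exact functional form in the paper may differ.

```python
import numpy as np

def drive(h, setpoint, weights):
    """Homeostatic drive: weighted distance of the internal state h
    from its setpoint (a common HRRL form)."""
    return np.sqrt(np.sum(weights * (h - setpoint) ** 2))

def reward(h, h_next, setpoint, weights):
    # Drive reduction: moving the internal state toward the setpoint
    # is rewarding, so reward maximization *is* homeostasis.
    return drive(h, setpoint, weights) - drive(h_next, setpoint, weights)
```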
arXiv Detail & Related papers (2024-01-17T06:29:34Z)
- Unbalanced Diffusion Schrödinger Bridge [71.31485908125435]
We introduce unbalanced DSBs which model the temporal evolution of marginals with arbitrary finite mass.
This is achieved by deriving the time reversal of differential equations with killing and birth terms.
We present two novel algorithmic schemes that provide a scalable objective function for training unbalanced DSBs.
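Schematically, a killing rate k >= 0 in the Fokker-Planck equation lets total probability mass decay (and a birth term, k < 0, lets it grow), which is what "arbitrary finite mass" refers to; the notation is generic, not the paper's.

```latex
% Fokker--Planck equation with a killing rate k(x,t):
\partial_t p_t(x)
  = -\nabla \cdot \big( f\, p_t \big)
  + \tfrac{1}{2}\, g^2 \, \Delta p_t
  - k(x,t)\, p_t(x),
\qquad
\frac{d}{dt} \int p_t(x)\, dx = - \int k(x,t)\, p_t(x)\, dx
% so the total mass is no longer conserved, as the abstract describes.
```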
arXiv Detail & Related papers (2023-06-15T12:51:56Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
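A sketch of one SGLD epoch with without-replacement minibatching: shuffle once, then visit disjoint batches, injecting Gaussian noise with variance 2*lr per step. The grad_fn interface is a placeholder of my own, not the paper's code.

```python
import numpy as np

def sgld_epoch(theta, X, y, grad_fn, lr=1e-3, batch_size=32, rng=None):
    """One epoch of stochastic gradient Langevin dynamics with
    without-replacement minibatching."""
    rng = rng or np.random.default_rng(0)
    idx = rng.permutation(len(X))                  # each sample visited exactly once
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]
        g = grad_fn(theta, X[batch], y[batch])     # minibatch gradient of the loss
        noise = rng.normal(0.0, np.sqrt(2.0 * lr), size=theta.shape)
        theta = theta - lr * g + noise             # Langevin step: drift + injected noise
    return theta
```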
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Equilibrium Propagation with Continual Weight Updates [69.87491240509485]
We propose a learning algorithm that bridges machine learning and neuroscience by computing gradients closely matching those of Backpropagation Through Time (BPTT).
We prove theoretically that, provided the learning rates are sufficiently small, at each time step of the second phase the dynamics of neurons and synapses follow the gradients of the loss given by BPTT.
These results bring EP a step closer to biology by better complying with hardware constraints while maintaining its intimate link with backpropagation.
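A schematic NumPy version of the continual update: during the nudged phase the weights move at every step by the increment of rho(s) rho(s)^T, which telescopes across the phase to the usual end-of-phase EP gradient. The single recurrent matrix, tanh rates, and nudging of all neurons are simplifying assumptions.

```python
import numpy as np

def nudged_phase_continual(s, W, x, target, beta=0.1, eta=1e-3, dt=0.05, steps=50):
    """Second EP phase with continual weight updates: W moves at every
    time step instead of once after convergence."""
    rho_prev = np.tanh(s)
    for _ in range(steps):
        s = s + dt * (-s + W @ np.tanh(s) + x + beta * (target - s))
        rho = np.tanh(s)
        # Increment of the Hebbian outer product; summed over the
        # phase this recovers the standard EP weight gradient.
        W = W + (eta / beta) * (np.outer(rho, rho) - np.outer(rho_prev, rho_prev))
        rho_prev = rho
    return s, W
```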
arXiv Detail & Related papers (2020-04-29T14:54:30Z)
- Continual Weight Updates and Convolutional Architectures for Equilibrium Propagation [69.87491240509485]
Equilibrium Propagation (EP) is a biologically inspired alternative algorithm to backpropagation (BP) for training neural networks.
We propose a discrete-time formulation of EP which simplifies the equations, speeds up training, and extends EP to CNNs.
Our CNN model achieves the best performance reported to date for EP on MNIST.
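A sketch of the discrete-time formulation: the state is iterated as a fixed-point map with no dt hyperparameter. A single dense recurrent matrix stands in for the paper's convolutional architecture, an illustrative simplification.

```python
import numpy as np

def discrete_ep_phase(s, x, W, beta=0.0, target=None, steps=30):
    """Discrete-time EP phase: iterate the state as a fixed-point map
    rather than integrating continuous dynamics with a small dt."""
    for _ in range(steps):
        pre = W @ s + x
        if beta != 0.0:
            pre = pre + beta * (target - s)   # nudged (second) phase
        s = np.tanh(pre)                      # one discrete update, no dt
    return s
```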
arXiv Detail & Related papers (2020-04-29T12:14:06Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.