Multi-Scale Temporal Homeostasis Enables Efficient and Robust Neural Networks
- URL: http://arxiv.org/abs/2602.07009v1
- Date: Fri, 30 Jan 2026 13:33:29 GMT
- Title: Multi-Scale Temporal Homeostasis Enables Efficient and Robust Neural Networks
- Authors: MD Azizul Hakim
- Abstract summary: Multi-Scale Temporal Homeostasis (MSTH) is a framework that integrates ultra-fast (5 ms), fast (2 s), medium (5 min) and slow (1 hr) regulation into artificial networks. MSTH consistently improves accuracy, eliminates catastrophic failures and enhances recovery from perturbations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Artificial neural networks achieve strong performance on benchmark tasks but remain fundamentally brittle under perturbations, limiting their deployment in real-world settings. In contrast, biological nervous systems sustain reliable function across decades through homeostatic regulation coordinated across multiple temporal scales. Inspired by this principle, this paper presents Multi-Scale Temporal Homeostasis (MSTH), a biologically grounded framework that integrates ultra-fast (5 ms), fast (2 s), medium (5 min) and slow (1 hr) regulation into artificial networks. MSTH implements a cross-scale coordination system for artificial neural networks, providing a unified temporal hierarchy that moves beyond superficial biomimicry. The cross-scale coordination enhances computational efficiency through evolutionarily refined optimization mechanisms. Experiments across molecular, graph and image classification benchmarks show that MSTH consistently improves accuracy, eliminates catastrophic failures and enhances recovery from perturbations. Moreover, MSTH outperforms both single-scale bio-inspired models and established state-of-the-art methods, demonstrating generality across diverse domains. These findings establish cross-scale temporal coordination as a core principle for stabilizing artificial neural systems, positioning MSTH as a foundation for building robust, resilient and biologically faithful intelligence.
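The abstract names the four regulation timescales but not the update rule itself. As a purely illustrative sketch, one way to realize multi-scale homeostasis is to track per-unit activity with exponential moving averages (EMAs) at four time constants and let each scale nudge a multiplicative gain toward a shared target rate. The class name, time constants, and learning rates below are our assumptions, not the paper's.

```python
# Purely illustrative sketch; MultiScaleHomeostat, the step-based time
# constants, and the learning rates are all assumptions, not the paper's code.
import numpy as np

class MultiScaleHomeostat:
    def __init__(self, n_units, target=0.1,
                 # Stand-ins for 5-ms / 2-s / 5-min / 1-hr scales at ~1 ms/step.
                 taus=(5, 2_000, 300_000, 3_600_000),
                 lrs=(1e-2, 1e-3, 1e-4, 1e-5)):
        self.target = target
        self.alphas = [1.0 / t for t in taus]            # EMA smoothing factors
        self.lrs = lrs                                   # per-scale correction rates
        self.avgs = [np.full(n_units, float(target)) for _ in taus]
        self.gain = np.ones(n_units)                     # homeostatic gain

    def step(self, activity):
        """Regulate activity, then update traces and gains from the result."""
        out = self.gain * activity
        for i, a in enumerate(self.alphas):
            self.avgs[i] = (1 - a) * self.avgs[i] + a * out
            # Each temporal scale pushes the gain toward the target rate.
            self.gain = self.gain + self.lrs[i] * (self.target - self.avgs[i])
        self.gain = np.clip(self.gain, 0.1, 10.0)
        return out

# Usage: gains settle below 1 to pull an over-active stream toward the target.
h = MultiScaleHomeostat(n_units=4)
rng = np.random.default_rng(0)
for _ in range(2000):
    out = h.step(rng.random(4) * 0.5)
print(h.gain.round(2))
```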
Related papers
- Scalable Spatio-Temporal SE(3) Diffusion for Long-Horizon Protein Dynamics [51.85385061275941]
Molecular dynamics (MD) simulations remain the gold standard for studying protein dynamics. Recent generative models have shown promise in accelerating simulations, yet they struggle with long-horizon generation. We present STAR-MD, a scalable diffusion model that generates physically plausible protein trajectories over micro-scale timescales.
arXiv Detail & Related papers (2026-02-02T14:13:28Z)
- General Self-Prediction Enhancement for Spiking Neurons [71.01912385372577]
Spiking Neural Networks (SNNs) are highly energy-efficient due to event-driven, sparse computation, but their training is challenged by spike non-differentiability and trade-offs among performance, efficiency, and biological plausibility. We propose a self-prediction enhanced spiking neuron method that generates an internal prediction current from its input-output history to modulate membrane potential. This design offers dual advantages: it creates a continuous gradient path that alleviates vanishing gradients and boosts training stability and accuracy, while also aligning with biological principles, resembling distal dendritic modulation and error-driven synaptic plasticity.
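As a hedged sketch of the self-prediction idea (not the authors' code), the toy leaky integrate-and-fire neuron below builds a prediction current from an EMA of its own input history and adds it to the membrane update; the gains and decay constants are invented for illustration.

```python
# Hedged sketch, not the authors' code: pred_lr and pred_gain are invented.
import numpy as np

def lif_with_self_prediction(inputs, v_th=1.0, decay=0.9,
                             pred_lr=0.1, pred_gain=0.5):
    v, pred, spikes = 0.0, 0.0, []
    for x in inputs:
        # Internal prediction current: EMA of the neuron's recent drive.
        pred = (1 - pred_lr) * pred + pred_lr * x
        # Membrane update modulated by the prediction current.
        v = decay * v + x + pred_gain * pred
        s = float(v >= v_th)
        v = v * (1.0 - s)                      # hard reset on spike
        spikes.append(s)
    return spikes

rng = np.random.default_rng(0)
print(sum(lif_with_self_prediction(np.abs(rng.standard_normal(100)) * 0.3)))
```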
arXiv Detail & Related papers (2026-01-29T15:08:48Z)
- Sleep-Based Homeostatic Regularization for Stabilizing Spike-Timing-Dependent Plasticity in Recurrent Spiking Neural Networks [14.487258585834374]
Spike-timing-dependent plasticity (STDP) provides a biologically-plausible learning mechanism for spiking neural networks (SNNs). We propose a neuromorphic regularization scheme inspired by the synaptic homeostasis hypothesis: periodic offline phases during which external inputs are suppressed, synaptic weights undergo decay toward a homeostatic baseline, and spontaneous activity enables memory consolidation.
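A minimal sketch of the sleep-phase idea, assuming "decay toward a homeostatic baseline" means a periodic pull of every weight toward a fixed set point; the baseline and decay rate below are placeholders, not values from the paper.

```python
# Minimal sketch; baseline and decay are placeholders, not the paper's values.
import numpy as np

def sleep_phase(weights, baseline=0.1, decay=0.05):
    """One offline phase: external input suppressed, weights relax toward baseline."""
    return weights + decay * (baseline - weights)

rng = np.random.default_rng(0)
W = rng.random((5, 5)) * 2.0        # weights inflated during the "wake" phase
for _ in range(20):                 # repeated sleep cycles
    W = sleep_phase(W)
print(W.mean())                     # drifts toward the 0.1 baseline
```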
arXiv Detail & Related papers (2026-01-13T11:17:30Z)
- Unleashing Temporal Capacity of Spiking Neural Networks through Spatiotemporal Separation [67.69345363409835]
Spiking Neural Networks (SNNs) are considered naturally suited for temporal processing, with membrane potential propagation widely regarded as the core temporal modeling mechanism. We design Non-Stateful (NS) models that progressively remove membrane propagation to isolate its stage-wise role. Experiments reveal a counterintuitive phenomenon: moderate removal in shallow layers improves performance, while excessive removal causes collapse.
arXiv Detail & Related papers (2025-12-05T07:05:53Z)
- Fractional neural attention for efficient multiscale sequence processing [0.0]
We introduce Fractional Neural Attention (FNA), a principled framework for multiscale information processing. FNA models token interactions through Lévy diffusion governed by the fractional Laplacian. FNA achieves competitive text-classification performance even with a single layer and a single head.
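The fractional Laplacian couples positions i and j with a heavy-tailed weight on the order of |i-j|^-(1+alpha), which is what makes the diffusion Lévy-like. The toy below mixes a 1-D token sequence with that power-law kernel; FNA's learned attention is far richer, so treat this only as an illustration of the kernel shape.

```python
# Toy power-law mixing; levy_mix and alpha are ours, not FNA's API.
import numpy as np

def levy_mix(tokens, alpha=1.5):
    n = len(tokens)
    idx = np.arange(n)
    dist = np.abs(idx[:, None] - idx[None, :]) + 1.0    # +1 avoids division by zero
    K = dist ** -(1.0 + alpha)                          # heavy-tailed Levy kernel
    K = K / K.sum(axis=1, keepdims=True)                # row-normalize like attention
    return K @ tokens

print(levy_mix(np.eye(6)[0]).round(3))   # impulse spreads with heavy tails
```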
arXiv Detail & Related papers (2025-11-13T11:27:39Z)
- PHASE-Net: Physics-Grounded Harmonic Attention System for Efficient Remote Photoplethysmography Measurement [63.007237197267834]
Existing deep learning methods for remote physiological monitoring lack theoretical robustness. We propose a physics-informed rPPG paradigm derived from the Navier-Stokes equations of hemodynamics, showing that the pulse signal follows a second-order system. This provides a theoretical justification for using a Temporal Convolutional Network (TCN). PHASE-Net achieves state-of-the-art performance with strong efficiency, offering a theoretically grounded and deployment-ready rPPG solution.
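For readers unfamiliar with TCNs, the sketch below implements the causal dilated convolution a TCN stacks at increasing dilation; it is a generic building block, not PHASE-Net's architecture.

```python
# Generic causal dilated convolution, the TCN building block; not PHASE-Net code.
import numpy as np

def causal_conv1d(x, kernel, dilation=1):
    """Output at t depends only on x[t], x[t-d], ..., x[t-(k-1)d]."""
    k = len(kernel)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])     # left-pad so output stays causal
    return np.array([np.dot(kernel, xp[t:t + pad + 1:dilation])
                     for t in range(len(x))])

x = np.zeros(10)
x[3] = 1.0                                      # unit impulse at t = 3
print(causal_conv1d(x, np.array([0.25, 0.75]), dilation=2))
```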
arXiv Detail & Related papers (2025-09-29T14:36:45Z)
- Fractional Spike Differential Equations Neural Network with Efficient Adjoint Parameters Training [63.3991315762955]
Spiking Neural Networks (SNNs) draw inspiration from biological neurons to create realistic models for brain-like computation. Most existing SNNs assume a single time constant for neuronal membrane voltage dynamics, modeled by first-order ordinary differential equations (ODEs) with Markovian characteristics. We propose the Fractional SPIKE Differential Equation neural network (fspikeDE), which captures long-term dependencies in membrane voltage and spike trains through fractional-order dynamics.
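A hedged illustration of fractional-order membrane dynamics (not fspikeDE's implementation): the Grünwald-Letnikov discretization of D^alpha v = -v/tau + I makes each voltage update a weighted sum over the neuron's entire history, which is exactly the non-Markovian, long-memory property the paper exploits. All constants below are arbitrary.

```python
# Illustrative only: a Grunwald-Letnikov discretization of
# D^alpha v = -v / tau + I; constants are arbitrary, not fspikeDE's.
import numpy as np

def fractional_lif(I, alpha=0.8, tau=5.0, h=0.1, v_th=1.0):
    n = len(I)
    # GL coefficients: c_0 = 1, c_k = c_{k-1} * (1 - (alpha + 1) / k).
    c = np.ones(n)
    for k in range(1, n):
        c[k] = c[k - 1] * (1.0 - (alpha + 1.0) / k)
    v = np.zeros(n)
    spikes = np.zeros(n)
    for t in range(1, n):
        memory = np.dot(c[1:t + 1], v[t - 1::-1])   # weighted full history
        v[t] = h**alpha * (-v[t - 1] / tau + I[t]) - memory
        if v[t] >= v_th:
            spikes[t], v[t] = 1.0, 0.0              # spike, then reset
    return spikes

print(int(fractional_lif(np.full(200, 0.6)).sum()))
```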
arXiv Detail & Related papers (2025-07-22T18:20:56Z)
- Langevin Flows for Modeling Neural Latent Dynamics [81.81271685018284]
We introduce LangevinFlow, a sequential Variational Auto-Encoder where the time evolution of latent variables is governed by the underdamped Langevin equation. Our approach incorporates physical priors -- such as inertia, damping, a learned potential function, and forces -- to represent both autonomous and non-autonomous processes in neural systems. Our method outperforms state-of-the-art baselines on synthetic neural populations generated by a Lorenz attractor.
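The underdamped Langevin equation referenced here is dx = v dt, dv = (-grad U(x) - gamma*v + F) dt + sigma dW. The sketch below integrates it with Euler-Maruyama under a toy quadratic potential; LangevinFlow learns the potential and forces, which this sketch does not.

```python
# Euler-Maruyama step for underdamped Langevin dynamics with a toy
# quadratic potential; LangevinFlow learns the potential, this does not.
import numpy as np

def langevin_step(x, v, grad_U, rng, gamma=0.5, force=0.0,
                  sigma=0.3, dt=0.01):
    noise = sigma * np.sqrt(dt) * rng.standard_normal(x.shape)
    v = v + dt * (-grad_U(x) - gamma * v + force) + noise   # momentum update
    x = x + dt * v                                          # position update
    return x, v

rng = np.random.default_rng(0)
x, v = np.ones(3), np.zeros(3)
for _ in range(1000):
    x, v = langevin_step(x, v, grad_U=lambda x: x, rng=rng)  # U(x) = 0.5*|x|^2
print(x.round(2))   # fluctuates near the potential minimum at the origin
```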
arXiv Detail & Related papers (2025-07-15T17:57:48Z)
- Spiking Neural Networks with Temporal Attention-Guided Adaptive Fusion for imbalanced Multi-modal Learning [32.60363000758323]
We propose a temporal attention-guided adaptive fusion framework for multimodal spiking neural networks (SNNs). The proposed framework implements adaptive fusion, especially in the temporal dimension, and alleviates modality imbalance during multimodal learning. The system resolves temporal misalignment through learnable time-warping operations and coordinates modality convergence faster than baseline SNNs.
arXiv Detail & Related papers (2025-05-20T15:55:11Z)
- SinBasis Networks: Matrix-Equivalent Feature Extraction for Wave-Like Optical Spectrograms [8.37266944852829]
We propose a unified, matrix-equivalent framework that reinterprets convolution and attention as linear transforms on flattened inputs. Embedding these transforms into CNN, ViT and Capsule architectures yields Sin-Basis Networks with heightened sensitivity to periodic motifs.
arXiv Detail & Related papers (2025-05-06T16:16:42Z)
- Learning with Spike Synchrony in Spiking Neural Networks [3.8506283985103447]
Spiking neural networks (SNNs) promise energy-efficient computation by mimicking biological neural dynamics. We introduce spike-synchrony-dependent plasticity (SSDP), a training approach that adjusts synaptic weights based on the degree of synchronous neural firing rather than precise spike timing.
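A hedged sketch of a synchrony-gated update in the spirit of SSDP (the paper's exact rule may differ): a population-synchrony statistic scales a Hebbian co-firing term, so weights grow fastest when many units fire in the same time step. The synchrony statistic and all constants below are our own choices.

```python
# Hedged sketch; the synchrony statistic and gating are our choices, not SSDP's.
import numpy as np

def ssdp_update(W, pre_spikes, post_spikes, lr=0.05, w_max=1.0):
    """pre_spikes: (T, n_pre), post_spikes: (T, n_post), binary; W: (n_post, n_pre)."""
    frac = np.concatenate([pre_spikes, post_spikes], axis=1).mean(axis=1)
    sync = float((frac ** 2).mean())             # high when many units co-fire per step
    co_fire = post_spikes.T @ pre_spikes / len(pre_spikes)   # pairwise co-firing rate
    W = W + lr * sync * co_fire * (w_max - W)    # synchrony-gated, soft-bounded growth
    return np.clip(W, 0.0, w_max)

rng = np.random.default_rng(1)
pre = (rng.random((50, 8)) < 0.2).astype(float)
post = (rng.random((50, 4)) < 0.2).astype(float)
print(ssdp_update(np.zeros((4, 8)), pre, post).max())
```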
arXiv Detail & Related papers (2025-04-14T04:01:40Z)
- Spiking Neural Networks with Consistent Mapping Relations Allow High-Accuracy Inference [9.667807887916132]
Spike-based neuromorphic hardware has demonstrated substantial potential in low energy consumption and efficient inference.
Direct training of deep spiking neural networks is challenging, and conversion-based methods still require substantial time delay owing to unresolved conversion errors.
arXiv Detail & Related papers (2024-06-08T06:40:00Z)