Connectome-Guided Automatic Learning Rates for Deep Networks
- URL: http://arxiv.org/abs/2510.23781v1
- Date: Mon, 27 Oct 2025 19:11:49 GMT
- Title: Connectome-Guided Automatic Learning Rates for Deep Networks
- Authors: Peilin He, Tananun Songdechakraiwut
- Abstract summary: We propose the Connectome-Guided Automatic Learning Rate (CG-ALR), which constructs a functional connectome of the network during training and adjusts learning rates online as this connectome reconfigures. This connectomics-inspired mechanism adapts step sizes to the network's dynamic functional organization, slowing learning during unstable reconfiguration and accelerating it when stable organization emerges. Our results demonstrate that principles inspired by brain connectomes can inform the design of adaptive learning rates in deep learning, with CG-ALR generally outperforming traditional SGD-based schedules and recent methods.
- Score: 3.2228025627337864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The human brain is highly adaptive: its functional connectivity reconfigures on multiple timescales during cognition and learning, enabling flexible information processing. By contrast, artificial neural networks typically rely on manually tuned learning-rate schedules or generic adaptive optimizers whose hyperparameters remain largely agnostic to a model's internal dynamics. In this paper, we propose the Connectome-Guided Automatic Learning Rate (CG-ALR), which dynamically constructs a functional connectome of the neural network from neuron co-activations at each training iteration and adjusts learning rates online as this connectome reconfigures. This connectomics-inspired mechanism adapts step sizes to the network's dynamic functional organization, slowing learning during unstable reconfiguration and accelerating it when stable organization emerges. Our results demonstrate that principles inspired by brain connectomes can inform the design of adaptive learning rates in deep learning, with CG-ALR generally outperforming traditional SGD-based schedules and recent methods.
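The paper's implementation is not reproduced here, but the mechanism the abstract describes can be sketched. Below is a minimal, illustrative PyTorch sketch assuming the connectome is a neuron-by-neuron correlation matrix and that "reconfiguration" is measured as the relative change between consecutive connectomes; all names (`functional_connectome`, `ConnectomeGuidedLR`, the `sensitivity` knob) are hypothetical, not the authors' API.

```python
import torch

def functional_connectome(acts: torch.Tensor) -> torch.Tensor:
    """Neuron-by-neuron correlation matrix from a (batch, neurons) activation matrix."""
    a = acts.detach()
    a = a - a.mean(dim=0, keepdim=True)
    a = a / (a.norm(dim=0, keepdim=True) + 1e-8)
    return a.T @ a

class ConnectomeGuidedLR:
    """Illustrative schedule: shrink the step size while the connectome is
    reconfiguring rapidly, let it recover as the organization stabilizes."""

    def __init__(self, base_lr: float, sensitivity: float = 5.0):
        self.base_lr, self.sensitivity = base_lr, sensitivity
        self.prev = None

    def step(self, acts: torch.Tensor) -> float:
        c = functional_connectome(acts)
        if self.prev is None:
            self.prev = c
            return self.base_lr
        # Relative change between consecutive connectomes as an instability signal.
        drift = ((c - self.prev).norm() / (self.prev.norm() + 1e-8)).item()
        self.prev = c
        return self.base_lr / (1.0 + self.sensitivity * drift)
```

In a training loop, one would capture a hidden layer's activations with a forward hook each iteration, call `step`, and copy the returned value into each `param_group["lr"]` of the optimizer.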
Related papers
- Data-Efficient Neural Training with Dynamic Connectomes [1.2260914111581283]
We introduce a novel approach to characterize training dynamics in neural networks by representing evolving neural activations as functional connectomes. Our results show that these signatures effectively capture key transitions in the functional organization of the network.
arXiv Detail & Related papers (2025-08-09T04:32:23Z)
- Improving Neural Network Training using Dynamic Learning Rate Schedule for PINNs and Image Classification [0.0]
This paper presents a dynamic learning rate scheduler (DLRS) algorithm that adapts the learning rate based on the loss values calculated during the training process. Experiments are conducted on problems related to physics-informed neural networks (PINNs) and image classification using multilayer perceptrons and convolutional neural networks, respectively.
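As a loose illustration of a loss-driven schedule of this kind (not the authors' DLRS; the window size and multiplicative factors below are arbitrary assumptions):

```python
from collections import deque

class LossTrendLR:
    """Hypothetical loss-driven schedule: raise the learning rate while the
    recent loss trend falls, decay it when the trend stalls or rises."""

    def __init__(self, base_lr: float, window: int = 10,
                 up: float = 1.05, down: float = 0.7):
        self.lr = base_lr
        self.losses = deque(maxlen=window)
        self.up, self.down = up, down

    def step(self, loss: float) -> float:
        self.losses.append(loss)
        if len(self.losses) == self.losses.maxlen:
            half = len(self.losses) // 2
            older = sum(list(self.losses)[:half])
            newer = sum(list(self.losses)[half:])
            self.lr *= self.up if newer < older else self.down
        return self.lr
```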
arXiv Detail & Related papers (2025-07-29T12:31:21Z)
- Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptive learning in neural networks [0.0]
The brain rapidly adapts to new contexts and learns from limited data, a coveted characteristic that artificial intelligence (AI) algorithms struggle to mimic. We developed a learning paradigm utilizing link strength oscillations, where learning is associated with the coordination of these oscillations. Link oscillations can rapidly change coordination, allowing the network to sense and adapt to subtle contextual changes without supervision.
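A toy reading of the oscillating-link idea, under heavy assumptions: each link carries a phase, effective weights wax and wane with that phase, and co-active links are nudged toward a common rhythm. Every name and constant below is hypothetical, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
W = 0.3 * rng.normal(size=(n, n))            # base link strengths
phase = rng.uniform(0, 2 * np.pi, (n, n))    # one oscillation phase per link
omega, k = 0.1, 0.05                         # phase drift rate, coupling strength

def step(x):
    """Effective weights oscillate around W; phases drift at omega and are
    pulled toward the network's mean phase in proportion to co-activity,
    so coordinated rhythms emerge where activity is correlated."""
    global phase
    w_eff = W * (1.0 + 0.5 * np.sin(phase))
    y = np.tanh(w_eff @ x)
    mean_phase = np.angle(np.exp(1j * phase).mean())
    phase += omega + k * np.outer(y, x) * np.sin(mean_phase - phase)
    return y
```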
arXiv Detail & Related papers (2025-02-12T18:58:34Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Learning Fast and Slow for Online Time Series Forecasting [76.50127663309604]
Fast and Slow learning Networks (FSNet) is a holistic framework for online time-series forecasting.
FSNet balances fast adaptation to recent changes and retrieving similar old knowledge.
Our code will be made publicly available.
arXiv Detail & Related papers (2022-02-23T18:23:07Z)
- Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target; the control signal is then used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
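The control-loop intuition can be caricatured in a few lines for a single linear layer: integrate the output error until the controlled output reaches the target, then update the weights from purely local quantities. This is a sketch of the intuition only, not the multi-layer DFC algorithm or its Gauss-Newton guarantees; all names and constants are assumptions.

```python
import numpy as np

def dfc_step(W, x, target, eta=0.1, gain=0.5, iters=50):
    """Toy single-layer sketch of feedback-control credit assignment:
    a controller integrates the output error until the controlled output
    reaches the target; the weight update is then purely local --
    post-synaptic control signal times pre-synaptic activity."""
    u = np.zeros_like(target)           # control signal
    for _ in range(iters):
        y = W @ x + u                   # output nudged by the controller
        u += gain * (target - y)        # integral feedback on the error
    return W + eta * np.outer(u, x)     # local, Hebbian-like update
```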
arXiv Detail & Related papers (2021-06-15T05:30:17Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Sparse Meta Networks for Sequential Adaptation and its Application to Adaptive Language Modelling [7.859988850911321]
We introduce Sparse Meta Networks -- a meta-learning approach to learn online sequential adaptation algorithms for deep neural networks.
We augment a deep neural network with a layer-specific fast-weight memory.
We demonstrate strong performance on a variety of sequential adaptation scenarios.
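Layer-specific fast weights can be sketched as a second, rapidly decaying weight matrix written on the fly by the layer's own activity; the decay and write constants below are illustrative assumptions, not the paper's parameterization.

```python
import numpy as np

class FastWeightLayer:
    """Toy layer with slow, trained weights plus a fast, episodic memory:
    the fast weights decay each step and are rewritten from the layer's
    own input/output activity via a Hebbian outer product."""

    def __init__(self, n_in, n_out, decay=0.9, write=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.normal(size=(n_out, n_in))  # slow weights
        self.F = np.zeros((n_out, n_in))               # fast-weight memory
        self.decay, self.write = decay, write

    def __call__(self, x):
        y = np.tanh((self.W + self.F) @ x)
        self.F = self.decay * self.F + self.write * np.outer(y, x)
        return y
```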
arXiv Detail & Related papers (2020-09-03T17:06:52Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
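The conversion idea rests on rate coding: an integrate-and-fire neuron with a soft reset fires at a rate that approximates the source ANN's ReLU output. A minimal NumPy sketch of that correspondence follows (thresholds, coding scheme, and constants are assumptions; the paper's layer-wise learning stage is not shown):

```python
import numpy as np

def if_layer(W, spikes_in, v, threshold=1.0):
    """Integrate-and-fire layer with soft reset: membrane potentials
    accumulate weighted input spikes; crossing the threshold emits a
    spike and subtracts the threshold, preserving residual charge."""
    v = v + W @ spikes_in
    out = (v >= threshold).astype(float)
    return out, v - out * threshold

rng = np.random.default_rng(1)
W = 0.5 * rng.normal(size=(4, 3))
x = rng.uniform(size=3)                 # analog ANN input, read as firing rates
v, counts, T = np.zeros(4), np.zeros(4), 200
for _ in range(T):
    spikes_in = (rng.uniform(size=3) < x).astype(float)  # Bernoulli rate coding
    out, v = if_layer(W, spikes_in, v)
    counts += out
rate = counts / T  # approximates np.maximum(W @ x, 0), capped at 1 spike/step
```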
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Adaptive Reinforcement Learning through Evolving Self-Modifying Neural Networks [0.0]
Current methods in Reinforcement Learning (RL) only adjust to new interactions after reflection over a specified time interval.
Recent work addresses this by endowing artificial neural networks with neuromodulated plasticity, which has been shown to improve performance on simple RL tasks trained using backpropagation.
Here we study the problem of meta-learning in a challenging quadruped domain, where each leg of the quadruped has a chance of becoming unusable.
Results demonstrate that agents evolved using self-modifying plastic networks are more capable of adapting to complex meta-learning tasks, even outperforming the same network updated using gradient descent.
arXiv Detail & Related papers (2020-05-22T02:24:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.