Data-Efficient Neural Training with Dynamic Connectomes
- URL: http://arxiv.org/abs/2508.06817v1
- Date: Sat, 09 Aug 2025 04:32:23 GMT
- Title: Data-Efficient Neural Training with Dynamic Connectomes
- Authors: Yutong Wu, Peilin He, Tananun Songdechakraiwut
- Abstract summary: We introduce a novel approach to characterize training dynamics in neural networks by representing evolving neural activations as functional connectomes. Our results show that these signatures effectively capture key transitions in the functional organization of the network.
- Score: 1.2260914111581283
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The study of dynamic functional connectomes has provided valuable insights into how patterns of brain activity change over time. Neural networks process information through artificial neurons, conceptually inspired by patterns of activation in the brain. However, their hierarchical structure and high-dimensional parameter space pose challenges for understanding and controlling training dynamics. In this study, we introduce a novel approach to characterize training dynamics in neural networks by representing evolving neural activations as functional connectomes and extracting dynamic signatures of activity throughout training. Our results show that these signatures effectively capture key transitions in the functional organization of the network. Building on this analysis, we propose the use of a time series of functional connectomes as an intrinsic indicator of learning progress, enabling a principled early stopping criterion. Our framework performs robustly across benchmarks and provides new insights into neural network training dynamics.
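As a concrete illustration of the pipeline the abstract describes, here is a minimal sketch, assuming activations are recorded on a fixed probe batch each epoch. The function names are hypothetical, and the Frobenius distance between successive connectomes stands in for the paper's dynamic signatures, which this abstract does not spell out:

```python
import numpy as np

def functional_connectome(activations):
    """Connectome for one training snapshot: correlation matrix over units,
    from an (n_samples, n_units) matrix of activations on a fixed probe batch."""
    return np.corrcoef(activations, rowvar=False)

def should_stop(connectomes, window=5, tol=1e-3):
    """Early stopping from the connectome time series: stop once successive
    connectomes have stabilized over a trailing window of epochs."""
    if len(connectomes) <= window:
        return False
    recent = [np.linalg.norm(connectomes[i] - connectomes[i - 1])
              for i in range(len(connectomes) - window, len(connectomes))]
    return max(recent) < tol

# Per-epoch usage: record activations on the same probe batch, append the
# connectome, and stop once the functional organization stops changing.
connectomes = []
for epoch in range(100):
    acts = np.random.standard_normal((256, 64))  # stand-in for real activations
    connectomes.append(functional_connectome(acts))
    if should_stop(connectomes):
        break
```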
Related papers
- Graph-Based Representation Learning of Neuronal Dynamics and Behavior [2.3859858429583665]
We introduce the Temporal Attention-enhanced Variational Graph Recurrent Neural Network (TAVRNN), a novel framework that models time-varying neuronal connectivity. TAVRNN learns latent dynamics at the single-unit level while maintaining interpretable population-level representations. We validate TAVRNN on three diverse datasets: (1) electrophysiological data from a freely behaving rat, (2) primate somatosensory cortex recordings during a reaching task, and (3) biological neurons in the DishBrain platform interacting with a virtual game environment.
arXiv Detail & Related papers (2024-10-01T13:19:51Z)
- From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [47.13391046553908]
The effectiveness of artificial networks relies on their ability to build task-specific representations. Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature-learning regime, where representations evolve dynamically. These solutions capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regime.
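A toy probe of the lazy/rich distinction (not this paper's exact-solution analysis): train a small two-layer linear network from large and small initializations and measure how far the hidden representation drifts from its initial value. All names, shapes, and scales below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 10))
y = X @ rng.standard_normal((10, 1))

def representation_drift(init_scale, steps=1000, lr=5e-3):
    """Train a two-layer linear net with full-batch gradient descent and
    return the relative drift of the hidden representation X @ W1."""
    W1 = init_scale * rng.standard_normal((10, 32))
    W2 = init_scale * rng.standard_normal((32, 1))
    H0 = X @ W1
    for _ in range(steps):
        H = X @ W1
        err = H @ W2 - y
        gW2 = H.T @ err / len(X)
        gW1 = X.T @ (err @ W2.T) / len(X)
        W1 -= lr * gW1
        W2 -= lr * gW2
    return np.linalg.norm(X @ W1 - H0) / np.linalg.norm(H0)

print("large init (lazier, small drift):", representation_drift(1.0))
print("small init (richer, large drift):", representation_drift(0.01))
```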
arXiv Detail & Related papers (2024-09-22T23:19:04Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- DSAM: A Deep Learning Framework for Analyzing Temporal and Spatial Dynamics in Brain Networks [4.041732967881764]
Most rs-fMRI studies compute a single static functional connectivity matrix across brain regions of interest.
These approaches risk oversimplifying brain dynamics and give insufficient consideration to the goal at hand. We propose a novel interpretable deep learning framework that learns a goal-specific functional connectivity matrix directly from time series.
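For context, a minimal sketch of the static functional connectivity baseline this entry critiques, plus the common sliding-window variant (function names and toy shapes are illustrative; this is not DSAM itself):

```python
import numpy as np

def static_fc(roi_timeseries):
    """Static functional connectivity: Pearson correlation between every
    pair of ROI time series, giving one (n_rois, n_rois) matrix."""
    return np.corrcoef(roi_timeseries)

def windowed_fc(roi_timeseries, window, step):
    """Sliding-window FC, a common dynamic alternative to a single static matrix."""
    n_rois, n_t = roi_timeseries.shape
    return np.stack([np.corrcoef(roi_timeseries[:, s:s + window])
                     for s in range(0, n_t - window + 1, step)])

# Toy example: 10 ROIs, 200 time points.
ts = np.random.default_rng(0).standard_normal((10, 200))
print(static_fc(ts).shape)            # (10, 10)
print(windowed_fc(ts, 50, 25).shape)  # (7, 10, 10)
```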
arXiv Detail & Related papers (2024-05-19T23:35:06Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with, and scalable within, deep learning frameworks.
We show that end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
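One standard way to detect such oscillations (a generic diagnostic, not this paper's analysis) is the power spectrum of the population firing rate; the function name and the binary spike array below are illustrative:

```python
import numpy as np

def population_rate_spectrum(spikes, dt=1e-3):
    """Power spectrum of the mean population rate from a (time, units)
    binary spike array; spectral peaks indicate network-level oscillations."""
    rate = spikes.mean(axis=1)
    rate = rate - rate.mean()
    power = np.abs(np.fft.rfft(rate)) ** 2
    freqs = np.fft.rfftfreq(len(rate), d=dt)
    return freqs, power

# Toy usage: 2 s of activity from 100 units at 1 ms resolution.
spikes = np.random.default_rng(0).random((2000, 100)) < 0.02
freqs, power = population_rate_spectrum(spikes)
print(freqs[np.argmax(power[1:]) + 1])  # dominant oscillation frequency (Hz)
```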
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Understanding Activation Patterns in Artificial Neural Networks by Exploring Stochastic Processes [0.0]
We propose utilizing the framework of stochastic processes, which has been underutilized thus far.
We focus solely on activation frequency, leveraging neuroscience techniques used for real neuron spike trains.
We derive parameters describing activation patterns in each network, revealing consistent differences across architectures and training sets.
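A minimal sketch of the activation-frequency idea (illustrative names and shapes; the grouping into "trials" for a Fano factor, a classic spike-train statistic, is an assumption, not the paper's exact procedure):

```python
import numpy as np

def activation_events(activations, threshold=0.0):
    """Binarize unit activations into events, analogous to spikes
    (a unit counts as active when it exceeds the threshold)."""
    return activations > threshold

def activation_frequency(activations, threshold=0.0):
    """Per-unit rate: fraction of inputs on which each unit is active."""
    return activation_events(activations, threshold).mean(axis=0)

# Toy usage: 1000 inputs, 128 units, grouped into 20 "trials" of 50 inputs
# to compute a per-unit Fano factor (variance/mean of event counts).
acts = np.random.default_rng(0).standard_normal((1000, 128))
counts = activation_events(acts).reshape(20, 50, 128).sum(axis=1)
fano = counts.var(axis=0) / counts.mean(axis=0)
print(activation_frequency(acts).mean(), fano.mean())
```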
arXiv Detail & Related papers (2023-08-01T22:12:30Z)
- Learning low-dimensional dynamics from whole-brain data improves task capture [2.82277518679026]
We introduce a novel approach to learning low-dimensional approximations of neural dynamics by using a sequential variational autoencoder (SVAE).
Our method finds smooth dynamics that can predict cognitive processes with higher accuracy than classical methods.
We evaluate our approach on various task-fMRI datasets, including motor, working memory, and relational processing tasks.
arXiv Detail & Related papers (2023-05-18T18:43:13Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
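A simple diagnostic one might use to detect such emergent structure (illustrative, not the paper's analysis): measure how localized each hidden unit's first-layer weights are over a 1-D input, since convolution-like units concentrate their weight energy around a peak:

```python
import numpy as np

def locality_scores(W, window=5):
    """For each hidden unit (a row of W, over a 1-D input), the fraction of
    weight energy within `window` positions of its peak; values near 1
    indicate localized, convolution-like receptive fields."""
    scores = []
    for w in W:
        e = w ** 2
        c = int(np.argmax(e))
        lo, hi = max(0, c - window), min(len(e), c + window + 1)
        scores.append(e[lo:hi].sum() / e.sum())
    return np.array(scores)

# Toy usage: random weights score low; a hand-made local filter scores high.
rng = np.random.default_rng(0)
W = rng.standard_normal((32, 100)) * 0.1
W[0] = 0.0
W[0, 40:46] = 1.0  # a localized "receptive field"
print(locality_scores(W)[0], locality_scores(W)[1:].mean())
```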
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
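A minimal sketch of one way such a dual objective could look (hypothetical shapes and random stand-in targets; the paper's actual training strategy may differ): penalize the RNN for mismatching both its output and the activity of a small "recorded" subset of hidden units:

```python
import torch
import torch.nn as nn

class RNNModel(nn.Module):
    def __init__(self, n_in, n_hidden, n_out):
        super().__init__()
        self.rnn = nn.RNN(n_in, n_hidden, batch_first=True)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, x):
        h, _ = self.rnn(x)          # hidden activity, (batch, time, n_hidden)
        return self.readout(h), h

model = RNNModel(n_in=3, n_hidden=64, n_out=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
recorded = torch.randperm(64)[:8]   # indices of the few "recorded" units

# x: inputs; y: target outputs; z: target activity of the recorded subset,
# e.g. traces from a physiologically inspired reference model (random here).
x = torch.randn(16, 100, 3)
y = torch.randn(16, 100, 2)
z = torch.randn(16, 100, 8)

for step in range(200):
    out, h = model(x)
    loss = nn.functional.mse_loss(out, y) \
         + nn.functional.mse_loss(h[..., recorded], z)  # match internal dynamics too
    opt.zero_grad()
    loss.backward()
    opt.step()
```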
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.