Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptive learning in neural networks
- URL: http://arxiv.org/abs/2502.08644v4
- Date: Thu, 06 Mar 2025 03:09:16 GMT
- Title: Rhythmic sharing: A bio-inspired paradigm for zero-shot adaptive learning in neural networks
- Authors: Hoony Kang, Wolfgang Losert
- Abstract summary: The brain rapidly adapts to new contexts and learns from limited data, a coveted characteristic that artificial intelligence (AI) algorithms struggle to mimic. We developed a learning paradigm utilizing link strength oscillations, where learning is associated with the coordination of these oscillations. Link oscillations can rapidly change coordination, allowing the network to sense and adapt to subtle contextual changes without supervision.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The brain rapidly adapts to new contexts and learns from limited data, a coveted characteristic that artificial intelligence (AI) algorithms struggle to mimic. Inspired by the mechanical oscillatory rhythms of neural cells, we developed a learning paradigm utilizing link strength oscillations, where learning is associated with the coordination of these oscillations. Link oscillations can rapidly change coordination, allowing the network to sense and adapt to subtle contextual changes without supervision. The network becomes a generalist AI architecture, capable of predicting dynamics of multiple contexts including unseen ones. These results make our paradigm a powerful starting point for novel models of cognition. Because our paradigm is agnostic to specifics of the neural network, our study opens doors for introducing rapid adaptive learning into leading AI models.
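Since the paradigm is described only at a high level here, the following is a minimal, illustrative sketch (not the authors' code): link strengths oscillate around a static backbone, and a hypothetical local rule nudges the oscillation phases of co-active links toward each other, so that phase coordination, rather than the backbone weights, carries the adaptive signal. All parameters and update rules are assumptions.

```python
# Illustrative sketch of "rhythmic sharing": link weights oscillate, and the
# coordination (relative phase) of those oscillations adapts to the input.
# NOT the authors' code; every rule and constant here is an assumption.
import numpy as np

rng = np.random.default_rng(0)
N = 100                                           # number of units
W_base = rng.normal(0, 1 / np.sqrt(N), (N, N))    # static backbone weights
phase = rng.uniform(0, 2 * np.pi, (N, N))         # one oscillation phase per link
omega, eps, eta = 0.1, 0.2, 0.05                  # frequency, amplitude, coupling
x = np.tanh(rng.normal(size=N))                   # network state

def step(x, phase, u, t):
    """One update: links oscillate around W_base; a hypothetical local rule
    pulls the phases of co-active links toward the mean phase."""
    W = W_base * (1 + eps * np.sin(omega * t + phase))  # oscillating weights
    x_new = np.tanh(W @ x + u)                          # driven network update
    coact = np.outer(x_new, x)                          # pre/post co-activity
    phase = phase + eta * coact * np.sin(phase.mean() - phase)
    return x_new, phase

for t in range(200):                              # drive with a slow "context" input
    u = 0.5 * np.sin(0.05 * t) * np.ones(N)
    x, phase = step(x, phase, u, t)

# Kuramoto-style order parameter: R near 1 means strongly coordinated links.
R = np.abs(np.exp(1j * phase).mean())
print(f"link-phase coherence R = {R:.3f}")
```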
Related papers
- Discovering Chunks in Neural Embeddings for Interpretability [53.80157905839065]
We propose leveraging the principle of chunking to interpret artificial neural population activities. We first demonstrate this concept in recurrent neural networks (RNNs) trained on artificial sequences with imposed regularities. We identify similar recurring embedding states corresponding to concepts in the input, with perturbations to these states activating or inhibiting the associated concepts.
arXiv Detail & Related papers (2025-02-03T20:30:46Z)
- Peer-to-Peer Learning Dynamics of Wide Neural Networks [10.179711440042123]
We provide an explicit characterization of the learning dynamics of wide neural networks trained using distributed gradient descent (DGD) algorithms.
Our results leverage both recent advances in neural tangent kernel (NTK) theory and extensive previous work on distributed learning and consensus tasks.
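For readers unfamiliar with DGD, here is a minimal sketch of the algorithm the analysis concerns: each agent averages parameters with its neighbors via a mixing matrix, then takes a local gradient step. The ring topology, mixing weights, and least-squares model below are illustrative assumptions.

```python
# Sketch of distributed gradient descent (DGD): consensus mixing followed by
# a local gradient step per agent. Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
n_agents, d = 4, 10
# Doubly stochastic mixing matrix for a ring of 4 agents (self + 2 neighbors).
P = np.array([[0.5, 0.25, 0.0, 0.25],
              [0.25, 0.5, 0.25, 0.0],
              [0.0, 0.25, 0.5, 0.25],
              [0.25, 0.0, 0.25, 0.5]])
w_true = rng.normal(size=d)
X = [rng.normal(size=(20, d)) for _ in range(n_agents)]  # local data per agent
y = [Xi @ w_true for Xi in X]
W = rng.normal(size=(n_agents, d))        # one parameter vector per agent
lr = 0.05

for _ in range(300):
    W = P @ W                             # consensus: average with neighbors
    for i in range(n_agents):             # local least-squares gradient step
        grad = X[i].T @ (X[i] @ W[i] - y[i]) / len(y[i])
        W[i] -= lr * grad

print("max disagreement across agents:", np.abs(W - W.mean(0)).max())
print("distance to w_true:", np.linalg.norm(W.mean(0) - w_true))
```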
arXiv Detail & Related papers (2024-09-23T17:57:58Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable to deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
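A minimal sketch of two of the ingredients named above, surrogate gradients and spike-frequency adaptation, assuming a standard fast-sigmoid surrogate and an adaptive-threshold neuron (not the paper's exact architecture):

```python
# Surrogate-gradient spiking neuron with spike-frequency adaptation (SFA).
# Illustrative PyTorch; constants and reset rule are assumptions.
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike forward; fast-sigmoid surrogate gradient backward,
    which is what makes the spiking network trainable end to end."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1 + 10 * v.abs()) ** 2

def lif_sfa(inputs, beta=0.9, rho=0.95, b0=1.0, gamma=0.5):
    """Leaky integrate-and-fire with an adaptive threshold b: each spike
    raises the threshold, suppressing the firing rate (SFA)."""
    T, n = inputs.shape
    v = torch.zeros(n)
    b = torch.full((n,), b0)
    spikes = []
    for t in range(T):
        v = beta * v + inputs[t]                 # leaky integration
        s = SpikeFn.apply(v - b)                 # spike vs. adaptive threshold
        v = v - s * b                            # soft reset on spike
        b = rho * b + (1 - rho) * b0 + gamma * s # each spike raises threshold
        spikes.append(s)
    return torch.stack(spikes)

out = lif_sfa(torch.rand(100, 8, requires_grad=True))
out.sum().backward()                             # gradients flow via the surrogate
```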
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Dynamical stability and chaos in artificial neural network trajectories along training [3.379574469735166]
We study the dynamical properties of the training process by analyzing, through the lens of dynamical systems theory, the weight trajectories of a shallow neural network.
We find hints of regular and chaotic behavior depending on the learning rate regime.
This work also contributes to the cross-fertilization of ideas between dynamical systems theory, network theory and machine learning.
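A rough sketch of how such a divergence analysis can be set up: follow two nearly identical weight initializations through gradient descent and estimate a finite-time Lyapunov-style exponent. The paper's actual methodology is more careful; the task, network, and learning rates below are assumptions.

```python
# Estimate a Lyapunov-like exponent of the gradient-descent trajectory by
# tracking the divergence of two nearby weight vectors. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 2))
y = np.sign(X[:, 0] * X[:, 1])                 # toy XOR-like regression targets

def grad_step(w, lr):
    """One full-batch GD step on a tanh shallow net, weights as a flat vector."""
    Win, Wout = w[:32].reshape(2, 16), w[32:]
    h = np.tanh(X @ Win)
    err = h @ Wout - y
    g_out = h.T @ err / len(y)
    g_in = X.T @ ((err[:, None] * Wout[None, :]) * (1 - h ** 2)) / len(y)
    return w - lr * np.concatenate([g_in.ravel(), g_out])

eps0 = 1e-6
for lr in (0.05, 0.5):                         # two learning-rate regimes
    w1 = rng.normal(size=48)
    d = rng.normal(size=48)
    w2 = w1 + eps0 * d / np.linalg.norm(d)     # tiny initial perturbation
    log_sum = 0.0
    for _ in range(300):
        w1, w2 = grad_step(w1, lr), grad_step(w2, lr)
        dist = np.linalg.norm(w2 - w1)
        log_sum += np.log(dist / eps0)
        w2 = w1 + (w2 - w1) * (eps0 / dist)    # Benettin renormalization
    # Positive values would indicate chaotic training dynamics; small
    # learning rates typically give a contracting (negative) exponent.
    print(f"lr={lr}: Lyapunov-like exponent ~ {log_sum / 300:+.4f}")
```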
arXiv Detail & Related papers (2024-04-08T17:33:11Z)
- Reasoning Algorithmically in Graph Neural Networks [1.8130068086063336]
We aim to integrate the structured and rule-based reasoning of algorithms with the adaptive learning capabilities of neural networks.
This dissertation provides theoretical and practical contributions to this area of research.
arXiv Detail & Related papers (2024-02-21T12:16:51Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Meta Neural Coordination [0.0]
Meta-learning aims to develop algorithms that can learn from other learning algorithms to adapt to new and changing environments.
Uncertainty in the predictions of conventional deep neural networks highlights the partial predictability of the world.
We discuss the potential advancements required to build biologically-inspired machine intelligence.
arXiv Detail & Related papers (2023-05-20T06:06:44Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Abrupt and spontaneous strategy switches emerge in simple regularised neural networks [8.737068885923348]
We study whether insight-like behaviour can occur in simple artificial neural networks.
Analyses of network architectures and learning dynamics revealed that insight-like behaviour crucially depended on a regularised gating mechanism.
This suggests that insight-like behaviour can arise naturally from gradual learning in simple neural networks.
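A minimal sketch of the regularised gating motif under illustrative assumptions (not the paper's model): gates multiply input channels, an L1 penalty suppresses unused gates, and the gate on a useful channel can stay near zero for a long time before growing rapidly, an abrupt switch emerging from gradual learning.

```python
# Regularised gating sketch: prediction is w . (g * x), with an L1 penalty
# on the gates g. Channel 0 is a noisy cue, channel 1 a perfect cue that the
# network initially ignores. Illustrative reconstruction only.
import numpy as np

rng = np.random.default_rng(3)
lr, lam = 0.03, 0.01
w = np.array([0.3, 0.05])                 # readout weights
g = np.array([0.5, 0.05])                 # gates, one per input channel

for t in range(3001):
    y = rng.choice([-1.0, 1.0])
    x = np.array([y if rng.random() < 0.8 else -y,  # noisy cue (80% valid)
                  y])                                # perfect cue
    e = np.dot(w * g, x) - y                         # prediction error
    w -= lr * e * g * x                              # gradient on weights
    g -= lr * (e * w * x + lam * np.sign(g))         # gradient + L1 on gates
    if t % 500 == 0:
        print(f"t={t:4d}  gates={np.round(g, 3)}")
# In a typical run the perfect cue's gate lingers near zero, then w and g
# bootstrap each other past the L1 threshold and the gate opens quickly.
```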
arXiv Detail & Related papers (2023-02-22T12:48:45Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
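A small sketch of the data-model idea under illustrative assumptions: two input ensembles with matched covariance but different higher-order statistics, distinguishable by excess kurtosis. Only the localized, non-Gaussian ensemble carries the structure said to trigger convolutional receptive fields.

```python
# Generate inputs with identical covariance but different higher-order local
# structure: localized "bumps" vs. a Gaussian control. Illustrative only.
import numpy as np

rng = np.random.default_rng(6)
D, n = 32, 5000                                   # input dimension, sample count

def localized_inputs():
    """Sparse bumps at random positions: non-Gaussian, higher-order structure."""
    centers = rng.integers(0, D, size=n)
    idx = np.arange(D)
    bumps = np.exp(-0.5 * ((idx[None, :] - centers[:, None]) / 2.0) ** 2)
    return bumps * rng.choice([-1, 1], size=(n, 1))

X_loc = localized_inputs()
C = X_loc.T @ X_loc / n                           # empirical covariance
L = np.linalg.cholesky(C + 1e-6 * np.eye(D))
X_gauss = rng.normal(size=(n, D)) @ L.T           # Gaussian control, same covariance

# Excess kurtosis separates the two ensembles despite matched covariance.
kurt = lambda X: (((X - X.mean(0)) / X.std(0)) ** 4).mean() - 3
print(f"kurtosis localized: {kurt(X_loc):+.2f}, gaussian: {kurt(X_gauss):+.2f}")
```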
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Backprop-Free Reinforcement Learning with Active Neural Generative Coding [84.11376568625353]
We propose a computational framework for learning action-driven generative models without backpropagation of errors (backprop) in dynamic environments.
We develop an intelligent agent that operates even with sparse rewards, drawing inspiration from the cognitive theory of planning as inference.
The robust performance of our agent offers promising evidence that a backprop-free approach for neural inference and learning can drive goal-directed behavior.
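A minimal sketch of the backprop-free ingredient that active neural generative coding builds on: a predictive-coding-style update for a linear generative model, driven entirely by local prediction errors. The agentive parts (actions, rewards, planning) are omitted, and all details below are assumptions.

```python
# Predictive-coding-style learning without backprop: the model predicts the
# observation from a latent state, infers the latent by iterating on the
# local error, then updates weights with a Hebbian-style outer product.
import numpy as np

rng = np.random.default_rng(4)
d_obs, d_lat, lr = 8, 4, 0.02
W_true = rng.normal(size=(d_obs, d_lat))      # hidden data-generating weights
W = rng.normal(0, 0.1, (d_obs, d_lat))        # model's generative weights

def settle_and_learn(o, W, n_steps=30, dt=0.1):
    z = np.zeros(d_lat)                       # latent state, inferred iteratively
    for _ in range(n_steps):
        e = o - W @ z                         # local prediction error
        z += dt * (W.T @ e - z)               # error-driven inference, no backprop
    W = W + lr * np.outer(e, z)               # local Hebbian-style weight update
    return W, float(e @ e)

for step in range(500):
    o = W_true @ rng.normal(size=d_lat)       # sample an observation
    W, err = settle_and_learn(o, W)
    if step % 100 == 0:
        print(f"step {step:3d}: squared prediction error {err:.2f}")
```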
arXiv Detail & Related papers (2021-07-10T19:02:27Z)
- A brain basis of dynamical intelligence for AI and computational neuroscience [0.0]
More brain-like capacities may demand new theories, models, and methods for designing artificial learning systems.
This article was inspired by our symposium on dynamical neuroscience and machine learning at the 6th Annual US/NIH BRAIN Initiative Investigators Meeting.
arXiv Detail & Related papers (2021-05-15T19:49:32Z)
- Learning Contact Dynamics using Physically Structured Neural Networks [81.73947303886753]
We use connections between deep neural networks and differential equations to design a family of deep network architectures for representing contact dynamics between objects.
We show that these networks can learn discontinuous contact events in a data-efficient manner from noisy observations.
Our results indicate that an idealised form of touch feedback is a key component of making this learning problem tractable.
arXiv Detail & Related papers (2021-02-22T17:33:51Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
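In its simplest reading, ANV can be implemented as transient weight noise during training. The sketch below shows one such perturbed-gradient variant; it is illustrative, not the paper's exact scheme.

```python
# Perturbed-gradient sketch of artificial neural variability: add small noise
# to the weights, compute gradients at the noisy point, then step the clean
# weights. Acts as an implicit regularizer. Illustrative only.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1)
X, y = torch.randn(256, 20), torch.randint(0, 2, (256,))
sigma = 0.01                                   # variability scale (assumed)

for epoch in range(50):
    noise = []
    with torch.no_grad():                      # inject transient weight noise
        for p in model.parameters():
            n = sigma * torch.randn_like(p)
            p.add_(n)
            noise.append(n)
    loss = nn.functional.cross_entropy(model(X), y)
    opt.zero_grad()
    loss.backward()                            # gradients at the noisy weights
    with torch.no_grad():                      # remove the noise before stepping
        for p, n in zip(model.parameters(), noise):
            p.sub_(n)
    opt.step()
print(f"final loss: {loss.item():.3f}")
```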
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Deep learning of contagion dynamics on complex networks [0.0]
We propose a complementary approach based on deep learning to build effective models of contagion dynamics on networks.
By allowing simulations on arbitrary network structures, our approach makes it possible to explore the properties of the learned dynamics beyond the training data.
Our results demonstrate how deep learning offers a new and complementary perspective to build effective models of contagion dynamics on networks.
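A toy sketch of the underlying idea, with a logistic model standing in for the paper's graph neural network: simulate SIS contagion on a random graph, then fit a node's local transition rule from the recorded transitions. All parameters are illustrative.

```python
# Learn a contagion rule from simulated data: SIS dynamics on a random graph,
# then a logistic model of next-state probability from (own state, infected
# neighbours). Toy stand-in for the paper's GNN approach.
import numpy as np

rng = np.random.default_rng(5)
N, p_edge, beta, gamma = 200, 0.05, 0.08, 0.1
A = (rng.random((N, N)) < p_edge).astype(float)
A = np.triu(A, 1)
A = A + A.T                                    # symmetric random graph

s = (rng.random(N) < 0.1).astype(float)        # initial infected nodes
feats, labels = [], []
for _ in range(200):                           # simulate and record transitions
    k_inf = A @ s                              # infected neighbours per node
    p_next = np.where(s == 1, 1 - gamma, 1 - (1 - beta) ** k_inf)
    s_next = (rng.random(N) < p_next).astype(float)
    feats.append(np.stack([s, k_inf], 1))
    labels.append(s_next)
    s = s_next

X = np.concatenate(feats)
y = np.concatenate(labels)
Xs = (X - X.mean(0)) / (X.std(0) + 1e-9)       # standardize features
w, b = np.zeros(2), 0.0                        # logistic model of the local rule
for _ in range(3000):                          # plain gradient descent
    z = 1 / (1 + np.exp(-(Xs @ w + b)))
    g = z - y
    w -= 0.5 * Xs.T @ g / len(y)
    b -= 0.5 * g.mean()

acc = ((Xs @ w + b > 0) == (y == 1)).mean()
print(f"one-step prediction accuracy: {acc:.2f}")  # capped by intrinsic noise
```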
arXiv Detail & Related papers (2020-06-09T17:18:34Z)