Flexible Phase Dynamics for Bio-Plausible Contrastive Learning
- URL: http://arxiv.org/abs/2302.12431v2
- Date: Wed, 30 Aug 2023 23:13:13 GMT
- Title: Flexible Phase Dynamics for Bio-Plausible Contrastive Learning
- Authors: Ezekiel Williams, Colin Bredenberg, Guillaume Lajoie
- Abstract summary: Contrastive Learning (CL) algorithms are traditionally implemented with rigid, temporally non-local, and periodic learning dynamics.
This study builds on work exploring how CL might be implemented by biological or neuromorphic systems.
We show that this form of learning can be made temporally local, and can still function even if many of the dynamical requirements of standard training procedures are relaxed.
- Score: 11.179324762680297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many learning algorithms used as normative models in neuroscience or as
candidate approaches for learning on neuromorphic chips learn by contrasting
one set of network states with another. These Contrastive Learning (CL)
algorithms are traditionally implemented with rigid, temporally non-local, and
periodic learning dynamics that could limit the range of physical systems
capable of harnessing CL. In this study, we build on recent work exploring how
CL might be implemented by biological or neuromorphic systems and show that this
form of learning can be made temporally local, and can still function even if
many of the dynamical requirements of standard training procedures are relaxed.
Thanks to a set of general theorems corroborated by numerical experiments
across several CL models, our results provide theoretical foundations for the
study and development of CL methods for biological and neuromorphic neural
networks.
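As a rough intuition for the flexibility result, consider a toy Hebbian-style CL rule: the standard update contrasts statistics gathered in two separate phases, which is temporally non-local, whereas a randomized-phase variant applies a signed update immediately at each step and matches the two-phase contrast only in expectation. The sketch below is illustrative, not the paper's exact models; the settling dynamics, the 0.5 phase probability, and the `phase_stats` helper are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
W = 0.01 * rng.standard_normal((n, n))
x = rng.standard_normal(n)
lr = 1e-3

def phase_stats(W, x, clamped):
    # Stand-in for settling the network in a clamped ("positive") or
    # free ("negative") phase and reading out co-activation statistics.
    s = np.tanh(W @ x + (x if clamped else 0.0))
    return np.outer(s, s)

# Standard CL update: temporally non-local, because the positive-phase
# statistics must be stored until the negative phase has finished.
W += lr * (phase_stats(W, x, clamped=True) - phase_stats(W, x, clamped=False))

# Temporally local variant: sample a phase at random each step and apply
# a signed update immediately; averaged over phases this equals the
# two-phase contrast above (the factor 2 undoes the 0.5 phase probability).
for _ in range(100):
    clamped = rng.random() < 0.5
    sign = 1.0 if clamped else -1.0
    W += lr * 2.0 * sign * phase_stats(W, x, clamped)
```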
Related papers
- Dendritic Localized Learning: Toward Biologically Plausible Algorithm [41.362676232853765]
Backpropagation is challenged by its weight symmetry constraint, its reliance on global error signals, and the dual-phase nature of its training.
Inspired by the dynamics and plasticity of pyramidal neurons, we propose Dendritic Localized Learning (DLL), a novel learning algorithm designed to overcome these challenges.
Extensive empirical experiments demonstrate that DLL satisfies all three criteria of biological plausibility while achieving state-of-the-art performance.
arXiv Detail & Related papers (2025-01-17T06:35:20Z) - Continual Learning with Neuromorphic Computing: Foundations, Methods, and Emerging Applications [5.213243471774097]
Neuromorphic Continual Learning (NCL) appears as an emerging solution, leveraging the principles of Spiking Neural Networks (SNNs). This survey covers several hybrid approaches that combine supervised and unsupervised learning paradigms. It also covers optimization techniques including SNN operations reduction, weight quantization, and knowledge distillation.
arXiv Detail & Related papers (2024-10-11T19:49:53Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
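CSDP itself operates on spiking dynamics, but the forward-forward idea it builds on can be sketched in a few rate-based lines: each layer adjusts its own weights to raise a local "goodness" score on positive data and lower it on negative data, with no error signal propagated from other layers. Everything below (the ReLU layer, the squared-activity goodness, the sample data) is an illustrative assumption, not the paper's spiking rule.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((8, 16))   # one layer, trained in isolation
lr = 1e-3
x_pos = rng.standard_normal(16)          # "positive" (real) sample
x_neg = rng.standard_normal(16)          # "negative" (contrast) sample

for x, sign in ((x_pos, +1.0), (x_neg, -1.0)):
    h = np.maximum(W @ x, 0.0)           # layer activity (ReLU)
    # Local objective: goodness g = sum(h**2); its gradient w.r.t. W is
    # 2 * outer(h, x) on the active units -- no backprop through layers.
    W += sign * lr * 2.0 * np.outer(h, x)
```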
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - NeuralCRNs: A Natural Implementation of Learning in Chemical Reaction Networks [0.0]
We present a novel supervised learning framework constructed as a collection of deterministic chemical reaction networks (CRNs).
Unlike prior works, the NeuralCRNs framework is founded on dynamical system-based learning implementations and, thus, results in chemically compatible computations.
arXiv Detail & Related papers (2024-08-18T01:43:26Z) - Learning System Dynamics without Forgetting [60.08612207170659]
We investigate the problem of Continual Dynamics Learning (CDL), examining task configurations and evaluating the applicability of existing techniques.
We propose the Mode-switching Graph ODE (MS-GODE) model, which integrates the strengths of LG-ODE and sub-network learning with a mode-switching module.
We construct a novel benchmark of biological dynamic systems for CDL, Bio-CDL, featuring diverse systems with disparate dynamics.
arXiv Detail & Related papers (2024-06-30T14:55:18Z) - CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration [7.26259898628108]
The present work aims at proving mathematically that a neural network inspired by biology can learn a classification task thanks to local transformations only.
We propose a spiking neural network named CHANI, whose neurons' activity is modeled by Hawkes processes.
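For readers unfamiliar with Hawkes processes: each spike transiently raises the firing intensity, which then decays back toward a baseline. Below is a minimal univariate simulation via Ogata-style thinning; all parameters are illustrative, and CHANI itself uses a multivariate, network-structured version.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_hawkes(mu=0.5, alpha=0.8, beta=1.5, t_max=20.0):
    """Ogata thinning for a univariate Hawkes process with exponential
    kernel: lambda(t) = mu + sum_i alpha * exp(-beta * (t - t_i))."""
    events, t = [], 0.0
    while t < t_max:
        # Intensity is non-increasing between events, so the current
        # intensity upper-bounds lambda over the next inter-event interval.
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        t += rng.exponential(1.0 / lam_bar)
        lam_t = mu + sum(alpha * np.exp(-beta * (t - ti)) for ti in events)
        if t < t_max and rng.random() < lam_t / lam_bar:
            events.append(t)             # accept: a spike occurs at t
    return events

print(f"{len(simulate_hawkes())} spikes in 20 time units")
```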
arXiv Detail & Related papers (2024-05-29T07:17:58Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Path sampling of recurrent neural networks by incorporating known physics [0.0]
We show a path sampling approach that allows us to include generic thermodynamic or kinetic constraints into recurrent neural networks.
We demonstrate the method for a widely used type of recurrent neural network, the long short-term memory (LSTM) network.
Our method can be easily generalized to other generative artificial intelligence models and to generic time series in different areas of physical and social sciences.
arXiv Detail & Related papers (2022-03-01T16:35:50Z) - Deep Active Learning by Leveraging Training Dynamics [57.95155565319465]
We propose a theory-driven deep active learning method (dynamicAL) which selects samples to maximize training dynamics.
We show that dynamicAL not only outperforms other baselines consistently but also scales well on large deep learning models.
arXiv Detail & Related papers (2021-10-16T16:51:05Z) - Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z) - Credit Assignment in Neural Networks through Deep Feedback Control [59.14935871979047]
Deep Feedback Control (DFC) is a new learning method that uses a feedback controller to drive a deep neural network to match a desired output target and whose control signal can be used for credit assignment.
The resulting learning rule is fully local in space and time and approximates Gauss-Newton optimization for a wide range of connectivity patterns.
To further underline its biological plausibility, we relate DFC to a multi-compartment model of cortical pyramidal neurons with a local voltage-dependent synaptic plasticity rule, consistent with recent theories of dendritic processing.
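As a caricature of control-based credit assignment: a feedback signal continuously nudges the output toward its target, and that same control signal, rather than a backpropagated gradient, drives the weight update. The single-layer toy below is an assumption-laden sketch; DFC itself controls a deep, multi-layer dynamical network, and its Gauss-Newton analysis does not reduce to this.

```python
import numpy as np

rng = np.random.default_rng(3)
W = 0.1 * rng.standard_normal((4, 6))
x = rng.standard_normal(6)
target = rng.standard_normal(4)
k, lr = 0.5, 1e-2                      # controller gain, learning rate

v = W @ x
for _ in range(500):
    u = k * (target - v)               # feedback controller output
    v = W @ x + u                      # activity nudged by the controller
    W += lr * np.outer(u, x)           # local plasticity driven by u, not by backprop

print(np.linalg.norm(W @ x - target))  # shrinks toward 0 as u vanishes
```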
arXiv Detail & Related papers (2021-06-15T05:30:17Z) - Identifying nonclassicality from experimental data using artificial neural networks [52.77024349608834]
We train an artificial neural network to classify classical and nonclassical states from their quadrature-measurement distributions.
We show that the network is able to correctly identify classical and nonclassical features from real experimental quadrature data for different states of light.
arXiv Detail & Related papers (2021-01-18T15:12:47Z) - Identifying Learning Rules From Neural Network Observables [26.96375335939315]
We show that different classes of learning rules can be separated solely on the basis of aggregate statistics of the weights, activations, or instantaneous layer-wise activity changes.
Our results suggest that activation patterns, available from electrophysiological recordings of post-synaptic activities, may provide a good basis on which to identify learning rules.
arXiv Detail & Related papers (2020-10-22T14:36:54Z) - Watch and learn -- a generalized approach for transferrable learning in deep neural networks via physical principles [0.0]
We demonstrate an unsupervised learning approach that achieves fully transferrable learning for problems in statistical physics across different physical regimes.
By coupling a sequence model based on a recurrent neural network to an extensive deep neural network, we are able to learn the equilibrium probability distributions and inter-particle interaction models of classical statistical mechanical systems.
arXiv Detail & Related papers (2020-03-03T18:37:23Z)