NeuralCRNs: A Natural Implementation of Learning in Chemical Reaction Networks
- URL: http://arxiv.org/abs/2409.00034v1
- Date: Sun, 18 Aug 2024 01:43:26 GMT
- Title: NeuralCRNs: A Natural Implementation of Learning in Chemical Reaction Networks
- Authors: Rajiv Teja Nagipogu, John H. Reif
- Abstract summary: We present a novel supervised learning framework constructed as a collection of deterministic chemical reaction networks (CRNs).
Unlike prior works, the NeuralCRNs framework is founded on dynamical-system-based learning implementations and thus yields chemically compatible computations.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The remarkable ability of single-celled organisms to sense and react to dynamic changes in their environment is a testament to the adaptive capabilities of their internal biochemical circuitry. One of the goals of synthetic biology is to develop biochemical analogues of such systems that autonomously monitor and control biochemical processes. Such systems could have impactful applications in molecular diagnostics, smart therapeutics, and in vivo nanomedicine. So far, attempts to create such systems have focused on functionally replicating the behavior of traditional feedforward networks in abstract and DNA-based synthetic chemistries. However, the inherent incompatibility between digital and chemical modes of computation introduces several nonidealities into these implementations, making them challenging to realize in practice. In this work, we present NeuralCRNs, a novel supervised learning framework constructed as a collection of deterministic chemical reaction networks (CRNs). Unlike prior works, the NeuralCRNs framework is founded on dynamical-system-based learning implementations and thus yields chemically compatible computations. We first show the construction and training of a supervised classifier for linear classification, then extend the framework to support nonlinear classification. We validate these constructions by training and evaluating them on several binary and multi-class classification datasets with complex class-separation boundaries. Finally, we discuss several practical considerations for the NeuralCRNs framework and compare the pros and cons of our methodology with existing work.
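To make the flavor of such chemically compatible computation concrete, here is a minimal sketch (an illustration, not the paper's actual construction; the species, rate constants, and scipy-based simulation are assumptions) of a deterministic mass-action CRN whose steady state computes the weighted sum at the core of a linear classifier:

```python
# Illustrative sketch, not the paper's construction: a mass-action
# CRN whose steady state computes y = w1*x1 + w2*x2.
# Reactions (Xi and Wi act catalytically, so they are not consumed):
#   Xi + Wi -> Xi + Wi + Y   (rate 1)   produces Y at rate xi*wi
#   Y       -> waste         (rate 1)   degrades Y at rate y
# Mass action gives dy/dt = w1*x1 + w2*x2 - y, so y* = w . x.
import numpy as np
from scipy.integrate import solve_ivp

def crn_ode(t, state, w, x):
    (y,) = state
    return [w @ x - y]          # production minus degradation

w = np.array([0.8, 0.3])        # weight "concentrations" (nonnegative;
x = np.array([1.0, 2.0])        #  signed values would need dual-rail)
sol = solve_ivp(crn_ode, (0.0, 20.0), [0.0], args=(w, x))
print(sol.y[0, -1])             # ~1.4, i.e. w @ x at steady state
```

The classifier's decision can then be read out by comparing the steady-state concentration of Y against a threshold species.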
Related papers
- Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics [0.0]
Spatially embedded recurrent neural networks provide a promising avenue to study how modelled constraints shape the combined structural and functional organisation of networks over learning.
We show that it is possible to study these restrictions through entropic measures of the neural weights and eigenspectrum, across both rate and spiking neural networks.
This work deepens our understanding of constrained learning in neural networks, across coding schemes and tasks, where solutions to simultaneous structural and functional objectives must be accomplished in tandem.
arXiv Detail & Related papers (2024-09-26T10:00:05Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration [7.26259898628108]
The present work aims at proving mathematically that a neural network inspired by biology can learn a classification task thanks to local transformations only.
We propose a spiking neural network named CHANI, whose neurons' activity is modeled by Hawkes processes.
arXiv Detail & Related papers (2024-05-29T07:17:58Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Automatic Implementation of Neural Networks through Reaction Networks -- Part I: Circuit Design and Convergence Analysis [9.107489906506798]
This two-part article aims to introduce a programmable biochemical reaction network (BCRN) system endowed with mass action kinetics.
In part I, the feedforward propagation computation, the backpropagation component, and all bridging processes of a fully connected neural network (FCNN) are designed as dedicated BCRN modules; a toy sketch of the shared gradient-flow idea appears after this entry.
arXiv Detail & Related papers (2023-11-30T07:31:36Z)
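The dynamical-system view of training that both this BCRN work and NeuralCRNs build on can be captured in a few lines. Below is a hedged, purely illustrative sketch for a single linear neuron (the squared loss and scipy simulation are assumptions, not the paper's modules): instead of discrete gradient steps, the weight follows a gradient-flow ODE, which a suitably designed CRN can emulate with mass-action kinetics.

```python
# Hedged sketch: learning as gradient flow. For a linear neuron
# y = w*x with squared loss L = 0.5*(w*x - y_t)**2, the continuous
# update dw/dt = -dL/dw = -(w*x - y_t)*x is an ODE that chemical
# kinetics can realize; w converges to the loss minimizer y_t / x.
from scipy.integrate import solve_ivp

x, y_t = 2.0, 1.0                    # one training pair (made up)

def grad_flow(t, w):
    return [-(w[0] * x - y_t) * x]   # negative gradient of the loss

sol = solve_ivp(grad_flow, (0.0, 5.0), [0.0])
print(sol.y[0, -1])                  # ~0.5, i.e. y_t / x
```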
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Programming and Training Rate-Independent Chemical Reaction Networks [9.001036626196258]
Natural biochemical systems are typically modeled by chemical reaction networks (CRNs).
CRNs can be used as a specification language for synthetic chemical computation.
We show a technique to program non-competitive CRNs (NC-CRNs) using well-founded deep learning methods; a toy sketch of rate-independent computation appears after this entry.
arXiv Detail & Related papers (2021-09-20T15:31:03Z)
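Rate independence means the final state is fixed by stoichiometry alone: no matter how fast or in what order the reactions fire, the output is the same. A toy sketch of the idea (the reaction choice and function below are illustrative assumptions, not the paper's NC-CRN construction):

```python
# Hedged sketch of rate-independent computation with two reactions:
#   X1     -> Y    all of X1 converts into output Y
#   X2 + Y -> W    X2 annihilates Y into inert waste W
# Run to completion, the final amount of Y is max(0, x1 - x2),
# i.e. a ReLU of the input difference, independent of rates.
def rate_independent_relu(x1: float, x2: float) -> float:
    y = x1              # reaction 1 runs to completion
    y -= min(x2, y)     # reaction 2 stops when X2 or Y is exhausted
    return y

assert rate_independent_relu(3.0, 1.0) == 2.0
assert rate_independent_relu(1.0, 3.0) == 0.0
```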
- Recurrent Neural Network Learning of Performance and Intrinsic Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z)
- Deep Molecular Programming: A Natural Implementation of Binary-Weight ReLU Neural Networks [7.700240949386079]
We show how a BinaryConnect neural network trained in silico can be compiled to an equivalent chemical reaction network (see the dual-rail sketch after this entry).
Our work sets the stage for rich knowledge transfer between neural network and molecular programming communities.
arXiv Detail & Related papers (2020-03-30T18:12:11Z)
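A standard trick in this line of work, which such compilations rely on, is the dual-rail encoding of signed values: a quantity v is carried by two nonnegative concentrations with v = y_plus - y_minus, and the annihilation reaction Y+ + Y- -> waste computes a ReLU essentially for free. A hedged toy sketch (the encoding is a common convention; the paper's exact compilation may differ):

```python
# Hedged sketch of dual-rail encoding: v = y_plus - y_minus, both
# rails nonnegative. Running annihilation Y+ + Y- -> waste to
# completion empties at least one rail, leaving the positive rail
# at max(0, v): a ReLU implemented by a single reaction.
def annihilate(y_plus: float, y_minus: float) -> tuple[float, float]:
    m = min(y_plus, y_minus)        # annihilation consumes pairs
    return y_plus - m, y_minus - m  # at most one rail stays nonzero

print(annihilate(2.5, 1.0))  # (1.5, 0.0): ReLU(1.5) on positive rail
print(annihilate(0.5, 2.0))  # (0.0, 1.5): negative part zeroes out
```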
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.