Neural CRNs: A Natural Implementation of Learning in Chemical Reaction Networks
- URL: http://arxiv.org/abs/2409.00034v4
- Date: Mon, 08 Sep 2025 17:30:24 GMT
- Title: Neural CRNs: A Natural Implementation of Learning in Chemical Reaction Networks
- Authors: Rajiv Teja Nagipogu, John H. Reif
- Abstract summary: Molecular circuits capable of autonomous learning could unlock novel applications in fields such as bioengineering and synthetic biology. We propose an alternative dynamical systems-based approach in which neural computations are modeled as the time evolution of molecular concentrations. Our work presents a viable pathway toward embedding learning behaviors in synthetic biochemical systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Molecular circuits capable of autonomous learning could unlock novel applications in fields such as bioengineering and synthetic biology. To this end, existing chemical implementations of neural computing have mainly relied on emulating discrete-layered neural architectures using steady-state computations of mass action kinetics. In contrast, we propose an alternative dynamical systems-based approach in which neural computations are modeled as the time evolution of molecular concentrations. The analog nature of our framework naturally aligns with chemical kinetics-based computation, leading to more compact circuits. We present the advantages of our framework through three key demonstrations. First, we assemble an end-to-end supervised learning pipeline using only two sequential phases, the minimum required number for supervised learning. Then, we show (through appropriate simplifications) that both linear and nonlinear modeling circuits can be implemented solely using unimolecular and bimolecular reactions, avoiding the complexities of higher-order chemistries. Finally, we demonstrate that first-order gradient approximations can be natively incorporated into the framework, enabling nonlinear models to scale linearly rather than combinatorially with input dimensionality. All the circuit constructions are validated through training and inference simulations across various regression and classification tasks. Our work presents a viable pathway toward embedding learning behaviors in synthetic biochemical systems.
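To make the dynamical-systems framing concrete, below is a minimal simulation sketch (our illustration, not a circuit from the paper; the weights, inputs, and single-output topology are assumed, and the dual-rail encoding needed for negative values is omitted). Catalytic reactions X_i -> X_i + Y with rate constants w_i, plus a first-order decay Y -> 0, yield the ODE dy/dt = sum_i w_i x_i - y, whose time evolution converges to the weighted sum w . x:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical analog "neuron" computed by mass-action kinetics: each
# catalytic reaction X_i -> X_i + Y produces Y at rate w_i * x_i, while
# Y -> 0 degrades it at unit rate, so dy/dt = w.x - y and the output is
# read off as the equilibrium concentration of Y.
w = np.array([0.8, 1.5, 0.3])   # weights encoded as rate constants (assumed values)
x = np.array([1.0, 0.5, 2.0])   # input species clamped at fixed concentrations

def dydt(t, y):
    return [np.dot(w, x) - y[0]]   # catalytic production minus first-order decay

sol = solve_ivp(dydt, (0.0, 10.0), [0.0])
print(f"steady-state Y ~= {sol.y[0, -1]:.4f}")   # converges to w.x = 2.15
```

The same reading applies to the paper's larger circuits: the computation is the trajectory itself, not a sequence of discrete layer evaluations.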
Related papers
- A Solvable Molecular Switch Model for Stable Temporal Information Processing [0.0]
The linear-in-the-state and nonlinear-in-the-input model is exactly solvable, and it also possesses mathematical properties of convergence and fading memory. The results give theoretical support for the use of dynamic molecular switches as computational units in deep cascaded/layered feedforward and recurrent architectures. They could also inspire more general exactly solvable models that can be fitted to emulate arbitrary physical devices which can mimic brain-inspired behaviour and perform stable computation on input signals.
arXiv Detail & Related papers (2025-08-21T11:13:56Z) - Principled Approaches for Extending Neural Architectures to Function Spaces for Operator Learning [78.88684753303794]
Deep learning has predominantly advanced through applications in computer vision and natural language processing. Neural operators are a principled way to generalize neural networks to mappings between function spaces. This paper identifies and distills the key principles for constructing practical implementations of mappings between infinite-dimensional function spaces.
arXiv Detail & Related papers (2025-06-12T17:59:31Z) - Learning Linear Attention in Polynomial Time [127.14106124669645]
We provide the first results on the learnability of single-layer Transformers with linear attention. We show that linear attention may be viewed as a linear predictor in a suitably defined RKHS. We show how to efficiently identify training datasets for which every empirical risk minimizer is equivalent to the linear Transformer.
arXiv Detail & Related papers (2024-10-14T02:41:01Z) - Spatial embedding promotes a specific form of modularity with low entropy and heterogeneous spectral dynamics [0.0]
Spatially embedded recurrent neural networks provide a promising avenue to study how modelled constraints shape the combined structural and functional organisation of networks over learning.
We show that it is possible to study these restrictions through entropic measures of the neural weights and eigenspectrum, across both rate and spiking neural networks.
This work deepens our understanding of constrained learning in neural networks, across coding schemes and tasks, where solutions to simultaneous structural and functional objectives must be accomplished in tandem.
arXiv Detail & Related papers (2024-09-26T10:00:05Z) - Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z) - CHANI: Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration [7.26259898628108]
The present work aims at proving mathematically that a neural network inspired by biology can learn a classification task thanks to local transformations only.
We propose a spiking neural network named CHANI, whose neuron activity is modeled by Hawkes processes.
arXiv Detail & Related papers (2024-05-29T07:17:58Z) - Single Neuromorphic Memristor closely Emulates Multiple Synaptic Mechanisms for Energy Efficient Neural Networks [71.79257685917058]
We demonstrate memristive nano-devices based on SrTiO3 that inherently emulate all these synaptic functions.
These memristors operate in a non-filamentary, low conductance regime, which enables stable and energy efficient operation.
arXiv Detail & Related papers (2024-02-26T15:01:54Z) - A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Automatic Implementation of Neural Networks through Reaction Networks -- Part I: Circuit Design and Convergence Analysis [9.107489906506798]
This two-part article aims to introduce a programmable biochemical reaction network (BCRN) system endowed with mass action kinetics.
In part I, the feedforward propagation computation, the backpropagation component, and all bridging processes of a fully connected neural network (FCNN) are designed as specific BCRN modules.
arXiv Detail & Related papers (2023-11-30T07:31:36Z) - Learning with Chemical versus Electrical Synapses -- Does it Make a Difference? [61.85704286298537]
Bio-inspired neural networks have the potential to advance our understanding of neural computation and improve the state-of-the-art of AI systems.
We conduct autonomous lane-keeping experiments in a photorealistic driving simulator to evaluate the performance of these networks under diverse conditions.
arXiv Detail & Related papers (2023-11-21T13:07:20Z) - On the Trade-off Between Efficiency and Precision of Neural Abstraction [62.046646433536104]
Neural abstractions have been recently introduced as formal approximations of complex, nonlinear dynamical models.
We employ formal inductive synthesis procedures to generate neural abstractions that result in dynamical models with these semantics.
arXiv Detail & Related papers (2023-07-28T13:22:32Z) - How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of LKCNN models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Differentiable Programming of Chemical Reaction Networks [63.948465205530916]
Chemical reaction networks are one of the most fundamental computational substrates used by nature.
We study well-mixed single-chamber systems, as well as systems with multiple chambers separated by membranes.
We demonstrate that differentiable optimisation, combined with proper regularisation, can discover non-trivial sparse reaction networks.
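As a toy version of that optimisation loop (our sketch, not the paper's setup: a single reaction A -> B, explicit Euler integration, and a finite-difference gradient standing in for automatic differentiation), the rate constant k can be treated as a learnable parameter and fitted by gradient descent so that the simulated product concentration hits a target:

```python
import numpy as np

# With mass action, A -> B gives b(T) = a0 * (1 - exp(-k*T)). We recover k
# from an observed b(T) by descending the squared error through the simulator.
a0, T, target = 1.0, 2.0, 0.7

def simulate(k, steps=200):
    dt, a, b = T / steps, a0, 0.0
    for _ in range(steps):       # explicit Euler on da/dt = -k*a, db/dt = +k*a
        flux = k * a * dt
        a, b = a - flux, b + flux
    return b

def loss(k):
    return (simulate(k) - target) ** 2

k, lr, eps = 0.1, 0.5, 1e-5
for _ in range(300):
    grad = (loss(k + eps) - loss(k - eps)) / (2 * eps)   # finite-difference gradient
    k -= lr * grad

# The two values agree up to Euler discretization error.
print(f"learned k = {k:.4f}, analytic k = {-np.log(1 - target / a0) / T:.4f}")
```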
arXiv Detail & Related papers (2023-02-06T11:41:14Z) - Scalable Nanophotonic-Electronic Spiking Neural Networks [3.9918594409417576]
Spiking neural networks (SNN) provide a new computational paradigm capable of highly parallelized, real-time processing.
Photonic devices are ideal for the design of high-bandwidth, parallel architectures matching the SNN computational paradigm.
Co-integrated CMOS and SiPh technologies are well-suited to the design of scalable SNN computing architectures.
arXiv Detail & Related papers (2022-08-28T06:10:06Z) - Programming molecular systems to emulate a learning spiking neuron [1.2707050104493216]
Hebbian theory seeks to explain how the neurons in the brain adapt to stimuli, to enable learning.
This paper explores how molecular systems can be designed to show such proto-intelligent behaviours.
We propose the first chemical reaction network that can exhibit autonomous Hebbian learning across arbitrarily many input channels.
arXiv Detail & Related papers (2022-05-09T09:21:40Z) - Pretraining Graph Neural Networks for few-shot Analog Circuit Modeling and Design [68.1682448368636]
We present a supervised pretraining approach to learn circuit representations that can be adapted to new unseen topologies or unseen prediction tasks.
To cope with the variable topological structure of different circuits, we describe each circuit as a graph and use graph neural networks (GNNs) to learn node embeddings.
We show that pretraining GNNs on prediction of output node voltages can encourage learning representations that can be adapted to new unseen topologies or prediction of new circuit level properties.
arXiv Detail & Related papers (2022-03-29T21:18:47Z) - Super-resolution in Molecular Dynamics Trajectory Reconstruction with Bi-Directional Neural Networks [0.0]
We explore different machine learning (ML) methodologies to increase the resolution of molecular dynamics trajectories on-demand within a post-processing step.
We have found that Bi-LSTMs are the best performing models; by utilizing the local time-symmetry of thermostated trajectories they can even learn long-range correlations and display high robustness to noisy dynamics across molecular complexity.
arXiv Detail & Related papers (2022-01-02T23:00:30Z) - Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on the novel idea of Spiking Neural Networks (SNNs) emulating the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z) - Programming and Training Rate-Independent Chemical Reaction Networks [9.001036626196258]
Natural biochemical systems are typically modeled by chemical reaction networks (CRNs).
CRNs can be used as a specification language for synthetic chemical computation.
We show a technique to program non-competitive CRNs (NC-CRNs) using well-founded deep learning methods.
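To illustrate what rate independence buys (a toy example of ours, not the paper's NC-CRN construction): the reaction X1 + X2 -> Y consumes one copy of each input per output molecule, so once it runs to completion the output equals min(x1, x2) for any choice of rate constant:

```python
from scipy.integrate import solve_ivp

# Rate-independent computation of min(x1, x2): the reaction stops when the
# scarcer input is exhausted, so the final Y is the same for every rate.
def run(rate, x1=1.3, x2=0.8, t_end=200.0):
    def rhs(t, s):
        flux = rate * s[0] * s[1]      # mass-action flux of X1 + X2 -> Y
        return [-flux, -flux, flux]
    sol = solve_ivp(rhs, (0.0, t_end), [x1, x2, 0.0], rtol=1e-9, atol=1e-12)
    return sol.y[2, -1]                # final concentration of Y

for k in (0.1, 1.0, 10.0):
    print(f"rate = {k:5.1f} -> Y = {run(k):.4f}")   # ~0.8 = min(1.3, 0.8) every time
```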
arXiv Detail & Related papers (2021-09-20T15:31:03Z) - Accelerated Simulations of Molecular Systems through Learning of their Effective Dynamics [4.276697874428501]
We present LED, a novel framework that learns the effective dynamics of molecular systems and accelerates simulation by up to three orders of magnitude.
We demonstrate the effectiveness of LED in the Müller-Brown potential, the Trp Cage protein, and the alanine dipeptide.
arXiv Detail & Related papers (2021-02-17T15:15:37Z) - Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Training End-to-End Analog Neural Networks with Equilibrium Propagation [64.0476282000118]
We introduce a principled method to train end-to-end analog neural networks by gradient descent.
We show mathematically that a class of analog neural networks (called nonlinear resistive networks) are energy-based models.
Our work can guide the development of a new generation of ultra-fast, compact and low-power neural networks supporting on-chip learning.
arXiv Detail & Related papers (2020-06-02T23:38:35Z) - Recurrent Neural Network Learning of Performance and Intrinsic
Population Dynamics from Sparse Neural Data [77.92736596690297]
We introduce a novel training strategy that allows learning not only the input-output behavior of an RNN but also its internal network dynamics.
We test the proposed method by training an RNN to simultaneously reproduce internal dynamics and output signals of a physiologically-inspired neural model.
Remarkably, we show that the reproduction of the internal dynamics is successful even when the training algorithm relies on the activities of a small subset of neurons.
arXiv Detail & Related papers (2020-05-05T14:16:54Z) - Deep Molecular Programming: A Natural Implementation of Binary-Weight ReLU Neural Networks [7.700240949386079]
We show how a BinaryConnect neural network trained in silico can be compiled to an equivalent chemical reaction network.
Our work sets the stage for rich knowledge transfer between neural network and molecular programming communities.
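A hedged sketch of the flavor of that compilation, using the dual-rail ReLU construction (species names, rates, and test values here are our own; see the paper for the actual BinaryConnect pipeline): a signed value x is carried as the difference of two concentrations, x = [Xp] - [Xm], and the reactions Xp -> Y and Xm + Y -> (waste) leave Y = max(0, x) once they run to completion, independent of the rate constants:

```python
from scipy.integrate import solve_ivp

# Dual-rail ReLU in a CRN: Xp -> Y copies the positive rail into the output,
# and Xm + Y -> waste lets the negative rail annihilate it, so the surviving
# Y equals max(0, [Xp] - [Xm]) regardless of k1 and k2.
def relu_crn(xp, xm, k1=1.0, k2=1.0, t_end=100.0):
    def rhs(t, s):
        f1 = k1 * s[0]          # Xp -> Y
        f2 = k2 * s[1] * s[2]   # Xm + Y -> waste
        return [-f1, -f2, f1 - f2]
    sol = solve_ivp(rhs, (0.0, t_end), [xp, xm, 0.0], rtol=1e-9, atol=1e-12)
    return sol.y[2, -1]

for xp, xm in [(1.5, 0.5), (0.5, 1.5)]:   # encodes x = +1.0 and x = -1.0
    print(f"x = {xp - xm:+.1f} -> ReLU ~= {relu_crn(xp, xm):.4f}")
```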
arXiv Detail & Related papers (2020-03-30T18:12:11Z) - Autonomous Discovery of Unknown Reaction Pathways from Data by Chemical Reaction Neural Network [0.0]
Chemical reactions occur in energy, environmental, biological, and many other natural systems.
Here, we present a neural network approach that autonomously discovers reaction pathways from the time-resolved species concentration data.
The proposed Chemical Reaction Neural Network (CRNN), by design, satisfies the fundamental physics laws, including the Law of Mass Action and the Arrhenius Law.
arXiv Detail & Related papers (2020-02-20T23:36:46Z)
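The two laws named in that abstract have simple closed forms, and a short sketch (with illustrative constants of our own; this is a paraphrase of the CRNN idea, not the authors' code) shows how they compose: mass action makes log(rate) a weighted sum of log-concentrations, and Arrhenius supplies the rate constant, which is exactly the shape of a neural-network layer:

```python
import numpy as np

R = 8.314  # gas constant, J/(mol*K)

def arrhenius(A, Ea, T):
    # Arrhenius Law: k = A * exp(-Ea / (R*T))
    return A * np.exp(-Ea / (R * T))

def mass_action_rate(k, conc, orders):
    # Law of Mass Action in the log-space form a CRNN node uses:
    # log(rate) = log(k) + orders . log(conc)
    return np.exp(np.log(k) + np.dot(orders, np.log(conc)))

k = arrhenius(A=1e7, Ea=5.0e4, T=500.0)                  # illustrative constants
rate = mass_action_rate(k, conc=np.array([0.2, 0.1]),
                        orders=np.array([1.0, 2.0]))     # assumed reaction orders
print(f"k = {k:.4e}, rate = {rate:.4e}")
```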
This list is automatically generated from the titles and abstracts of the papers on this site.