Neural-network solutions to stochastic reaction networks
- URL: http://arxiv.org/abs/2210.01169v1
- Date: Thu, 29 Sep 2022 07:27:59 GMT
- Title: Neural-network solutions to stochastic reaction networks
- Authors: Ying Tang, Jiayu Weng, Pan Zhang
- Abstract summary: We propose a machine-learning approach using the variational autoregressive network to solve the chemical master equation.
The proposed approach tracks the time evolution of the joint probability distribution in the state space of species counts.
We demonstrate that it accurately generates the probability distribution over time in the genetic toggle switch and the early life self-replicator.
- Score: 7.021105583098606
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The stochastic reaction network is widely used to model stochastic processes
in physics, chemistry and biology. However, the size of the state space
increases exponentially with the number of species, making it challenging to
investigate the time evolution of the chemical master equation for the reaction
network. Here, we propose a machine-learning approach using the variational
autoregressive network to solve the chemical master equation. The approach is
based on the reinforcement learning framework and does not require any data
simulated beforehand by another method. Unlike approaches that simulate single
trajectories, the proposed approach tracks the time evolution of the joint
probability distribution in the state space of species counts, and supports
direct sampling of configurations and computation of their normalized joint
probabilities. We apply the approach to various systems in physics and biology,
and demonstrate that it accurately generates the probability distribution over
time in the genetic toggle switch, the early life self-replicator, the epidemic
model and the intracellular signaling cascade. The variational autoregressive
network exhibits plasticity in representing multi-modal distributions induced by
feedback regulation, accommodates conservation laws, supports time-dependent
reaction rates, and remains efficient for high-dimensional reaction networks
while allowing a flexible upper count limit. The results suggest a
general approach to investigate stochastic reaction networks based on modern
machine learning.
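
The property emphasized above, direct sampling of configurations together with exact normalized joint probabilities, follows from the autoregressive factorization P(n_1, ..., n_D) = prod_i P(n_i | n_1, ..., n_{i-1}) over the species counts. The sketch below is a hypothetical PyTorch illustration of such a network under assumed choices (the class name VANSketch, MLP conditionals, and a hard count truncation max_count are our own); it is not the authors' implementation, and the variational time-stepping objective that matches the network at t + dt to the CME-propagated distribution of the previous step is omitted.

```python
# Minimal sketch of a variational autoregressive network over species counts.
# The joint distribution factorizes into per-species conditionals, which is
# what enables ancestral sampling and exact normalized joint probabilities.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VANSketch(nn.Module):
    def __init__(self, n_species: int, max_count: int, hidden: int = 64):
        super().__init__()
        self.D, self.M = n_species, max_count
        # Conditional i maps the counts of species 0..i-1 to categorical
        # logits over {0, ..., max_count} for species i.
        self.conditionals = nn.ModuleList(
            nn.Sequential(nn.Linear(max(i, 1), hidden), nn.ReLU(),
                          nn.Linear(hidden, max_count + 1))
            for i in range(n_species)
        )

    def _logits(self, i: int, prefix: torch.Tensor) -> torch.Tensor:
        # prefix: (batch, i) float counts of species 0..i-1.
        if i == 0:
            inp = torch.zeros(prefix.shape[0], 1)  # dummy input for the first species
        else:
            inp = prefix / self.M                  # rescale counts to [0, 1]
        return self.conditionals[i](inp)

    def log_prob(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, D) integer counts; returns exact, normalized log P(x).
        logp = torch.zeros(x.shape[0])
        for i in range(self.D):
            logits = self._logits(i, x[:, :i].float())
            logp = logp + F.log_softmax(logits, dim=-1).gather(1, x[:, i:i + 1]).squeeze(1)
        return logp

    @torch.no_grad()
    def sample(self, batch: int) -> torch.Tensor:
        # Ancestral sampling: draw species counts one at a time.
        x = torch.zeros(batch, 0, dtype=torch.long)
        for i in range(self.D):
            logits = self._logits(i, x.float())
            n_i = torch.distributions.Categorical(logits=logits).sample()
            x = torch.cat([x, n_i.unsqueeze(1)], dim=1)
        return x

# Example: a two-species system (e.g. a toggle switch) with counts capped at 50.
van = VANSketch(n_species=2, max_count=50)
samples = van.sample(1024)      # (1024, 2) integer configurations
logp = van.log_prob(samples)    # their normalized log-probabilities
```

In the paper's setting, such a network would be trained at each time step so that its distribution follows the chemical master equation; how that loss is constructed, and how conservation laws and time-dependent rates enter, is described in the paper itself.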
Related papers
- Learning Theory of Distribution Regression with Neural Networks [6.961253535504979]
We establish an approximation theory and a learning theory of distribution regression via a fully connected neural network (FNN).
In contrast to classical regression methods, the input variables of distribution regression are probability measures.
arXiv Detail & Related papers (2023-07-07T09:49:11Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Bayesian Inference for Jump-Diffusion Approximations of Biochemical Reaction Networks [26.744964200606784]
We develop a tractable Bayesian inference algorithm based on Markov chain Monte Carlo.
The algorithm is numerically evaluated for a partially observed multi-scale birth-death process example.
arXiv Detail & Related papers (2023-04-13T14:57:22Z)
- Differentiable Programming of Chemical Reaction Networks [63.948465205530916]
Chemical reaction networks are one of the most fundamental computational substrates used by nature.
We study well-mixed single-chamber systems, as well as systems with multiple chambers separated by membranes.
We demonstrate that differentiable optimisation, combined with proper regularisation, can discover non-trivial sparse reaction networks.
arXiv Detail & Related papers (2023-02-06T11:41:14Z)
- Path sampling of recurrent neural networks by incorporating known physics [0.0]
We show a path sampling approach that allows us to include generic thermodynamic or kinetic constraints into recurrent neural networks.
We show the method here for a widely used type of recurrent neural network known as long short-term memory network.
Our method can be easily generalized to other generative artificial intelligence models and to generic time series in different areas of physical and social sciences.
arXiv Detail & Related papers (2022-03-01T16:35:50Z)
- Influence Estimation and Maximization via Neural Mean-Field Dynamics [60.91291234832546]
We propose a novel learning framework using neural mean-field (NMF) dynamics for inference and estimation problems.
Our framework can simultaneously learn the structure of the diffusion network and the evolution of node infection probabilities.
arXiv Detail & Related papers (2021-06-03T00:02:05Z)
- Deep learning approaches to surrogates for solving the diffusion equation for mechanistic real-world simulations [0.0]
In medical, biological, physical and engineered models the numerical solution of partial differential equations (PDEs) can make simulations impractically slow.
Machine learning surrogates, neural networks trained to provide approximate solutions to such complicated numerical problems, can often provide speed-ups of several orders of magnitude compared to direct calculation.
We use a Convolutional Neural Network to approximate the stationary solution to the diffusion equation in the case of two equal-diameter, circular, constant-value sources.
arXiv Detail & Related papers (2021-02-10T16:15:17Z)
- Combining Differentiable PDE Solvers and Graph Neural Networks for Fluid Flow Prediction [79.81193813215872]
We develop a hybrid (graph) neural network that combines a traditional graph convolutional network with an embedded differentiable fluid dynamics simulator inside the network itself.
We show that we can both generalize well to new situations and benefit from the substantial speedup of neural network CFD predictions.
arXiv Detail & Related papers (2020-07-08T21:23:19Z)
- Network Diffusions via Neural Mean-Field Dynamics [52.091487866968286]
We propose a novel learning framework for inference and estimation problems of diffusion on networks.
Our framework is derived from the Mori-Zwanzig formalism to obtain an exact evolution of the node infection probabilities.
Our approach is versatile and robust to variations of the underlying diffusion network models.
arXiv Detail & Related papers (2020-06-16T18:45:20Z)
- Watch and learn -- a generalized approach for transferrable learning in deep neural networks via physical principles [0.0]
We demonstrate an unsupervised learning approach that achieves fully transferrable learning for problems in statistical physics across different physical regimes.
By coupling a sequence model based on a recurrent neural network to an extensive deep neural network, we are able to learn the equilibrium probability distributions and inter-particle interaction models of classical statistical mechanical systems.
arXiv Detail & Related papers (2020-03-03T18:37:23Z)
- Retrosynthesis Prediction with Conditional Graph Logic Network [118.70437805407728]
Computer-aided retrosynthesis is finding renewed interest from both chemistry and computer science communities.
We propose a new approach to this task using the Conditional Graph Logic Network, a conditional graphical model built upon graph neural networks.
arXiv Detail & Related papers (2020-01-06T05:36:57Z)