Automated Deep Abstractions for Stochastic Chemical Reaction Networks
- URL: http://arxiv.org/abs/2002.01889v1
- Date: Thu, 30 Jan 2020 13:49:58 GMT
- Title: Automated Deep Abstractions for Stochastic Chemical Reaction Networks
- Authors: Tatjana Petrov and Denis Repin
- Abstract summary: Low-level chemical reaction network (CRN) models give rise to a high-dimensional continuous-time Markov chain (CTMC).
A recently proposed abstraction method uses deep learning to replace this CTMC with a discrete-time continuous-space process.
In this paper, we propose to further automatise deep abstractions for CRNs by learning the optimal neural network architecture.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Predicting stochastic cellular dynamics as emerging from the mechanistic
models of molecular interactions is a long-standing challenge in systems
biology: low-level chemical reaction network (CRN) models give rise to a
high-dimensional continuous-time Markov chain (CTMC) which is computationally
demanding and often prohibitive to analyse in practice. A recently proposed
abstraction method uses deep learning to replace this CTMC with a discrete-time
continuous-space process, by training a mixture density deep neural network
with traces sampled at regular time intervals (which can be obtained either by
simulating a given CRN or as time-series data from experiments). The major
advantage of such an abstraction is that it produces a computational model that is
dramatically cheaper to execute, while preserving the statistical features of
the training data. In general, the abstraction accuracy improves with the
amount of training data. However, depending on the CRN, the overall quality of
the method -- the efficiency gain and abstraction accuracy -- will also depend
on the choice of neural network architecture given by hyper-parameters such as
the layer types and connections between them. As a consequence, in practice,
the modeller has to find a suitable architecture manually, for each given
CRN, through a tedious and time-consuming
trial-and-error cycle. In this paper, we propose to further automatise deep
abstractions for stochastic CRNs, through learning the optimal neural network
architecture along with learning the transition kernel of the abstract process.
Automated search of the architecture makes the method applicable directly to
any given CRN, which is time-saving for deep learning experts and crucial for
non-specialists. We implement the method and demonstrate its performance on a
number of representative CRNs with multi-modal emergent phenotypes.
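As an illustration of the abstraction scheme described in the abstract, the following is a minimal sketch (in PyTorch) of a mixture density network that maps the current species counts to the parameters of a Gaussian mixture over the state one time step later. All names here (StateMDN, nll, n_species, n_components, hidden) are illustrative placeholders, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.distributions as D

class StateMDN(nn.Module):
    """Gaussian-mixture transition kernel: an approximation of p(x_{t+dt} | x_t)."""

    def __init__(self, n_species: int, n_components: int = 4, hidden: int = 64):
        super().__init__()
        self.n_species = n_species
        self.n_components = n_components
        self.body = nn.Sequential(
            nn.Linear(n_species, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Heads for mixture weights, component means, and (log-)scales.
        self.logits = nn.Linear(hidden, n_components)
        self.means = nn.Linear(hidden, n_components * n_species)
        self.log_scales = nn.Linear(hidden, n_components * n_species)

    def forward(self, x: torch.Tensor) -> D.MixtureSameFamily:
        h = self.body(x)
        k, d = self.n_components, self.n_species
        mix = D.Categorical(logits=self.logits(h))
        comp = D.Independent(
            D.Normal(self.means(h).view(-1, k, d),
                     self.log_scales(h).view(-1, k, d).exp()),
            1,
        )
        # A discrete-time, continuous-space surrogate for one CTMC time step.
        return D.MixtureSameFamily(mix, comp)

def nll(model: StateMDN, x_t: torch.Tensor, x_next: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood of observed transitions x_t -> x_{t+dt}."""
    return -model(x_t).log_prob(x_next).mean()
```

Under the same caveat, the automated architecture search can be sketched as an outer loop over hyper-parameters. The paper learns the architecture along with the transition kernel; this toy version only performs a random search over two hyper-parameters and keeps the candidate with the best training likelihood.

```python
import random

def random_architecture_search(x_t, x_next, n_trials: int = 10, epochs: int = 200):
    """Toy stand-in for automated architecture search over MDN hyper-parameters."""
    best_cfg, best_model, best_nll = None, None, float("inf")
    for _ in range(n_trials):
        cfg = {"n_components": random.choice([2, 4, 8]),
               "hidden": random.choice([32, 64, 128])}
        model = StateMDN(x_t.shape[1], **cfg)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nll(model, x_t, x_next)
            loss.backward()
            opt.step()
        score = nll(model, x_t, x_next).item()
        if score < best_nll:
            best_cfg, best_model, best_nll = cfg, model, score
    return best_cfg, best_model
```

In practice one would score candidates on held-out validation traces rather than the training set, and simulate the abstract process by iterating `model(x).sample()` from an initial state.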
Related papers
- Parameter Estimation of Long Memory Stochastic Processes with Deep Neural Networks (2024-10-03)
  We present a purely deep neural network-based approach for estimating long memory parameters of time series models. Parameters such as the Hurst exponent are critical in characterizing the long-range dependence, roughness, and self-similarity of processes.
- Self-STORM: Deep Unrolled Self-Supervised Learning for Super-Resolution Microscopy (2024-03-25)
  We introduce deep unrolled self-supervised learning, which alleviates the need for training data by training a sequence-specific, model-based autoencoder. Our proposed method exceeds the performance of its supervised counterparts.
- Knowledge Enhanced Conditional Imputation for Healthcare Time-series (2023-12-27)
  Conditional Self-Attention Imputation (CSAI) is a novel recurrent neural network architecture designed to address the challenges of complex missing-data patterns. CSAI extends current state-of-the-art neural network-based imputation methods by introducing key modifications specifically adapted to EHR data characteristics. This work significantly advances the state of neural network imputation applied to EHRs by more closely aligning algorithmic imputation with clinical realities.
- How neural networks learn to classify chaotic time series (2023-06-04)
  We study the inner workings of neural networks trained to classify regular-versus-chaotic time series. We find that the relation between input periodicity and activation periodicity is key to the performance of LKCNN models.
- Continuous time recurrent neural networks: overview and application to forecasting blood glucose in the intensive care unit (2023-04-14)
  Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations. We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
- Intelligence Processing Units Accelerate Neuromorphic Learning (2022-11-19)
  Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency. We present an IPU-optimized release of our custom SNN Python package, snnTorch (a minimal usage sketch follows this list).
- Selective Memory Recursive Least Squares: Recast Forgetting into Memory in RBF Neural Network Based Real-Time Learning (2022-11-15)
  In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used. This paper proposes a real-time training method named selective memory recursive least squares (SMRLS), in which the classical forgetting mechanisms are recast into a memory mechanism. With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions, and a synthesized objective function is developed using synthesized samples from each partition.
- Oscillatory Fourier Neural Network: A Compact and Efficient Architecture for Sequential Processing (2021-09-14)
  We propose a novel neuron model with a cosine activation and a time-varying component for sequential processing. The proposed neuron provides an efficient building block for projecting sequential inputs into the spectral domain. Applying the proposed model to sentiment analysis on the IMDB dataset reaches 89.4% test accuracy within 5 epochs.
- Deep Cellular Recurrent Network for Efficient Analysis of Time-Series Data with Spatial Information (2021-01-12)
  This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information. The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
- Action-Conditional Recurrent Kalman Networks For Forward and Inverse Dynamics Learning (2020-10-20)
  Estimating accurate forward and inverse dynamics models is a crucial component of model-based control for robots. We present two architectures for forward model learning and one for inverse model learning. Both architectures significantly outperform existing model learning frameworks, as well as analytical models, in terms of prediction performance.
- Towards an Automatic Analysis of CHO-K1 Suspension Growth in Microfluidic Single-cell Cultivation (2020-10-20)
  We propose a novel machine learning architecture which allows us to infuse a deep neural network with human-powered abstraction at the level of data. Specifically, we train a generative model simultaneously on natural and synthetic data, so that it learns a shared representation from which a target variable, such as the cell count, can be reliably estimated.
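Since the snnTorch package mentioned in the "Intelligence Processing Units" entry above is publicly available, here is a minimal usage sketch. It shows the standard CPU/GPU API (a single leaky integrate-and-fire neuron stepped over a toy input current), not the IPU-optimized release discussed in that paper; the input values are arbitrary illustrative data.

```python
import torch
import snntorch as snn

lif = snn.Leaky(beta=0.9)   # leaky integrate-and-fire neuron with membrane decay rate beta
mem = lif.init_leaky()      # initial membrane potential

inputs = 0.5 * torch.rand(100, 1)  # toy input current over 100 time steps
spikes = []
for step in range(inputs.shape[0]):
    # Integrate the input into the membrane; emit a spike when the threshold is crossed.
    spk, mem = lif(inputs[step], mem)
    spikes.append(spk)

print(f"spikes emitted: {int(torch.stack(spikes).sum())}")
```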
This list is automatically generated from the titles and abstracts of the papers on this site.