Learning System Parameters from Turing Patterns
- URL: http://arxiv.org/abs/2108.08542v1
- Date: Thu, 19 Aug 2021 08:04:37 GMT
- Title: Learning System Parameters from Turing Patterns
- Authors: David Schnörr, Christoph Schnörr
- Abstract summary: The Turing mechanism describes the emergence of spatial patterns due to spontaneous symmetry breaking in reaction-diffusion processes.
This paper introduces an approach to the prediction of Turing parameter values from observed Turing patterns.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Turing mechanism describes the emergence of spatial patterns due to
spontaneous symmetry breaking in reaction-diffusion processes and underlies
many developmental processes. Identifying Turing mechanisms in biological
systems poses a challenging problem. This paper introduces an approach to the
prediction of Turing parameter values from observed Turing patterns. The
parameter values correspond to a parametrized system of reaction-diffusion
equations that generate Turing patterns as steady states. The Gierer-Meinhardt
model with four parameters is chosen as a case study. A novel invariant pattern
representation based on resistance distance histograms is employed, along with
Wasserstein kernels, in order to cope with the highly variable arrangement of
local pattern structure that depends on the initial conditions which are
assumed to be unknown. This makes it possible to compute physically plausible
distances between patterns, to cluster patterns and, above all, to predict
model parameters: for small training sets, classical state-of-the-art
methods including operator-valued kernels outperform neural networks that are
applied to raw pattern data, whereas for large training sets the latter are
more accurate. Excellent predictions are obtained for single parameter values
and reasonably accurate results for jointly predicting all parameter values.
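The pattern representation described in the abstract can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the graph construction (a 4-neighbour pixel grid with similarity-based edge weights) and the comparison of two histograms by the 1-D Wasserstein-1 distance are assumptions made for the sketch. Resistance distances are obtained from the Moore-Penrose pseudoinverse of the graph Laplacian.

```python
import numpy as np

def resistance_distances(pattern):
    # Nodes are pixels; 4-neighbour edges are weighted by value similarity,
    # so the pattern shapes the effective resistances.
    # (Assumed construction; the paper's exact graph may differ.)
    h, w = pattern.shape
    n = h * w
    idx = np.arange(n).reshape(h, w)
    W = np.zeros((n, n))
    for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
        for i, j in zip(a.ravel(), b.ravel()):
            W[i, j] = W[j, i] = np.exp(-abs(pattern.flat[i] - pattern.flat[j]))
    L = np.diag(W.sum(axis=1)) - W        # graph Laplacian
    Lp = np.linalg.pinv(L)                # Moore-Penrose pseudoinverse
    d = np.diag(Lp)
    R = d[:, None] + d[None, :] - 2 * Lp  # R_ij = Lp_ii + Lp_jj - 2 Lp_ij
    return R[np.triu_indices(n, k=1)]     # pairwise resistance distances

def wasserstein1(p, q, bin_width):
    # W1 between two normalized histograms on a shared 1-D grid:
    # the L1 distance between their CDFs, scaled by the bin width.
    return np.sum(np.abs(np.cumsum(p - q))) * bin_width

# Two toy "patterns" standing in for Turing steady states:
# vertical stripes vs. a checkerboard.
rows, cols = np.indices((8, 8))
stripes = (cols // 2 % 2).astype(float)
checks = ((rows + cols) % 2).astype(float)

r1, r2 = resistance_distances(stripes), resistance_distances(checks)
bins = np.linspace(0.0, max(r1.max(), r2.max()), 31)
p, _ = np.histogram(r1, bins=bins); p = p / p.sum()
q, _ = np.histogram(r2, bins=bins); q = q / q.sum()
print(wasserstein1(p, q, bins[1] - bins[0]))  # distance between the histograms
```

The histogram is invariant to where the stripes or spots happen to sit in the domain, which is the point of the representation: patterns grown from different random initial conditions still map to comparable descriptors, and the W1 distance between histograms can then feed a kernel for clustering or regression.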
Related papers
- Stochastic parameter reduced-order model based on hybrid machine learning approaches [4.378407481656902]
This paper constructs a Convolutional Autoencoder-Reservoir Computing-Normalizing Flow algorithm framework.
The framework is used to characterize the evolution of latent state variables.
In this way, a data-driven reduced-order model is constructed to describe the complex system and its dynamic behavior.
arXiv Detail & Related papers (2024-03-24T06:52:37Z) - Deep Learning for Fast Inference of Mechanistic Models' Parameters [0.28675177318965045]
We propose using Deep Neural Networks (NNs) to directly predict parameters of mechanistic models given observations.
We consider a training procedure that combines Neural Networks and mechanistic models.
We find that, while Neural Network estimates are slightly improved by further fitting, these estimates are measurably better than those obtained from the fitting procedure alone.
arXiv Detail & Related papers (2023-12-05T22:16:54Z) - Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing such processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z) - Design of Turing Systems with Physics-Informed Neural Networks [0.0]
We investigate the use of physics-informed neural networks as a tool to infer key parameters in reaction-diffusion systems.
Our proof-of-concept results show that the method is able to infer parameters for different pattern modes and types with errors of less than 10%.
arXiv Detail & Related papers (2022-11-24T08:01:22Z) - Dynamically-Scaled Deep Canonical Correlation Analysis [77.34726150561087]
Canonical Correlation Analysis (CCA) is a method for feature extraction from two views by finding maximally correlated linear projections of them.
We introduce a novel dynamic scaling method for training an input-dependent canonical correlation model.
arXiv Detail & Related papers (2022-03-23T12:52:49Z) - DriPP: Driven Point Processes to Model Stimuli Induced Patterns in M/EEG Signals [62.997667081978825]
We develop a novel statistical point process model, called driven temporal point processes (DriPP).
We derive a fast and principled expectation-maximization (EM) algorithm to estimate the parameters of this model.
Results on standard MEG datasets demonstrate that our methodology reveals event-related neural responses.
arXiv Detail & Related papers (2021-12-08T13:07:21Z) - Inverting brain grey matter models with likelihood-free inference: a tool for trustable cytoarchitecture measurements [62.997667081978825]
Characterisation of the brain grey matter cytoarchitecture with quantitative sensitivity to soma density and volume remains an unsolved challenge in dMRI.
We propose a new forward model, specifically a new system of equations, requiring a few relatively sparse b-shells.
We then apply modern tools from Bayesian analysis known as likelihood-free inference (LFI) to invert our proposed model.
arXiv Detail & Related papers (2021-11-15T09:08:27Z) - Combining data assimilation and machine learning to estimate parameters of a convective-scale model [0.0]
Errors in the representation of clouds in convection-permitting numerical weather prediction models can be introduced by different sources.
In this work, we look at the problem of parameter estimation through an artificial intelligence lens by training two types of artificial neural networks.
arXiv Detail & Related papers (2021-09-07T09:17:29Z) - Post-mortem on a deep learning contest: a Simpson's paradox and the complementary roles of scale metrics versus shape metrics [61.49826776409194]
We analyze a corpus of models made publicly-available for a contest to predict the generalization accuracy of neural network (NN) models.
We identify what amounts to a Simpson's paradox: "scale" metrics perform well overall but poorly on sub-partitions of the data.
We present two novel shape metrics, one data-independent, and the other data-dependent, which can predict trends in the test accuracy of a series of NNs.
arXiv Detail & Related papers (2021-06-01T19:19:49Z) - Variational inference formulation for a model-free simulation of a dynamical system with unknown parameters by a recurrent neural network [8.616180927172548]
We propose a "model-free" simulation of a dynamical system with unknown parameters without prior knowledge.
The deep learning model aims to jointly learn the nonlinear time marching operator and the effects of the unknown parameters from a time series dataset.
It is found that the proposed deep learning model is capable of correctly identifying the dimensions of the random parameters and learning a representation of complex time series data.
arXiv Detail & Related papers (2020-02-06T12:35:50Z) - Generating diverse and natural text-to-speech samples using a quantized fine-grained VAE and auto-regressive prosody prior [53.69310441063162]
This paper proposes a sequential prior in a discrete latent space which can generate more natural-sounding samples.
We evaluate the approach using listening tests, objective metrics of automatic speech recognition (ASR) performance, and measurements of prosody attributes.
arXiv Detail & Related papers (2020-02-06T12:35:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.