Neural Continuous-Time Markov Models
- URL: http://arxiv.org/abs/2212.05378v1
- Date: Sun, 11 Dec 2022 00:07:41 GMT
- Title: Neural Continuous-Time Markov Models
- Authors: Majerle Reeves and Harish S. Bhat
- Abstract summary: We develop a method to learn a continuous-time Markov chain's transition rate functions from fully observed time series.
We show that our method learns these transition rates with considerably more accuracy than log-linear methods.
- Score: 2.28438857884398
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Continuous-time Markov chains are used to model stochastic systems where
transitions can occur at irregular times, e.g., birth-death processes, chemical
reaction networks, population dynamics, and gene regulatory networks. We
develop a method to learn a continuous-time Markov chain's transition rate
functions from fully observed time series. In contrast with existing methods,
our method allows for transition rates to depend nonlinearly on both state
variables and external covariates. The Gillespie algorithm is used to generate
trajectories of stochastic systems where propensity functions (reaction rates)
are known. Our method can be viewed as the inverse: given trajectories of a
stochastic reaction network, we generate estimates of the propensity functions.
While previous methods used linear or log-linear methods to link transition
rates to covariates, we use neural networks, increasing the capacity and
potential accuracy of learned models. In the chemical context, this enables the
method to learn propensity functions from non-mass-action kinetics. We test our
method with synthetic data generated from a variety of systems with known
transition rates. We show that our method learns these transition rates with
considerably more accuracy than log-linear methods, in terms of mean absolute
error between ground truth and predicted transition rates. We also demonstrate
an application of our methods to open-loop control of a continuous-time Markov
chain.
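The forward problem named in the abstract, simulating a CTMC trajectory with the Gillespie algorithm, can be sketched for a simple birth-death process. The rate constants and propensity functions below are illustrative assumptions, not values from the paper:

```python
import math
import random

def gillespie_birth_death(x0, t_max, birth_rate=1.0, death_rate=0.1, seed=0):
    """Simulate a birth-death CTMC with Gillespie's direct method.

    Illustrative mass-action propensities:
      birth: a1(x) = birth_rate        (constant immigration)
      death: a2(x) = death_rate * x    (per-capita death)
    Returns the jump times and the state after each jump.
    """
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while True:
        a1, a2 = birth_rate, death_rate * x
        a0 = a1 + a2
        if a0 == 0.0:
            break  # absorbing state: no reaction can fire
        # Exponentially distributed waiting time until the next event
        t += -math.log(1.0 - rng.random()) / a0
        if t >= t_max:
            break
        # Pick which reaction fires, with probability proportional to its propensity
        x += 1 if rng.random() * a0 < a1 else -1
        times.append(t)
        states.append(x)
    return times, states

times, states = gillespie_birth_death(x0=5, t_max=50.0)
```

The paper's method is the inverse of this simulation step: given observed `(times, states)` trajectories, it estimates the propensity functions `a1`, `a2` rather than assuming their functional form.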
Related papers
- Laplace Transform Based Low-Complexity Learning of Continuous Markov Semigroups [22.951644463554352]
This paper presents a data-driven approach for learning Markov processes through the spectral decomposition of the infinitesimal generator (IG) of the Markov semigroup.
Existing techniques, including physics-informed kernel regression, are computationally expensive and limited in scope.
We propose a novel method that leverages the IG's resolvent, characterized by the Laplace transform of transfer operators.
arXiv Detail & Related papers (2024-10-18T14:02:06Z)
- From Biased to Unbiased Dynamics: An Infinitesimal Generator Approach [24.560340485988128]
We investigate learning the eigenfunctions of evolution operators for time-reversal invariant processes.
Many physical or chemical processes described by the Langevin equation involve transitions between metastable states separated by high potential barriers.
We propose a framework for learning from biased simulations rooted in the infinitesimal generator of the process and the associated resolvent operator.
arXiv Detail & Related papers (2024-06-13T12:02:51Z)
- Machine learning in and out of equilibrium [58.88325379746631]
Our study uses a Fokker-Planck approach, adapted from statistical physics, to explore these parallels.
We focus in particular on the stationary state of the system in the long-time limit, which in conventional SGD is out of equilibrium.
We propose a new variation of stochastic gradient Langevin dynamics (SGLD) that harnesses without-replacement minibatching.
arXiv Detail & Related papers (2023-06-06T09:12:49Z)
- Neural Markov Jump Processes [0.0]
We introduce an alternative variational inference algorithm for Markov jump processes which relies on neural ordinary differential equations.
Our methodology learns neural, continuous-time representations of the observed data, which are used to approximate the initial distribution and time-dependent transition probability rates of the posterior Markov jump process.
We test our approach on synthetic data sampled from ground-truth Markov jump processes, experimental switching ion channel data and molecular dynamics simulations.
arXiv Detail & Related papers (2023-05-31T11:10:29Z)
- Formal Controller Synthesis for Markov Jump Linear Systems with Uncertain Dynamics [64.72260320446158]
We propose a method for synthesising controllers for Markov jump linear systems.
Our method is based on a finite-state abstraction that captures both the discrete (mode-jumping) and continuous (stochastic linear) behaviour of the MJLS.
We apply our method to multiple realistic benchmark problems, in particular, a temperature control and an aerial vehicle delivery problem.
arXiv Detail & Related papers (2022-12-01T17:36:30Z)
- Score-based Continuous-time Discrete Diffusion Models [102.65769839899315]
We extend diffusion models to discrete variables by introducing a Markov jump process where the reverse process denoises via a continuous-time Markov chain.
We show that an unbiased estimator can be obtained by simply matching the conditional marginal distributions.
We demonstrate the effectiveness of the proposed method on a set of synthetic and real-world music and image benchmarks.
arXiv Detail & Related papers (2022-11-30T05:33:29Z)
- Markov Chain Monte Carlo for Continuous-Time Switching Dynamical Systems [26.744964200606784]
We propose a novel inference algorithm utilizing a Markov Chain Monte Carlo approach.
The presented Gibbs sampler allows one to efficiently obtain samples from the exact continuous-time posterior processes.
arXiv Detail & Related papers (2022-05-18T09:03:00Z)
- Stochastic Trajectory Prediction via Motion Indeterminacy Diffusion [88.45326906116165]
We present a new framework that formulates the trajectory prediction task as a reverse process of motion indeterminacy diffusion (MID).
We encode the historical behavior information and the social interactions as a state embedding, and devise a Transformer-based diffusion model to capture the temporal dependencies of trajectories.
Experiments on the human trajectory prediction benchmarks including the Stanford Drone and ETH/UCY datasets demonstrate the superiority of our method.
arXiv Detail & Related papers (2022-03-25T16:59:08Z)
- Variational Inference for Continuous-Time Switching Dynamical Systems [29.984955043675157]
We present a model based on a Markov jump process modulating a subordinated diffusion process.
We develop a new continuous-time variational inference algorithm.
We extensively evaluate our algorithm under the model assumption and for real-world examples.
arXiv Detail & Related papers (2021-09-29T15:19:51Z)
- Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
- Training Generative Adversarial Networks by Solving Ordinary Differential Equations [54.23691425062034]
We study the continuous-time dynamics induced by GAN training.
From this perspective, we hypothesise that instabilities in training GANs arise from the integration error.
We experimentally verify that well-known ODE solvers (such as Runge-Kutta) can stabilise training.
arXiv Detail & Related papers (2020-10-28T15:23:49Z)
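The inverse problem from the main abstract, estimating transition rate functions from an observed trajectory, hinges on the CTMC log-likelihood that a neural parameterization would maximize. The following is a minimal sketch of that objective; the network width, tanh/softplus architecture, and the toy path are illustrative assumptions, not details from the paper:

```python
import math
import random

def softplus(z):
    # Numerically stable softplus: log(1 + e^z), keeps rates strictly positive
    return math.log1p(math.exp(-abs(z))) + max(z, 0.0)

class RateMLP:
    """Tiny one-hidden-layer network mapping a scalar state x to K positive
    transition rates via a softplus output. Architecture is illustrative."""
    def __init__(self, hidden=8, n_rates=2, seed=0):
        rng = random.Random(seed)
        self.w1 = [rng.gauss(0, 1) for _ in range(hidden)]
        self.b1 = [rng.gauss(0, 1) for _ in range(hidden)]
        self.w2 = [[rng.gauss(0, 0.5) for _ in range(hidden)] for _ in range(n_rates)]
        self.b2 = [0.0] * n_rates

    def rates(self, x):
        h = [math.tanh(w * x + b) for w, b in zip(self.w1, self.b1)]
        return [softplus(sum(w * hi for w, hi in zip(row, h)) + b)
                for row, b in zip(self.w2, self.b2)]

def ctmc_log_likelihood(model, times, states, events):
    """Log-likelihood of a fully observed CTMC path.

    events[i] indexes which transition fired at times[i+1], taking the chain
    from states[i] to states[i+1]. Each holding interval contributes:
      + log(rate of the transition that fired)
      - (total exit rate) * (holding time)
    Maximizing this over the network weights is the learning objective.
    """
    ll = 0.0
    for i, k in enumerate(events):
        r = model.rates(states[i])
        dt = times[i + 1] - times[i]
        ll += math.log(r[k]) - sum(r) * dt
    return ll

# Toy fully observed path (illustrative): three jumps with event labels
model = RateMLP()
times = [0.0, 0.4, 1.1, 1.5]
states = [3, 4, 3, 4]
events = [0, 1, 0]  # 0 = birth, 1 = death
ll = ctmc_log_likelihood(model, times, states, events)
```

Because the rates come from a neural network rather than a linear or log-linear link, the same objective can in principle fit non-mass-action kinetics, which is the capacity gain the abstract emphasizes.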
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.