Generalized Flow Matching for Transition Dynamics Modeling
- URL: http://arxiv.org/abs/2410.15128v1
- Date: Sat, 19 Oct 2024 15:03:39 GMT
- Title: Generalized Flow Matching for Transition Dynamics Modeling
- Authors: Haibo Wang, Yuxuan Qiu, Yanze Wang, Rob Brekelmans, Yuanqi Du
- Abstract summary: We propose a data-driven approach to warm up the simulation by learning nonlinear interpolations from local dynamics.
Specifically, we infer a potential energy function from local dynamics data to find plausible paths between two metastable states.
We validate the effectiveness of the proposed method to sample probable paths on both synthetic and real-world molecular systems.
- Score: 14.76793118877456
- Abstract: Simulating transition dynamics between metastable states is a fundamental challenge in dynamical systems and stochastic processes with wide real-world applications in understanding protein folding, chemical reactions and neural activities. However, the computational challenge often lies in sampling exponentially many paths, of which only a small fraction ends in the target metastable state due to the existence of high energy barriers. To amortize the cost, we propose a data-driven approach to warm up the simulation by learning nonlinear interpolations from local dynamics. Specifically, we infer a potential energy function from local dynamics data. To find plausible paths between two metastable states, we formulate a generalized flow matching framework that learns a vector field to sample probable paths between the two marginal densities under the learned energy function. Furthermore, we iteratively refine the model by assigning importance weights to the sampled paths and buffering more likely paths for training. We validate the effectiveness of the proposed method to sample probable paths on both synthetic and real-world molecular systems.
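To make the training objective concrete, below is a minimal, illustrative flow-matching sketch in PyTorch. It uses the standard linear interpolant between samples from the two metastable basins and only indicates, in comments, where the paper's energy-informed nonlinear interpolations and importance-weighted path buffer would enter; all names (VelocityField, sample_state_a/b) are hypothetical placeholders, not the authors' code.

```python
# Minimal flow-matching sketch (assumed linear interpolant, not the paper's
# generalized, energy-informed variant).
import torch
import torch.nn as nn

class VelocityField(nn.Module):
    """Small MLP v_theta(x, t) approximating the flow-matching vector field."""
    def __init__(self, dim: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, x: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, t], dim=-1))

def sample_state_a(n: int, dim: int) -> torch.Tensor:
    # Placeholder for samples drawn from the first metastable basin.
    return torch.randn(n, dim) - 2.0

def sample_state_b(n: int, dim: int) -> torch.Tensor:
    # Placeholder for samples drawn from the second metastable basin.
    return torch.randn(n, dim) + 2.0

def flow_matching_step(v: VelocityField, opt: torch.optim.Optimizer,
                       batch: int = 256, dim: int = 2) -> float:
    x0, x1 = sample_state_a(batch, dim), sample_state_b(batch, dim)
    t = torch.rand(batch, 1)
    # Linear interpolant; the generalized framework would instead use a
    # nonlinear interpolation informed by the learned potential energy.
    xt = (1.0 - t) * x0 + t * x1
    target = x1 - x0  # conditional velocity for the linear path
    loss = ((v(xt, t) - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

if __name__ == "__main__":
    dim = 2
    v = VelocityField(dim)
    opt = torch.optim.Adam(v.parameters(), lr=1e-3)
    for step in range(1000):
        flow_matching_step(v, opt, dim=dim)
    # Iterative refinement (not shown): integrate dx/dt = v(x, t), weight the
    # resulting paths by their likelihood under the learned energy, and buffer
    # the high-weight paths as additional training data.
```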
Related papers
- Off-dynamics Conditional Diffusion Planners [15.321049697197447]
This work explores the use of more readily available, albeit off-dynamics, datasets to address the challenge of data scarcity in Offline RL.
We propose a novel approach using conditional Diffusion Probabilistic Models (DPMs) to learn the joint distribution of the large-scale off-dynamics dataset and the limited target dataset.
arXiv Detail & Related papers (2024-10-16T04:56:43Z)
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- Dynamical Mean-Field Theory of Self-Attention Neural Networks [0.0]
Transformer-based models have demonstrated exceptional performance across diverse domains.
Little is known about how they operate or what their expected dynamics are.
We use methods for the study of asymmetric Hopfield networks in nonequilibrium regimes.
arXiv Detail & Related papers (2024-06-11T13:29:34Z)
- Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers [10.210248065533133]
We introduce a novel approach that trains diffusion path samplers for transition path sampling.
We recast the problem as an amortized sampling of the target path measure.
We evaluate our approach, coined TPS-DPS, on a synthetic double-well potential and three peptides.
arXiv Detail & Related papers (2024-05-30T11:32:42Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Synthetic location trajectory generation using categorical diffusion models [50.809683239937584]
Diffusion probabilistic models (DPMs) have rapidly evolved to become one of the predominant generative models for the simulation of synthetic data.
We propose using DPMs for the generation of synthetic individual location trajectories (ILTs) which are sequences of variables representing physical locations visited by individuals.
arXiv Detail & Related papers (2024-02-19T15:57:39Z)
- Amortized Network Intervention to Steer the Excitatory Point Processes [8.15558505134853]
Excitatory point processes (i.e., event flows) occurring over dynamic graphs provide a fine-grained model to capture how discrete events may spread over time and space.
How to effectively steer the event flows by modifying the dynamic graph structures presents an interesting problem, motivated by curbing the spread of infectious diseases.
We design an Amortized Network Interventions framework, allowing for the pooling of optimal policies from history and other contexts.
arXiv Detail & Related papers (2023-10-06T11:17:28Z)
- Learning minimal representations of stochastic processes with variational autoencoders [52.99137594502433]
We introduce an unsupervised machine learning approach to determine the minimal set of parameters required to describe a process.
Our approach enables the autonomous discovery of unknown parameters describing stochastic processes.
arXiv Detail & Related papers (2023-07-21T14:25:06Z)
- Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow).
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
arXiv Detail & Related papers (2022-06-29T22:19:03Z)
- Generative methods for sampling transition paths in molecular dynamics [0.0]
Simulating transition paths linking one metastable state to another is difficult with direct numerical methods.
We explore two approaches to more efficiently generate transition paths: sampling methods based on generative models such as variational autoencoders, and importance sampling methods based on reinforcement learning.
arXiv Detail & Related papers (2022-05-05T17:50:10Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme for approximating real data densities (a particle-level sketch of such a gradient flow follows this list).
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
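As referenced in the last entry above, the Wasserstein gradient flow of the relative entropy corresponds, at the particle level and via the Fokker-Planck equation, to Langevin-type updates under the energy. The sketch below illustrates only that correspondence with a hand-written double-well `toy_energy` standing in for a learned EBM; it is an assumed first-order discretization, not the cited paper's second-order scheme.

```python
# Particle-level sketch of a Wasserstein gradient flow: unadjusted Langevin
# updates under a toy energy (assumed stand-in for a learned EBM).
import torch

def toy_energy(x: torch.Tensor) -> torch.Tensor:
    # Hypothetical energy: a simple double well along the first coordinate.
    return (x[:, 0] ** 2 - 1.0) ** 2 + 0.5 * (x[:, 1:] ** 2).sum(dim=1)

def langevin_step(x: torch.Tensor, step_size: float = 1e-2) -> torch.Tensor:
    """One update x <- x - h * grad E(x) + sqrt(2h) * noise."""
    x = x.detach().requires_grad_(True)
    grad = torch.autograd.grad(toy_energy(x).sum(), x)[0]
    noise = torch.randn_like(x)
    return (x - step_size * grad + (2.0 * step_size) ** 0.5 * noise).detach()

if __name__ == "__main__":
    particles = torch.randn(1024, 2)   # samples from the initial density
    for _ in range(2000):              # discretized gradient flow
        particles = langevin_step(particles)
    # `particles` now approximates samples from exp(-toy_energy).
```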