Path Gradients after Flow Matching
- URL: http://arxiv.org/abs/2505.10139v1
- Date: Thu, 15 May 2025 10:13:45 GMT
- Title: Path Gradients after Flow Matching
- Authors: Lorenz Vaitl, Leon Klein,
- Abstract summary: Flow Matching has helped speed up Continuous Normalizing Flows (CNFs). We investigate the benefits of using path gradients to fine-tune CNFs initially trained by Flow Matching. Our experiments show that this hybrid approach yields up to a threefold increase in sampling efficiency for molecular systems.
- Score: 2.07180164747172
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Boltzmann Generators have emerged as a promising machine learning tool for generating samples from equilibrium distributions of molecular systems using Normalizing Flows and importance weighting. Recently, Flow Matching has helped speed up Continuous Normalizing Flows (CNFs), scale them to more complex molecular systems, and minimize the length of the flow integration trajectories. We investigate the benefits of using path gradients to fine-tune CNFs initially trained by Flow Matching, in the setting where a target energy is known. Our experiments show that this hybrid approach yields up to a threefold increase in sampling efficiency for molecular systems, all while using the same model, a similar computational budget and without the need for additional sampling. Furthermore, by measuring the length of the flow trajectories during fine-tuning, we show that path gradients largely preserve the learned structure of the flow.
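The importance weighting the abstract refers to can be made concrete with a minimal sketch. This is not the paper's molecular setup: the proposal and target below are toy 1-D Gaussians, and the efficiency metric shown (Kish effective sample size over N) is one common way to quantify "sampling efficiency" in this setting.

```python
import numpy as np

# Minimal sketch of importance weighting as used by Boltzmann Generators:
# samples x ~ q (the flow's model density) are reweighted toward the
# Boltzmann target p(x) ∝ exp(-U(x)).  Toy 1-D densities for illustration.

rng = np.random.default_rng(0)

def log_q(x):          # model (proposal) log-density: standard normal
    return -0.5 * x**2 - 0.5 * np.log(2 * np.pi)

def neg_energy(x):     # unnormalized target log-density: N(0.5, 1)
    return -0.5 * (x - 0.5)**2

x = rng.standard_normal(10_000)            # samples from q
log_w = neg_energy(x) - log_q(x)           # log importance weights
w = np.exp(log_w - log_w.max())            # subtract max for numerical stability
w /= w.sum()                               # self-normalized weights

# Kish effective sample size; ESS / N is a standard sampling-efficiency
# proxy, and raising it is what fine-tuning the flow aims at.
ess = 1.0 / np.sum(w**2)
print(f"ESS fraction: {ess / len(x):.3f}")
```

Because the weights are self-normalized, the target's unknown normalization constant cancels, which is why only the energy U(x) needs to be known.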
Related papers
- Scalable Equilibrium Sampling with Sequential Boltzmann Generators [60.00515282300297]
We extend the Boltzmann generator framework with two key contributions. The first is a highly efficient Transformer-based normalizing flow operating directly on all-atom Cartesian coordinates. The second is inference-time scaling of flow samples using a continuous-time variant of sequential Monte Carlo.
arXiv Detail & Related papers (2025-02-25T18:59:13Z) - Flow Perturbation to Accelerate Unbiased Sampling of Boltzmann distribution [2.103187931015573]
Flow-based generative models have been employed for sampling the Boltzmann distribution, but their application is hindered by the computational cost of obtaining the Jacobian of the flow.
We introduce the flow perturbation method, which incorporates optimized perturbations into the flow.
By reweighting trajectories generated by the perturbed flow, our method achieves unbiased sampling of the Boltzmann distribution with orders of magnitude speedup.
arXiv Detail & Related papers (2024-07-15T12:29:17Z) - Transition Path Sampling with Improved Off-Policy Training of Diffusion Path Samplers [10.210248065533133]
We introduce a novel approach that trains diffusion path samplers (DPS) to address the transition path sampling problem. We reformulate the problem as amortized sampling from the transition path distribution by minimizing the log-variance divergence between the path distribution induced by DPS and the transition path distribution. We extensively evaluate our approach, termed TPS-DPS, on a synthetic system, a small peptide, and challenging fast-folding proteins, demonstrating that it produces more realistic and diverse transition pathways than existing baselines.
arXiv Detail & Related papers (2024-05-30T11:32:42Z) - Verlet Flows: Exact-Likelihood Integrators for Flow-Based Generative Models [4.9425328004453375]
We present Verlet flows, a class of CNFs on an augmented state-space inspired by symplectic integrators from Hamiltonian dynamics.
Verlet flows provide exact-likelihood generative models which generalize coupled flow architectures from a non-continuous setting while imposing minimal expressivity constraints.
On experiments over toy densities, we demonstrate that the variance of the commonly used Hutchinson trace estimator makes it unsuitable for importance sampling, whereas Verlet flows perform comparably to full autograd trace computations while being significantly faster.
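The Hutchinson estimator mentioned above can be sketched directly. A hedged illustration, not the paper's experiment: the matrix below is a random stand-in for a flow Jacobian, and Rademacher probe vectors are one common choice.

```python
import numpy as np

# Hutchinson's stochastic trace estimator: tr(J) = E[v^T J v] for any
# probe distribution with E[v v^T] = I.  The estimator is unbiased, but
# with a single probe (as typically used in CNF training) its variance
# can be large, which is the issue raised for importance sampling.

rng = np.random.default_rng(1)
d = 50
J = rng.standard_normal((d, d))   # stand-in for a flow Jacobian

exact = np.trace(J)

def hutchinson(J, num_probes, rng):
    # Rademacher probes: entries are +/-1 with equal probability.
    v = rng.choice([-1.0, 1.0], size=(num_probes, J.shape[0]))
    return np.mean(np.einsum("pi,ij,pj->p", v, J, v))

few = hutchinson(J, 1, rng)        # one probe: high variance
many = hutchinson(J, 20_000, rng)  # averaging tightens the estimate
print(exact, few, many)
```

Exact-likelihood constructions such as Verlet flows avoid this estimator entirely, trading stochastic trace estimates for structured Jacobians.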
arXiv Detail & Related papers (2024-05-05T03:47:56Z) - Transition Path Sampling with Boltzmann Generator-based MCMC Moves [49.69940954060636]
Current approaches to sample transition paths use Markov chain Monte Carlo and rely on time-intensive molecular dynamics simulations to find new paths.
Our approach operates in the latent space of a normalizing flow that maps from the molecule's Boltzmann distribution to a Gaussian, where we propose new paths without requiring molecular simulations.
arXiv Detail & Related papers (2023-12-08T20:05:33Z) - Gaussian Interpolation Flows [11.340847429991525]
This work investigates the well-posedness of simulation-free continuous normalizing flows built on Gaussian denoising.
We establish the Lipschitz regularity of the flow velocity field, the existence and uniqueness of the flow, and the continuity of the flow map.
We also study the stability of these flows in source distributions and perturbations of the velocity field, using the quadratic Wasserstein distance as a metric.
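The quadratic Wasserstein distance used as the stability metric above has a simple closed form in one dimension, which a short sketch can demonstrate. The distributions below are illustrative, not from the paper.

```python
import numpy as np

# In 1-D, the quadratic (2-)Wasserstein distance reduces to the L2
# distance between quantile functions: optimal transport pairs the
# sorted samples of two equal-size draws.

rng = np.random.default_rng(2)
n = 100_000
a = rng.standard_normal(n)          # source ~ N(0, 1)
b = 2.0 + rng.standard_normal(n)    # perturbed source ~ N(2, 1)

def w2_1d(x, y):
    # Monotone (sorted) coupling is optimal in one dimension.
    return np.sqrt(np.mean((np.sort(x) - np.sort(y))**2))

print(w2_1d(a, b))   # ~ 2.0: for equal-variance Gaussians, W2 = |mean shift|
```

For Gaussians with equal covariance, W2 equals the distance between the means, which makes this toy case easy to check.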
arXiv Detail & Related papers (2023-11-20T00:59:20Z) - Diffusion Generative Flow Samplers: Improving learning signals through
partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets).
arXiv Detail & Related papers (2023-10-04T09:39:05Z) - Equivariant flow matching [0.9208007322096533]
We introduce equivariant flow matching, a new training objective for equivariant continuous normalizing flows (CNFs).
Equivariant flow matching exploits the physical symmetries of the target energy for efficient, simulation-free training of equivariant CNFs.
Our results show that the equivariant flow matching objective yields flows with shorter integration paths, improved sampling efficiency, and higher scalability compared to existing methods.
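The flow-matching objective that the equivariant variant builds on is a simple velocity regression, which a minimal sketch makes concrete. This is the plain (non-equivariant) conditional objective on toy data; the constant "model" below is a deliberate simplification, optimal only for this pure-shift example.

```python
import numpy as np

# Conditional flow matching: for a linear interpolant
# x_t = (1 - t) * x0 + t * x1, the regression target for the velocity
# field at (x_t, t) is x1 - x0.  Toy data: x1 is a shifted copy of x0.

rng = np.random.default_rng(3)
d, n = 2, 4096

x0 = rng.standard_normal((n, d))         # base (Gaussian) samples
shift = np.array([3.0, -1.0])
x1 = x0 + shift                          # toy "data" distribution
t = rng.uniform(size=(n, 1))
xt = (1 - t) * x0 + t * x1               # point on the straight path
target_v = x1 - x0                       # conditional velocity target

def model_v(xt, t):
    # Constant-velocity "model": exactly right for this pure-shift toy.
    return np.broadcast_to(shift, xt.shape)

loss = np.mean(np.sum((model_v(xt, t) - target_v)**2, axis=1))
print(loss)   # ~ 0 (up to floating-point rounding) for the exact field
```

Straight-line interpolants like this one are also what gives flow-matched CNFs their short integration paths, the property the abstract highlights.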
arXiv Detail & Related papers (2023-06-26T19:40:10Z) - Conditioning Normalizing Flows for Rare Event Sampling [61.005334495264194]
We propose a transition path sampling scheme based on neural-network generated configurations.
We show that this approach enables the resolution of both the thermodynamics and kinetics of the transition region.
arXiv Detail & Related papers (2022-07-29T07:56:10Z) - Manifold Interpolating Optimal-Transport Flows for Trajectory Inference [64.94020639760026]
We present a method called Manifold Interpolating Optimal-Transport Flow (MIOFlow).
MIOFlow learns continuous population dynamics from static snapshot samples taken at sporadic timepoints.
We evaluate our method on simulated data with bifurcations and merges, as well as scRNA-seq data from embryoid body differentiation, and acute myeloid leukemia treatment.
arXiv Detail & Related papers (2022-06-29T22:19:03Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.