ESS-Flow: Training-free guidance of flow-based models as inference in source space
- URL: http://arxiv.org/abs/2510.05849v1
- Date: Tue, 07 Oct 2025 12:11:58 GMT
- Title: ESS-Flow: Training-free guidance of flow-based models as inference in source space
- Authors: Adhithyan Kalaivanan, Zheng Zhao, Jens Sjölund, Fredrik Lindsten
- Abstract summary: We present ESS-Flow, a gradient-free method that leverages the typically Gaussian prior of the source distribution in flow-based models to perform inference directly in the source space using Elliptical Slice Sampling. We demonstrate its effectiveness on designing materials with desired target properties and predicting protein structures from sparse inter-residue distance measurements.
- Score: 15.077556801319693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Guiding pretrained flow-based generative models for conditional generation or to produce samples with desired target properties enables solving diverse tasks without retraining on paired data. We present ESS-Flow, a gradient-free method that leverages the typically Gaussian prior of the source distribution in flow-based models to perform Bayesian inference directly in the source space using Elliptical Slice Sampling. ESS-Flow only requires forward passes through the generative model and observation process, no gradient or Jacobian computations, and is applicable even when gradients are unreliable or unavailable, such as with simulation-based observations or quantization in the generation or observation process. We demonstrate its effectiveness on designing materials with desired target properties and predicting protein structures from sparse inter-residue distance measurements.
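The abstract describes Bayesian inference in the Gaussian source space via Elliptical Slice Sampling, using only forward passes. As a rough illustration (not the authors' code), the sketch below implements one standard ESS update (Murray, Adams & MacKay, 2010) for a target of the form N(0, I) prior times a black-box log-likelihood; in the ESS-Flow setting the `log_lik` callable would push a source sample through the pretrained flow and the observation process. The function names and interface here are hypothetical.

```python
import numpy as np

def elliptical_slice(x, log_lik, rng):
    """One Elliptical Slice Sampling update targeting N(0, I) x exp(log_lik).

    Requires only evaluations of log_lik (forward passes), no gradients.
    """
    nu = rng.standard_normal(x.shape)            # auxiliary draw from the prior
    log_y = log_lik(x) + np.log(rng.uniform())   # slice threshold below current state
    theta = rng.uniform(0.0, 2.0 * np.pi)        # initial angle on the ellipse
    lo, hi = theta - 2.0 * np.pi, theta          # bracket containing theta = 0
    while True:
        # point on the ellipse through x and nu; theta = 0 recovers x exactly,
        # so the shrinking loop is guaranteed to terminate
        x_prop = x * np.cos(theta) + nu * np.sin(theta)
        if log_lik(x_prop) > log_y:
            return x_prop
        # shrink the bracket toward theta = 0 and retry
        if theta < 0.0:
            lo = theta
        else:
            hi = theta
        theta = rng.uniform(lo, hi)

# Toy check: 1-D posterior with prior N(0, 1) and likelihood N(x; 2, 0.5^2),
# whose exact posterior mean is 1.6.
rng = np.random.default_rng(0)
log_lik = lambda x: -0.5 * np.sum((x - 2.0) ** 2) / 0.25
x = np.zeros(1)
samples = []
for _ in range(5000):
    x = elliptical_slice(x, log_lik, rng)
    samples.append(x[0])
posterior_mean = np.mean(samples[500:])
```

Because each update only evaluates `log_lik`, the same loop works when the generative model or observation process is non-differentiable (e.g. simulators or quantization), which is the regime the abstract targets.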
Related papers
- DAISI: Data Assimilation with Inverse Sampling using Stochastic Interpolants [12.587156528707796]
We introduce DAISI, a scalable filtering algorithm built on flow-based generative models. We show that DAISI achieves accurate filtering results in regimes with sparse, noisy, and nonlinear observations.
arXiv Detail & Related papers (2025-11-29T00:02:45Z) - Nonparametric Data Attribution for Diffusion Models [57.820618036556084]
Data attribution for generative models seeks to quantify the influence of individual training examples on model outputs. We propose a nonparametric attribution method that operates entirely on data, measuring influence via patch-level similarity between generated and training images.
arXiv Detail & Related papers (2025-10-16T03:37:16Z) - ContinualFlow: Learning and Unlearning with Neural Flow Matching [13.628458744188325]
We introduce ContinualFlow, a principled framework for targeted unlearning in generative models via Flow Matching. Our method leverages an energy-based reweighting loss to softly subtract undesired regions of the data distribution without retraining from scratch or requiring direct access to the samples to be unlearned.
arXiv Detail & Related papers (2025-06-23T15:20:58Z) - Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts [64.34482582690927]
We provide an efficient and principled method for sampling from a sequence of annealed, geometric-averaged, or product distributions derived from pretrained score-based models. We propose Sequential Monte Carlo (SMC) resampling algorithms that leverage inference-time scaling to improve sampling quality.
arXiv Detail & Related papers (2025-03-04T17:46:51Z) - Generative prediction of flow fields around an obstacle using the diffusion model [12.094115138998745]
We propose a geometry-to-flow diffusion model that utilizes obstacle shape as input to predict the flow field around an obstacle. A Markov process is conditioned on the obstacle geometry, estimating the noise to be removed at each step. We train the geometry-to-flow diffusion model using a dataset of flows around simple obstacles, including circles, ellipses, rectangles, and triangles.
arXiv Detail & Related papers (2024-06-30T15:48:57Z) - Guided Flows for Generative Modeling and Decision Making [55.42634941614435]
We show that Guided Flows significantly improves the sample quality in conditional image generation and zero-shot text synthesis-to-speech.
Notably, we are the first to apply flow models for plan generation in the offline reinforcement learning setting, with a speedup compared to diffusion models.
arXiv Detail & Related papers (2023-11-22T15:07:59Z) - Uncertainty quantification and out-of-distribution detection using surjective normalizing flows [46.51077762143714]
We propose a simple approach using surjective normalizing flows to identify out-of-distribution data sets in deep neural network models.
We show that our method can reliably discern out-of-distribution data from in-distribution data.
arXiv Detail & Related papers (2023-11-01T09:08:35Z) - Diffusion Generative Flow Samplers: Improving learning signals through partial trajectory optimization [87.21285093582446]
Diffusion Generative Flow Samplers (DGFS) is a sampling-based framework where the learning process can be tractably broken down into short partial trajectory segments.
Our method takes inspiration from the theory developed for generative flow networks (GFlowNets)
arXiv Detail & Related papers (2023-10-04T09:39:05Z) - Nonlinear Isometric Manifold Learning for Injective Normalizing Flows [58.720142291102135]
We use isometries to separate manifold learning and density estimation.
We also employ autoencoders to design embeddings with explicit inverses that do not distort the probability distribution.
arXiv Detail & Related papers (2022-03-08T08:57:43Z) - Discrete Denoising Flows [87.44537620217673]
We introduce a new discrete flow-based model for categorical random variables: Discrete Denoising Flows (DDFs).
In contrast with other discrete flow-based models, our model can be locally trained without introducing gradient bias.
We show that DDFs outperform Discrete Flows on modeling a toy example, binary MNIST and Cityscapes segmentation maps, measured in log-likelihood.
arXiv Detail & Related papers (2021-07-24T14:47:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.