TFTF: Training-Free Targeted Flow for Conditional Sampling
- URL: http://arxiv.org/abs/2602.12932v1
- Date: Fri, 13 Feb 2026 13:41:35 GMT
- Title: TFTF: Training-Free Targeted Flow for Conditional Sampling
- Authors: Qianqian Qu, Jun S. Liu
- Abstract summary: We propose a training-free conditional sampling method for flow matching models based on importance sampling. Because a naïve application of importance sampling suffers from weight degeneracy in high-dimensional settings, we modify and incorporate a resampling technique from sequential Monte Carlo. Our framework requires no additional training, while providing theoretical guarantees of accuracy.
- Score: 1.4151684142137693
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a training-free conditional sampling method for flow matching models based on importance sampling. Because a naïve application of importance sampling suffers from weight degeneracy in high-dimensional settings, we modify and incorporate a resampling technique in sequential Monte Carlo (SMC) during intermediate stages of the generation process. To encourage generated samples to diverge along distinct trajectories, we derive a stochastic flow with adjustable noise strength to replace the deterministic flow at the intermediate stage. Our framework requires no additional training, while providing theoretical guarantees of asymptotic accuracy. Experimentally, our method significantly outperforms existing approaches on conditional sampling tasks for MNIST and CIFAR-10. We further demonstrate the applicability of our approach in higher-dimensional, multimodal settings through text-to-image generation experiments on CelebA-HQ.
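The abstract's core mechanism, importance weighting with intermediate SMC resampling along a stochastic flow, can be illustrated with a minimal toy sketch. This is not the paper's algorithm: the 1-D drift, the Gaussian conditioning target `y`, the noise scale `sigma`, and the ESS-based resampling trigger are all illustrative assumptions; only the general pattern (weight accumulation, effective-sample-size check, systematic resampling to combat weight degeneracy) reflects the described approach.

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: map normalized weights to particle indices."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n
    return np.searchsorted(np.cumsum(weights), positions)

def smc_conditional_sample(n_particles=256, n_steps=50, sigma=0.1, seed=0):
    """Toy 1-D sketch: particles follow a noisy flow while importance
    weights favor samples near a hypothetical conditioning value y.
    Resampling when the effective sample size (ESS) drops keeps the
    particle population from degenerating onto a few heavy weights."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n_particles)   # draws from the base (noise) distribution
    logw = np.zeros(n_particles)           # accumulated log importance weights
    y = 1.0                                # hypothetical conditioning value
    dt = 1.0 / n_steps
    for _ in range(n_steps):
        # stochastic flow: deterministic drift plus adjustable noise strength
        x = x - x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
        # accumulate a toy log-likelihood term pulling weights toward y
        logw += -0.5 * dt * (x - y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        ess = 1.0 / np.sum(w ** 2)
        if ess < n_particles / 2:          # resample only when ESS is low
            idx = systematic_resample(w, rng)
            x, logw = x[idx], np.zeros(n_particles)
    return x

samples = smc_conditional_sample()
```

The ESS threshold of `n_particles / 2` is a common heuristic: resampling at every step adds unnecessary variance, while never resampling lets a handful of particles carry all the weight in high dimensions.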
Related papers
- Learning To Sample From Diffusion Models Via Inverse Reinforcement Learning [43.678382510171986]
Diffusion models generate samples through an iterative denoising process, guided by a neural network. We introduce an inverse reinforcement learning framework for learning sampling strategies without retraining the denoiser. We provide experimental evidence that this approach can improve the quality of samples generated by pretrained diffusion models.
arXiv Detail & Related papers (2026-02-09T14:10:44Z) - Euphonium: Steering Video Flow Matching via Process Reward Gradient Guided Stochastic Dynamics [49.242224984144904]
We propose Euphonium, a novel framework that steers generation via process reward gradient guided dynamics. Our key insight is to formulate the sampling process as a theoretically principled algorithm that explicitly incorporates the gradient of a Process Reward Model. We derive a distillation objective that internalizes the guidance signal into the flow network, eliminating inference-time dependency on the reward model.
arXiv Detail & Related papers (2026-02-04T08:59:57Z) - Sampling from multi-modal distributions on Riemannian manifolds with training-free stochastic interpolants [17.07401986649233]
We introduce a sampling algorithm based on the simulation of a non-equilibrium deterministic dynamics that transports an easy-to-sample noise distribution toward the target. In contrast to related generative modeling approaches that rely on machine learning, our method is entirely training-free.
arXiv Detail & Related papers (2026-01-31T10:17:44Z) - Rethinking Refinement: Correcting Generative Bias without Noise Injection [7.28668585578288]
Generative models, including diffusion and flow-based models, often exhibit systematic biases that degrade sample quality. We show that effective bias correction can be achieved as a post-hoc procedure, without noise injection or multi-step resampling.
arXiv Detail & Related papers (2026-01-29T02:34:08Z) - Reinforced sequential Monte Carlo for amortised sampling [49.92678178064033]
We state a connection between sequential Monte Carlo (SMC) and neural sequential samplers trained by maximum-entropy reinforcement learning (MaxEnt RL). We describe techniques for stable joint training of proposals and twist functions and an adaptive weight tempering scheme to reduce training signal variance.
arXiv Detail & Related papers (2025-10-13T17:59:11Z) - Amortized Sampling with Transferable Normalizing Flows [65.48838168417564]
Prose is a transferable normalizing flow trained on a corpus of peptide molecular dynamics trajectories up to 8 residues in length. We show that Prose serves as a proposal for a variety of sampling algorithms, and find that a simple importance sampling-based finetuning procedure achieves superior performance. We open-source the Prose dataset to further stimulate research into amortized sampling methods and finetuning objectives.
arXiv Detail & Related papers (2025-08-25T16:28:18Z) - Inference-Time Scaling of Diffusion Language Models with Particle Gibbs Sampling [70.8832906871441]
We study how to steer generation toward desired rewards without retraining the models. Prior methods typically resample or filter within a single denoising trajectory, optimizing rewards step-by-step without trajectory-level refinement. We introduce particle Gibbs sampling for diffusion language models (PG-DLM), a novel inference-time algorithm enabling trajectory-level refinement while preserving generation perplexity.
arXiv Detail & Related papers (2025-07-11T08:00:47Z) - Non-equilibrium Annealed Adjoint Sampler [27.73022309947818]
We introduce the Non-equilibrium Annealed Adjoint Sampler (NAAS), a novel SOC-based diffusion sampler. NAAS employs a lean adjoint system inspired by adjoint matching, enabling efficient and scalable training.
arXiv Detail & Related papers (2025-06-22T20:41:31Z) - Quantizing Diffusion Models from a Sampling-Aware Perspective [43.95032520555463]
We propose a sampling-aware quantization strategy, wherein a Mixed-Order Trajectory Alignment technique is devised. Experiments on sparse-step fast sampling across multiple datasets demonstrate that our approach preserves the rapid convergence characteristics of high-speed samplers.
arXiv Detail & Related papers (2025-05-04T20:50:44Z) - Feynman-Kac Correctors in Diffusion: Annealing, Guidance, and Product of Experts [64.34482582690927]
We provide an efficient and principled method for sampling from a sequence of annealed, geometric-averaged, or product distributions derived from pretrained score-based models. We propose Sequential Monte Carlo (SMC) resampling algorithms that leverage inference-time scaling to improve sampling quality.
arXiv Detail & Related papers (2025-03-04T17:46:51Z) - Arbitrary-steps Image Super-resolution via Diffusion Inversion [68.78628844966019]
This study presents a new image super-resolution (SR) technique based on diffusion inversion, aiming at harnessing the rich image priors encapsulated in large pre-trained diffusion models to improve SR performance. We design a Partial noise Prediction strategy to construct an intermediate state of the diffusion model, which serves as the starting sampling point. Once trained, this noise predictor can be used to initialize the sampling process partially along the diffusion trajectory, generating the desirable high-resolution result.
arXiv Detail & Related papers (2024-12-12T07:24:13Z) - Adaptive teachers for amortized samplers [76.88721198565861]
We propose an adaptive training distribution (the teacher) to guide the training of the primary amortized sampler (the student). We validate the effectiveness of this approach in a synthetic environment designed to present an exploration challenge.
arXiv Detail & Related papers (2024-10-02T11:33:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.