FSampler: Training Free Acceleration of Diffusion Sampling via Epsilon Extrapolation
- URL: http://arxiv.org/abs/2511.09180v1
- Date: Thu, 13 Nov 2025 01:37:49 GMT
- Title: FSampler: Training Free Acceleration of Diffusion Sampling via Epsilon Extrapolation
- Authors: Michael A. Vladimir,
- Abstract summary: FSampler is a training free, sampler agnostic execution layer that accelerates diffusion sampling by reducing the number of function evaluations (NFE). FSampler maintains a short history of denoising signals from recent real model calls and extrapolates the next epsilon using finite difference predictors. Operating at the sampler level, FSampler integrates with Euler/DDIM, DPM++ 2M/2S, LMS/AB2, and RES family exponential multistep methods.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: FSampler is a training free, sampler agnostic execution layer that accelerates diffusion sampling by reducing the number of function evaluations (NFE). FSampler maintains a short history of denoising signals (epsilon) from recent real model calls and extrapolates the next epsilon using finite difference predictors at second order, third order, or fourth order, falling back to lower order when history is insufficient. On selected steps the predicted epsilon substitutes for the model call while keeping each sampler's update rule unchanged. Predicted epsilons are validated for finiteness and magnitude; a learning stabilizer rescales predictions on skipped steps to correct drift, and an optional gradient estimation stabilizer compensates for local curvature. Protected windows, periodic anchors, and a cap on consecutive skips bound deviation over the trajectory. Operating at the sampler level, FSampler integrates with Euler/DDIM, DPM++ 2M/2S, LMS/AB2, and RES family exponential multistep methods and drops into standard workflows. On FLUX.1 dev, Qwen Image, and Wan 2.2, FSampler reduces time by 8 to 22% and model calls by 15 to 25% at high fidelity (Structural Similarity Index (SSIM) 0.95 to 0.99), without altering sampler formulas. With an aggressive adaptive gate, reductions can reach 45 to 50% fewer model calls at lower fidelity (SSIM 0.73 to 0.74).
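The skip-step mechanism described in the abstract can be sketched as follows. This is a minimal illustration under an assumed uniform step spacing, with binomial-coefficient forward extrapolation standing in for the paper's finite difference predictors; the function and parameter names (`extrapolate_epsilon`, `max_ratio`) are ours, not the paper's API.

```python
import numpy as np

# Forward-extrapolation coefficients for uniformly spaced history,
# keyed by the number of history points used (an assumption; the paper
# defines its own second/third/fourth order predictors).
BINOMIAL_COEFFS = {
    2: [2.0, -1.0],             # linear extrapolation from 2 points
    3: [3.0, -3.0, 1.0],        # quadratic from 3 points
    4: [4.0, -6.0, 4.0, -1.0],  # cubic from 4 points
}

def extrapolate_epsilon(history, order=4, max_ratio=2.0):
    """Predict the next epsilon from recent real model calls.

    history: list of past epsilon arrays, most recent first.
    Falls back to the highest order the history supports; returns None
    when the prediction fails the finiteness/magnitude check, signalling
    that a real model call is needed instead of a skip.
    """
    order = min(order, len(history))
    if order < 2:
        return None  # not enough history: must call the model
    coeffs = BINOMIAL_COEFFS[order]
    pred = sum(c * eps for c, eps in zip(coeffs, history))
    # Validation: reject non-finite or implausibly large predictions.
    if not np.all(np.isfinite(pred)):
        return None
    last_norm = np.linalg.norm(history[0])
    if last_norm > 0 and np.linalg.norm(pred) > max_ratio * last_norm:
        return None
    return pred
```

On a skipped step, the returned prediction would feed the sampler's unchanged update rule in place of a model call; a `None` forces a real evaluation, which is how protected windows and anchors could be enforced.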
Related papers
- Lookahead Sample Reward Guidance for Test-Time Scaling of Diffusion Models [28.29554194279748]
Diffusion models have demonstrated strong generative performance; however, generated samples often fail to fully align with human intent. This paper studies a test-time scaling method that enables sampling from regions with higher human-aligned reward values.
arXiv Detail & Related papers (2026-02-03T07:27:27Z) - Rethinking Refinement: Correcting Generative Bias without Noise Injection [7.28668585578288]
Generative models, including diffusion and flow-based models, often exhibit systematic biases that degrade sample quality. We show that effective bias correction can be achieved as a post-hoc procedure, without noise injection or multi-step resampling.
arXiv Detail & Related papers (2026-01-29T02:34:08Z) - Joint Distillation for Fast Likelihood Evaluation and Sampling in Flow-based Models [100.28111930893188]
Some of today's best generative models still require hundreds to thousands of neural function evaluations to compute a single likelihood. We present fast flow joint distillation (F2D2), a framework that simultaneously reduces the number of NFEs required for both sampling and likelihood evaluation by two orders of magnitude. F2D2 is modular, compatible with existing flow-based few-step sampling models, and requires only an additional divergence prediction head.
arXiv Detail & Related papers (2025-12-02T10:48:20Z) - HyperFlow: Gradient-Free Emulation of Few-Shot Fine-Tuning [20.308785668386424]
We propose an approach that emulates gradient descent without computing gradients, enabling efficient test-time adaptation. Specifically, we formulate gradient descent as an Euler discretization of an ordinary differential equation (ODE) and train an auxiliary network to predict the task-conditional drift. The adaptation then reduces to a simple numerical integration, which requires only a few forward passes of the auxiliary network.
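The idea of casting gradient descent as Euler integration of an ODE can be made concrete with a toy sketch. Here an analytic drift for the quadratic loss L(θ) = ½‖θ‖² stands in for the paper's auxiliary drift network; this substitution, and the names `predicted_drift` and `euler_adapt`, are our assumptions for illustration only.

```python
import numpy as np

def predicted_drift(theta):
    # Stand-in for the auxiliary drift network: for L(θ) = ½‖θ‖²,
    # the gradient-flow drift is -∇L(θ) = -θ.
    return -theta

def euler_adapt(theta0, step=0.1, n_steps=50):
    """Emulate gradient descent via Euler integration of dθ/dt = drift(θ).

    Each iteration is one forward pass of the drift model, no
    backpropagation required.
    """
    theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta = theta + step * predicted_drift(theta)  # Euler update
    return theta
```

With the drift above, each Euler step multiplies θ by (1 - step), so the iterate contracts toward the minimizer at zero, mirroring what gradient descent on the same loss would do.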
arXiv Detail & Related papers (2025-04-21T03:04:38Z) - Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations [53.180374639531145]
Self-Refining Diffusion Samplers (SRDS) retain sample quality and can improve latency at the cost of additional parallel compute. We take inspiration from the Parareal algorithm, a popular numerical method for parallel-in-time integration of differential equations.
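The Parareal idea the summary refers to can be shown on a toy ODE. This sketch solves dy/dt = -y with a cheap coarse propagator and a more accurate fine propagator; it illustrates the classical algorithm only, not SRDS's application of it to diffusion sampling, and all names here are ours.

```python
import numpy as np

def coarse(y, t0, t1):
    return y + (t1 - t0) * (-y)        # one Euler step over the slice

def fine(y, t0, t1, m=20):
    h = (t1 - t0) / m
    for _ in range(m):
        y = y + h * (-y)               # m Euler sub-steps over the slice
    return y

def parareal(y0, T=1.0, n=4, iters=3):
    """Parareal iteration: U_{i+1} <- G(U_i^new) + F(U_i^old) - G(U_i^old)."""
    ts = np.linspace(0.0, T, n + 1)
    # Initial serial coarse sweep.
    U = [y0]
    for i in range(n):
        U.append(coarse(U[-1], ts[i], ts[i + 1]))
    for _ in range(iters):
        # The fine solves per slice are independent and could run in parallel.
        F = [fine(U[i], ts[i], ts[i + 1]) for i in range(n)]
        G_old = [coarse(U[i], ts[i], ts[i + 1]) for i in range(n)]
        new_U = [y0]
        for i in range(n):
            new_U.append(coarse(new_U[-1], ts[i], ts[i + 1]) + F[i] - G_old[i])
        U = new_U
    return U[-1]
```

After at most n iterations the scheme reproduces the serial fine solution exactly; the latency win comes from running the expensive fine solves concurrently.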
arXiv Detail & Related papers (2024-12-11T11:08:09Z) - Provable Acceleration for Diffusion Models under Minimal Assumptions [8.15094483029656]
We propose a novel training-free acceleration scheme for score-based samplers. Under minimal assumptions, our scheme achieves $\varepsilon$ accuracy in total variation within $\widetilde{O}(d^{5/4}/\sqrt{\varepsilon})$ iterations.
arXiv Detail & Related papers (2024-10-30T17:59:06Z) - DC-Solver: Improving Predictor-Corrector Diffusion Sampler via Dynamic Compensation [68.55191764622525]
Diffusion probabilistic models (DPMs) have shown remarkable performance in visual synthesis but are computationally expensive due to the need for multiple evaluations during the sampling.
Recent predictor-corrector diffusion samplers have significantly reduced the required number of evaluations, but inherently suffer from a misalignment issue.
We introduce a new fast DPM sampler called DC-Solver, which leverages dynamic compensation to mitigate the misalignment.
arXiv Detail & Related papers (2024-09-05T17:59:46Z) - Score Normalization for a Faster Diffusion Exponential Integrator Sampler [8.914068241467234]
Zhang et al. have proposed the Diffusion Exponential Integrator Sampler (DEIS) for fast generation of samples from Diffusion Models.
Key to this approach is the score function reparameterisation, which reduces the integration error incurred from using a fixed score function estimate.
We find that our score normalisation (DEIS-SN) consistently improves FID compared to vanilla DEIS.
arXiv Detail & Related papers (2023-10-31T21:18:44Z) - Parallel Sampling of Diffusion Models [76.3124029406809]
Diffusion models are powerful generative models but suffer from slow sampling.
We present ParaDiGMS, a novel method to accelerate the sampling of pretrained diffusion models by denoising multiple steps in parallel.
arXiv Detail & Related papers (2023-05-25T17:59:42Z) - Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis For DDIM-Type Samplers [90.45898746733397]
We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling.
We show that one step along the probability flow ODE can be expressed as two steps: 1) a restoration step that runs ascent on the conditional log-likelihood at some infinitesimally previous time, and 2) a degradation step that runs the forward process using noise pointing back towards the current gradient.
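For context, the probability flow ODE that such deterministic (DDIM-type) samplers discretize has the standard form below; this is textbook background in our notation, not the paper's own decomposition:

```latex
\frac{\mathrm{d}x_t}{\mathrm{d}t}
  = f(x_t, t) - \tfrac{1}{2}\, g(t)^2\, \nabla_{x} \log p_t(x_t)
```

Read against this form, one discrete step combines the paper's two moves: ascent along the score term (restoration of likelihood) and a move along the forward-process drift (degradation).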
arXiv Detail & Related papers (2023-03-06T18:59:19Z) - Borrowing From the Future: Addressing Double Sampling in Model-free Control [8.282602586225833]
This paper extends the BFF algorithm to action-value function based model-free control.
We prove that BFF is close to unbiased SGD when the underlying dynamics vary slowly with respect to actions.
arXiv Detail & Related papers (2020-06-11T03:50:37Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.