Adaptive Sampling Scheduler
- URL: http://arxiv.org/abs/2509.12569v1
- Date: Tue, 16 Sep 2025 01:51:16 GMT
- Title: Adaptive Sampling Scheduler
- Authors: Qi Wang, Shuliang Zhu, Jinjia Zhou
- Abstract summary: This paper proposes an adaptive sampling scheduler that is applicable to various consistency distillation frameworks. The scheduler introduces three innovative strategies: (i) dynamic target timestep selection, which adapts to different consistency distillation frameworks by selecting timesteps based on their computed importance; (ii) optimized alternating sampling along the solution trajectory, which guides forward denoising and backward noise addition by the computed importance; and (iii) smoothing clipping and color balancing techniques that achieve stable, high-quality generation results at high guidance scales.
- Score: 9.416115049578151
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency distillation methods have evolved into effective techniques that significantly accelerate the sampling process of diffusion models. Although existing methods have achieved remarkable results, the selection of target timesteps during distillation mainly relies on deterministic or stochastic strategies, which often require sampling schedulers to be designed specifically for different distillation processes. This pattern severely limits flexibility, restricting the full sampling potential of diffusion models in practical applications. To overcome these limitations, this paper proposes an adaptive sampling scheduler that is applicable to various consistency distillation frameworks. The scheduler introduces three innovative strategies: (i) dynamic target timestep selection, which adapts to different consistency distillation frameworks by selecting timesteps based on their computed importance; (ii) optimized alternating sampling along the solution trajectory, which guides forward denoising and backward noise addition based on the proposed timestep importance, enabling more effective exploration of the solution space and enhancing generation performance; and (iii) smoothing clipping and color balancing techniques that achieve stable, high-quality generation results at high guidance scales, expanding the applicability of consistency distillation models to complex generation scenarios. We validated the effectiveness and flexibility of the adaptive sampling scheduler across various consistency distillation methods through comprehensive experimental evaluations. Experimental results consistently demonstrated significant improvements in generative performance, highlighting the strong adaptability of our method.
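The abstract does not include pseudocode; as a loose illustration of strategy (i), the sketch below draws distillation target timesteps in proportion to a per-timestep importance score. The function name, the toy importance profile, and the sampling rule are all assumptions made for illustration, not the paper's actual algorithm.

```python
import numpy as np

def select_target_timesteps(importance, num_targets, rng):
    # Normalize importance scores into a sampling distribution.
    probs = importance / importance.sum()
    # Draw distinct timesteps, favoring the more important ones.
    chosen = rng.choice(len(importance), size=num_targets, replace=False, p=probs)
    # Return in descending order: high noise -> low noise.
    return np.sort(chosen)[::-1]

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
# Toy importance profile peaking mid-trajectory (purely illustrative).
importance = np.exp(-((t - 0.5) ** 2) / 0.02)
targets = select_target_timesteps(importance, num_targets=8, rng=rng)
```

A fixed deterministic schedule would instead hard-code `targets`; sampling from an importance distribution is what lets the same scheduler adapt across distillation frameworks.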
Related papers
- Analyzing and Improving Fast Sampling of Text-to-Image Diffusion Models [32.70019265781621]
Text-to-image diffusion models have achieved unprecedented success but still struggle to produce high-quality images under limited sampling budgets.
We propose a constant total rotation schedule (TORS) as a scheduling strategy that ensures uniform geometric variation along the sampling trajectory.
TORS outperforms previous training-free acceleration methods and produces high-quality images with 10 sampling steps on Flux.1-Dev and Stable Diffusion 3.5.
arXiv Detail & Related papers (2026-02-28T18:09:44Z)
- Learning To Sample From Diffusion Models Via Inverse Reinforcement Learning [43.678382510171986]
Diffusion models generate samples through an iterative denoising process, guided by a neural network.
We introduce an inverse reinforcement learning framework for learning sampling strategies without retraining the denoiser.
We provide experimental evidence that this approach can improve the quality of samples generated by pretrained diffusion models.
arXiv Detail & Related papers (2026-02-09T14:10:44Z)
- ReDiF: Reinforced Distillation for Few Step Diffusion [21.686373820429736]
Distillation addresses the slow sampling problem in diffusion models by creating models with smaller size or fewer steps.
We propose a reinforcement learning based distillation framework for diffusion models.
arXiv Detail & Related papers (2025-12-28T06:27:24Z)
- G$^2$RPO: Granular GRPO for Precise Reward in Flow Models [74.21206048155669]
We propose a novel Granular-GRPO (G$^2$RPO) framework that achieves precise and comprehensive reward assessments of sampling directions.
We introduce a Multi-Granularity Advantage Integration module that aggregates advantages computed at multiple diffusion scales.
Our G$^2$RPO significantly outperforms existing flow-based GRPO baselines.
arXiv Detail & Related papers (2025-10-02T12:57:12Z)
- Divergence Minimization Preference Optimization for Diffusion Model Alignment [58.651951388346525]
Divergence Minimization Preference Optimization (DMPO) is a principled method for aligning diffusion models by minimizing reverse KL divergence.
Our results show that diffusion models fine-tuned with DMPO can consistently outperform or match existing techniques.
DMPO unlocks a robust and elegant pathway for preference alignment, bridging principled theory with practical performance in diffusion models.
arXiv Detail & Related papers (2025-07-10T07:57:30Z)
- Iterative Distillation for Reward-Guided Fine-Tuning of Diffusion Models in Biomolecular Design [58.8094854658848]
We address the problem of fine-tuning diffusion models for reward-guided generation in biomolecular design.
We propose an iterative distillation-based fine-tuning framework that enables diffusion models to optimize for arbitrary reward functions.
Our off-policy formulation, combined with KL divergence minimization, enhances training stability and sample efficiency compared to existing RL-based methods.
arXiv Detail & Related papers (2025-07-01T05:55:28Z)
- InstaRevive: One-Step Image Enhancement via Dynamic Score Matching [66.97989469865828]
InstaRevive is an image enhancement framework that employs score-based diffusion distillation to harness potent generative capability.
Our framework delivers high-quality and visually appealing results across a diverse array of challenging tasks and datasets.
arXiv Detail & Related papers (2025-04-22T01:19:53Z)
- Diverse Score Distillation [27.790458964072823]
We propose a score formulation that guides the optimization to follow generation paths defined by random initial seeds.
We showcase the applications of our Diverse Score Distillation (DSD) formulation across tasks such as 2D optimization, text-based 3D inference, and single-view reconstruction.
arXiv Detail & Related papers (2024-12-09T18:59:02Z)
- Adaptive Non-Uniform Timestep Sampling for Diffusion Model Training [4.760537994346813]
As data distributions grow more complex, training diffusion models to convergence becomes increasingly compute-intensive.
We introduce a non-uniform timestep sampling method that prioritizes the most critical timesteps.
Our method shows robust performance across various datasets, scheduling strategies, and diffusion architectures.
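The summary above describes prioritizing the "more critical" timesteps during training. One common way to realize such non-uniform sampling, sketched below under the assumption of a loss-driven importance signal, is to keep an exponential moving average of the per-timestep loss and sample timesteps proportionally; the class and method names are illustrative, not the paper's exact algorithm.

```python
import numpy as np

class LossAwareTimestepSampler:
    """Track an EMA of per-timestep training loss and sample timesteps
    proportionally, so high-loss timesteps are visited more often."""

    def __init__(self, num_timesteps, decay=0.99):
        self.ema_loss = np.ones(num_timesteps)  # optimistic initialization
        self.decay = decay

    def sample(self, batch_size, rng):
        # Timesteps with larger EMA loss get proportionally more draws.
        probs = self.ema_loss / self.ema_loss.sum()
        return rng.choice(len(self.ema_loss), size=batch_size, p=probs)

    def update(self, timesteps, losses):
        # Fold the observed losses back into the EMA after each batch.
        for t, loss in zip(timesteps, losses):
            self.ema_loss[t] = self.decay * self.ema_loss[t] + (1 - self.decay) * loss

rng = np.random.default_rng(0)
sampler = LossAwareTimestepSampler(num_timesteps=1000)
ts = sampler.sample(batch_size=4, rng=rng)
sampler.update(ts, losses=[0.5, 1.2, 0.8, 2.0])
```

The decay constant trades off responsiveness against noise: a smaller decay reacts faster to loss changes but yields a jumpier sampling distribution.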
arXiv Detail & Related papers (2024-11-15T07:12:18Z)
- Target-Driven Distillation: Consistency Distillation with Target Timestep Selection and Decoupled Guidance [17.826285840875556]
We introduce Target-Driven Distillation (TDD) to accelerate generative tasks of diffusion models.
TDD adopts a delicate selection strategy for target timesteps, increasing training efficiency.
It can be equipped with non-equidistant sampling and x0 clipping, enabling a more flexible and accurate way for image sampling.
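As a rough sketch of the two sampling tricks this summary names, the snippet below shows a generic polynomial non-equidistant schedule and plain x0 clipping. Both are standard formulations assumed for illustration; TDD's exact variants may differ, and the [-1, 1] range assumes normalized images.

```python
import numpy as np

def nonequidistant_schedule(num_steps, total_t=1000, power=2.0):
    # Polynomial spacing: denser steps near t=0 (low noise), sparser
    # near t=total_t. 'power' controls how non-uniform the spacing is.
    u = np.linspace(0.0, 1.0, num_steps + 1) ** power
    return np.round(u * total_t).astype(int)

def clip_x0(x0_pred, lo=-1.0, hi=1.0):
    # Clamp the predicted clean sample to the valid data range.
    return np.clip(x0_pred, lo, hi)

schedule = nonequidistant_schedule(num_steps=8)
clipped = clip_x0(np.array([-1.7, 0.3, 2.4]))
```

With `power=1.0` the schedule degenerates to the usual equidistant grid, which makes the non-equidistant variant easy to A/B test.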
arXiv Detail & Related papers (2024-09-02T16:01:38Z)
- Diffusion Models as Constrained Samplers for Optimization with Unknown Constraints [55.39203337683045]
We propose to perform optimization within the data manifold using diffusion models.
Depending on the differentiability of the objective function, we propose two different sampling methods.
Our method achieves performance better than or comparable to previous state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-28T03:09:12Z)
- Conditional Variational Diffusion Models [1.8657053208839998]
Inverse problems aim to determine parameters from observations, a crucial task in engineering and science.
We propose a novel approach for learning the variance schedule as part of the training process.
Our method supports probabilistic conditioning on data, provides high-quality solutions, and is flexible, adapting to different applications with minimal overhead.
arXiv Detail & Related papers (2023-12-04T14:45:56Z)
- Optimal Budgeted Rejection Sampling for Generative Models [54.050498411883495]
Rejection sampling methods have been proposed to improve the performance of discriminator-based generative models.
We first propose an Optimal Budgeted Rejection Sampling scheme that is provably optimal.
Second, we propose an end-to-end method that incorporates the sampling scheme into the training procedure to further enhance the model's overall performance.
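As a simplified picture of budget-constrained rejection, the sketch below keeps only the samples whose discriminator score clears the quantile implied by the acceptance budget. The paper's provably optimal acceptance rule is more refined; all names here are illustrative.

```python
import numpy as np

def budgeted_rejection(samples, scores, budget):
    # Accept roughly a 'budget' fraction of samples: everything whose
    # discriminator score reaches the (1 - budget) quantile.
    threshold = np.quantile(scores, 1.0 - budget)
    keep = scores >= threshold
    return samples[keep]

rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 2))   # stand-in generator outputs
scores = rng.uniform(size=100)        # stand-in discriminator scores
kept = budgeted_rejection(samples, scores, budget=0.2)
```

The budget caps the expected compute spent per accepted sample, which is the constraint the optimal scheme is designed around.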
arXiv Detail & Related papers (2023-11-01T11:52:41Z)
- Planning with Diffusion for Flexible Behavior Synthesis [125.24438991142573]
We consider what it would look like to fold as much of the trajectory optimization pipeline as possible into the modeling problem.
The core of our technical approach lies in a diffusion probabilistic model that plans by iteratively denoising trajectories.
arXiv Detail & Related papers (2022-05-20T07:02:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.