Prompt-aware classifier free guidance for diffusion models
- URL: http://arxiv.org/abs/2509.22728v2
- Date: Sun, 05 Oct 2025 11:32:52 GMT
- Title: Prompt-aware classifier free guidance for diffusion models
- Authors: Xuanhao Zhang, Chang Li
- Abstract summary: We introduce a prompt-aware framework that predicts scale-dependent quality and selects the optimal guidance at inference. A lightweight predictor, conditioned on semantic embeddings and linguistic complexity, estimates multi-metric quality curves. Experiments on MSCOCO 2014 and AudioCaps show consistent improvements over vanilla CFG.
- Score: 3.3115063666033167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Diffusion models have achieved remarkable progress in image and audio generation, largely due to Classifier-Free Guidance. However, the choice of guidance scale remains underexplored: a fixed scale often fails to generalize across prompts of varying complexity, leading to oversaturation or weak alignment. We address this gap by introducing a prompt-aware framework that predicts scale-dependent quality and selects the optimal guidance at inference. Specifically, we construct a large synthetic dataset by generating samples under multiple scales and scoring them with reliable evaluation metrics. A lightweight predictor, conditioned on semantic embeddings and linguistic complexity, estimates multi-metric quality curves and determines the best scale via a utility function with regularization. Experiments on MSCOCO 2014 and AudioCaps show consistent improvements over vanilla CFG, enhancing fidelity, alignment, and perceptual preference. This work demonstrates that prompt-aware scale selection provides an effective, training-free enhancement for pretrained diffusion backbones.
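In code, the selection step amounts to maximizing a regularized utility over the predicted per-scale quality curves. The sketch below is a minimal reading of that idea, assuming hypothetical predictor outputs; the candidate-scale grid, metric names, and quadratic regularizer are illustrative choices, not the paper's exact formulation.

```python
import numpy as np

# Vanilla CFG combines noise estimates as
#   eps = eps_uncond + w * (eps_cond - eps_uncond),
# with a single global scale w. The sketch below instead picks w per prompt.
CANDIDATE_SCALES = np.linspace(1.0, 15.0, 29)

def select_scale(quality_curves, weights, reg=0.05, default_scale=7.5):
    """Return the candidate scale maximizing a regularized utility.

    quality_curves: dict metric -> predicted score per candidate scale,
                    e.g. {"alignment": [...], "fidelity": [...]} as output
                    by the lightweight predictor (hypothetical names).
    weights:        dict metric -> importance weight in the utility.
    The quadratic term is an assumed regularizer that discourages extreme
    scales when predicted quality differences between scales are small.
    """
    utility = sum(w * np.asarray(quality_curves[m]) for m, w in weights.items())
    utility = utility - reg * (CANDIDATE_SCALES - default_scale) ** 2
    return float(CANDIDATE_SCALES[int(np.argmax(utility))])
```

A caller would run the predictor once per prompt, feed its curves to `select_scale`, and then sample with the returned scale, which is consistent with the training-free claim: the diffusion backbone itself is never retrained.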
Related papers
- Benchmarking Few-shot Transferability of Pre-trained Models with Improved Evaluation Protocols [123.73663884421272]
Few-shot transfer has been revolutionized by stronger pre-trained models and improved adaptation algorithms. We establish FEWTRANS, a comprehensive benchmark containing 10 diverse datasets. By releasing FEWTRANS, we aim to provide a rigorous "ruler" to streamline reproducible advances in few-shot transfer learning research.
arXiv Detail & Related papers (2026-02-28T05:41:57Z)
- HiGFA: Hierarchical Guidance for Fine-grained Data Augmentation with Diffusion Models [82.10385962490051]
Generative diffusion models show promise for data augmentation. Applying them to fine-grained tasks presents a significant challenge. HiGFA is a hierarchical, confidence-driven orchestration that generates diverse yet faithful synthetic images.
arXiv Detail & Related papers (2025-11-16T10:46:16Z)
- Enhancing Diffusion Model Guidance through Calibration and Regularization [9.22066257345387]
This paper introduces two complementary contributions to address this issue. First, we propose a differentiable calibration objective based on the smoothed Expected Calibration Error (Smooth ECE). Second, we develop enhanced sampling guidance methods that operate on off-the-shelf classifiers without requiring retraining.
arXiv Detail & Related papers (2025-11-08T04:23:42Z)
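The calibration objective in the entry above must be differentiable, which rules out the usual hard-binned ECE. As a hedged illustration of the smoothing idea (not the paper's exact Smooth ECE definition), hard bin assignments can be replaced with kernel weights:

```python
import torch

def soft_binned_ece(confidences, correct, num_bins=15, bandwidth=0.05):
    """Differentiable stand-in for a smoothed calibration objective.

    Gaussian kernel weights replace hard bin assignments, so the gap
    between confidence and accuracy stays differentiable in the
    confidences. This mirrors the smoothing idea only loosely and is
    not the paper's exact Smooth ECE.
    """
    centers = torch.linspace(0.0, 1.0, num_bins, device=confidences.device)
    k = torch.exp(-0.5 * ((confidences[:, None] - centers[None, :]) / bandwidth) ** 2)
    denom = k.sum(dim=0).clamp_min(1e-8)               # soft sample count per bin
    bin_conf = (k * confidences[:, None]).sum(dim=0) / denom
    bin_acc = (k * correct[:, None].float()).sum(dim=0) / denom
    bin_mass = denom / denom.sum()                     # fraction of mass per bin
    return (bin_mass * (bin_conf - bin_acc).abs()).sum()
```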
- Plug-and-Play Prompt Refinement via Latent Feedback for Diffusion Model Alignment [54.17386822940477]
We introduce PromptLoop, a plug-and-play reinforcement learning framework that incorporates latent feedback into step-wise prompt refinement. This design achieves a structural analogy to the Diffusion RL approach, while retaining the flexibility and generality of prompt-based alignment.
arXiv Detail & Related papers (2025-10-01T02:18:58Z)
- Dynamic Classifier-Free Diffusion Guidance via Online Feedback [53.54876309092376]
A "one-size-fits-all" approach fails to adapt to the diverse requirements of different prompts. We introduce a framework for dynamic CFG scheduling. We demonstrate the effectiveness of our approach on both small-scale models and the state-of-the-art Imagen 3.
arXiv Detail & Related papers (2025-09-19T16:27:19Z)
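In contrast to the fixed per-prompt scale of the main paper, the dynamic-scheduling entry above varies the scale during sampling. A minimal sketch of such a control loop, with all interfaces (model, sampler step, feedback signal) treated as assumptions rather than any specific library's API:

```python
import numpy as np

def sample_with_dynamic_cfg(model, step_fn, x, timesteps, prompt_emb, null_emb,
                            feedback_fn, w_init=7.5, lr=0.5, w_min=1.0, w_max=15.0):
    """Denoising loop with an online-adjusted CFG scale.

    model(x, t, emb) -> noise estimate; step_fn(x, eps, t) -> next latent
    (e.g. a DDIM update); feedback_fn(x, t) -> scalar in [-1, 1], a
    stand-in for the paper's online feedback signal.
    """
    w = w_init
    for t in timesteps:
        eps_uncond = model(x, t, null_emb)
        eps_cond = model(x, t, prompt_emb)
        eps = eps_uncond + w * (eps_cond - eps_uncond)   # standard CFG combine
        x = step_fn(x, eps, t)
        w = float(np.clip(w + lr * feedback_fn(x, t), w_min, w_max))
    return x
```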
- Diffusion Classifier Guidance for Non-robust Classifiers [0.5999777817331317]
We study the sensitivity of general, non-robust, and robust classifiers to noise of the diffusion process. Non-robust classifiers exhibit significant accuracy degradation under noisy conditions, leading to unstable guidance gradients. We propose a method that utilizes one-step denoised image predictions and implements techniques inspired by optimization methods.
arXiv Detail & Related papers (2025-07-01T11:39:41Z)
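The one-step-denoising trick in the entry above is the standard DDPM shortcut to a clean-image estimate; scoring the classifier on that estimate rather than the noisy latent stabilizes the guidance gradient. A rough PyTorch sketch under assumed interfaces (`eps_model`, `classifier`, and the `alpha_bar` schedule tensor are illustrative, not a specific library's API):

```python
import torch

def classifier_grad_on_x0(x_t, t, eps_model, classifier, target, alpha_bar):
    """Guidance gradient taken on a one-step denoised estimate.

    x0_hat = (x_t - sqrt(1 - a_t) * eps) / sqrt(a_t) is the usual DDPM
    posterior-mean shortcut; feeding x0_hat (instead of the noisy x_t)
    to a vanilla, non-robust classifier gives much more stable
    log-probability gradients.
    """
    x_t = x_t.detach().requires_grad_(True)
    a = alpha_bar[t]  # alpha_bar: 1-D tensor of cumulative noise-schedule products
    x0_hat = (x_t - torch.sqrt(1.0 - a) * eps_model(x_t, t)) / torch.sqrt(a)
    log_prob = torch.log_softmax(classifier(x0_hat), dim=-1)[..., target].sum()
    return torch.autograd.grad(log_prob, x_t)[0]
```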
- Feedback Guidance of Diffusion Models [0.0]
Classifier-Free Guidance (CFG) has become standard for improving sample fidelity in conditional diffusion models. We propose FeedBack Guidance (FBG), which uses a state-dependent coefficient to self-regulate guidance amounts based on need.
arXiv Detail & Related papers (2025-06-06T13:46:32Z)
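FBG's key move, per the summary above, is replacing the constant scale with a coefficient computed from the sampler's state. The sketch below is a hedged approximation only: it proxies "need" by the conditional/unconditional disagreement, which is not the paper's actual coefficient, merely an illustration of the self-regulation idea.

```python
import torch

def feedback_guided_eps(eps_cond, eps_uncond, w_base=1.0, gain=4.0):
    """State-dependent guidance: scale grows with an assumed 'need' signal.

    Need is proxied here by the normalized disagreement between the
    conditional and unconditional noise estimates, computed per sample.
    """
    diff = eps_cond - eps_uncond
    need = diff.flatten(1).norm(dim=1) / eps_uncond.flatten(1).norm(dim=1).clamp_min(1e-8)
    w = w_base + gain * need.view(-1, *([1] * (diff.dim() - 1)))  # per-sample scale
    return eps_uncond + w * diff
```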
- Foster Adaptivity and Balance in Learning with Noisy Labels [26.309508654960354]
We propose a novel approach named SED to deal with label noise in a Self-adaptivE and class-balanceD manner.
A mean-teacher model is then employed to correct labels of noisy samples.
We additionally propose a self-adaptive and class-balanced sample re-weighting mechanism to assign different weights to detected noisy samples.
arXiv Detail & Related papers (2024-07-03T03:10:24Z)
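For orientation, a generic form of the re-weighting step described in the entry above: small-loss samples are treated as likely clean and up-weighted, with per-class normalization for balance. SED's actual mechanism (mean-teacher label correction plus its own weighting rule) is more involved; this is only a sketch of the generic recipe.

```python
import torch

def reweight_noisy_samples(losses, labels, num_classes, temperature=1.0):
    """Down-weight likely-noisy samples, with per-class normalization.

    Smaller loss -> larger weight (small-loss samples are more likely
    clean); normalizing within each class keeps rare classes from being
    drowned out by the majority classes.
    """
    weights = torch.softmax(-losses / temperature, dim=0) * losses.numel()
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            weights[mask] = weights[mask] / weights[mask].mean()  # class balance
    return weights.detach()
```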
- DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification [55.306583814017046]
We present a novel difficulty-aware semantic augmentation (DASA) approach for speaker verification.
DASA generates diversified training samples in speaker embedding space with negligible extra computing cost.
The best result achieves a 14.6% relative reduction in EER metric on CN-Celeb evaluation set.
arXiv Detail & Related papers (2023-10-18T17:07:05Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
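Consistency regularization of this kind usually takes the familiar weak-to-strong form sketched below; this is the generic recipe under assumed interfaces, not necessarily the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def consistency_loss(model, x_weak, x_strong):
    """Generic consistency regularizer for source-free adaptation.

    Predictions on a weakly augmented view serve as stopped-gradient
    targets for a strongly augmented view; agreement is measured by KL
    divergence between the two predictive distributions.
    """
    with torch.no_grad():
        target = F.softmax(model(x_weak), dim=-1)        # pseudo-target
    log_pred = F.log_softmax(model(x_strong), dim=-1)
    return F.kl_div(log_pred, target, reduction="batchmean")
```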
- Boosting Differentiable Causal Discovery via Adaptive Sample Reweighting [62.23057729112182]
Differentiable score-based causal discovery methods learn a directed acyclic graph from observational data.
We propose ReScore, a model-agnostic framework that boosts causal discovery performance by dynamically learning adaptive weights for the reweighted score function.
arXiv Detail & Related papers (2023-03-06T14:49:59Z)
- Multi-Scale Positive Sample Refinement for Few-Shot Object Detection [61.60255654558682]
Few-shot object detection (FSOD) helps detectors adapt to unseen classes with few training instances.
We propose a Multi-scale Positive Sample Refinement (MPSR) approach to enrich object scales in FSOD.
MPSR generates multi-scale positive samples as object pyramids and refines the prediction at various scales.
arXiv Detail & Related papers (2020-07-18T09:48:29Z)