Toward Early Quality Assessment of Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2603.02829v2
- Date: Wed, 04 Mar 2026 04:54:11 GMT
- Title: Toward Early Quality Assessment of Text-to-Image Diffusion Models
- Authors: Huanlei Guo, Hongxin Wei, Bingyi Jing
- Abstract summary: Recent text-to-image (T2I) diffusion and flow-matching models can produce highly realistic images from natural language prompts. Probe-Select is a plug-in module that enables efficient evaluation of image quality within the generation process.
- Score: 23.306273801765297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent text-to-image (T2I) diffusion and flow-matching models can produce highly realistic images from natural language prompts. In practical scenarios, T2I systems are often run in a "generate-then-select" mode: many seeds are sampled and only a few images are kept for use. However, this pipeline is highly resource-intensive since each candidate requires tens to hundreds of denoising steps, and evaluation metrics such as CLIPScore and ImageReward are post-hoc. In this work, we address this inefficiency by introducing Probe-Select, a plug-in module that enables efficient evaluation of image quality within the generation process. We observe that certain intermediate denoiser activations, even at early timesteps, encode a stable coarse structure (object layout and spatial arrangement) that strongly correlates with final image fidelity. Probe-Select exploits this property by predicting final quality scores directly from early activations, allowing unpromising seeds to be terminated early. Across diffusion and flow-matching backbones, our experiments show that early evaluation at only 20% of the trajectory accurately ranks candidate seeds and enables selective continuation. This strategy reduces sampling cost by over 60% while improving the quality of the retained images, demonstrating that early structural signals can effectively guide selective generation without altering the underlying generative model. Code is available at https://github.com/Guhuary/ProbeSelect.
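The abstract describes a control flow in which every candidate seed is denoised only up to an early probe step, a lightweight predictor ranks the partial latents, and only the top-ranked seeds are denoised to completion. A minimal, self-contained sketch of that loop follows; the scalar "latent", the mock denoiser, and the toy probe are hypothetical stand-ins for the paper's real model and trained probe, not its actual API:

```python
import random


def denoise_step(latent, step):
    # Mock denoising update; a real model would run the denoiser network here.
    return 0.9 * latent + random.gauss(0, 0.1)


def probe_score(latent):
    # Toy quality predictor on an intermediate state; Probe-Select instead
    # trains a lightweight probe on real denoiser activations.
    return -abs(latent)


def generate_with_early_selection(num_seeds=8, total_steps=50,
                                  probe_frac=0.2, keep=2, seed=0):
    random.seed(seed)
    probe_step = int(total_steps * probe_frac)  # e.g. 20% of the trajectory

    # Phase 1: run every candidate only up to the probe step.
    latents = [random.gauss(0, 1) for _ in range(num_seeds)]
    for step in range(probe_step):
        latents = [denoise_step(z, step) for z in latents]

    # Rank candidates by predicted final quality and keep the top-k seeds.
    ranked = sorted(range(num_seeds),
                    key=lambda i: probe_score(latents[i]), reverse=True)
    survivors = ranked[:keep]

    # Phase 2: finish denoising only for the surviving seeds.
    finished = {}
    for i in survivors:
        z = latents[i]
        for step in range(probe_step, total_steps):
            z = denoise_step(z, step)
        finished[i] = z
    return finished
```

With 8 seeds, 50 steps, a probe at 20% of the trajectory, and 2 survivors, this loop spends 8×10 + 2×40 = 160 denoising steps instead of the 8×50 = 400 a full generate-then-select pass would need, a 60% reduction consistent with the savings the abstract reports.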
Related papers
- Diffusion Probe: Generated Image Result Prediction Using CNN Probes [33.97515945308048]
Text-to-image (T2I) diffusion models lack an efficient mechanism for early quality assessment.
We introduce Diffusion Probe, a framework that leverages internal cross-attention maps as predictive signals.
arXiv Detail & Related papers (2026-02-27T08:24:47Z)
- TTSnap: Test-Time Scaling of Diffusion Models via Noise-Aware Pruning [53.52543819839442]
A prominent approach to test-time scaling for text-to-image diffusion models formulates the problem as a search over multiple noise seeds.
We propose test-time scaling with noise-aware pruning (TTSnap), a framework that prunes low-quality candidates without fully denoising them.
arXiv Detail & Related papers (2025-11-27T09:14:26Z)
- ProxT2I: Efficient Reward-Guided Text-to-Image Generation via Proximal Diffusion [18.25085327318649]
We develop a text-to-image (T2I) diffusion model based on backward discretizations, dubbed ProxT2I, relying on learned and conditional proximal operators instead of score functions.
We develop a new large-scale and open-source dataset comprising 15 million high-quality human images with fine-grained captions, called LAION-Face-T2I-15M, for training and evaluation.
arXiv Detail & Related papers (2025-11-24T04:10:53Z)
- Towards Generalizable AI-Generated Image Detection via Image-Adaptive Prompt Learning [33.269644831847636]
Image-Adaptive Prompt Learning (IAPL) is a novel paradigm that adjusts the prompts according to each input image, rather than fixing them after training.
IAPL achieves state-of-the-art performance, with mean accuracies of 95.61% and 96.7% on the widely used UniversalFakeDetect and GenImage datasets.
arXiv Detail & Related papers (2025-08-03T05:41:24Z)
- Time Step Generating: A Universal Synthesized Deepfake Image Detector [0.4488895231267077]
We propose Time Step Generating (TSG), a universal synthesized-image detector.
TSG does not rely on pre-trained models' reconstruction ability, specific datasets, or sampling algorithms.
We test the proposed TSG on the large-scale GenImage benchmark and it achieves significant improvements in both accuracy and generalizability.
arXiv Detail & Related papers (2024-11-17T09:39:50Z)
- Fast constrained sampling in pre-trained diffusion models [80.99262780028015]
We propose an algorithm that enables fast, high-quality generation under arbitrary constraints.
Our approach produces results that rival or surpass the state-of-the-art training-free inference methods.
arXiv Detail & Related papers (2024-10-24T14:52:38Z)
- Active Generation for Image Classification [45.93535669217115]
We propose to address the efficiency of image generation by focusing on the specific needs and characteristics of the model.
With a central tenet of active learning, our method, named ActGen, takes a training-aware approach to image generation.
arXiv Detail & Related papers (2024-03-11T08:45:31Z)
- Diffusion Posterior Proximal Sampling for Image Restoration [27.35952624032734]
We present a refined paradigm for diffusion-based image restoration.
Specifically, we opt for a sample consistent with the measurement identity at each generative step.
The number of candidate samples used for selection is adaptively determined based on the signal-to-noise ratio of the timestep.
arXiv Detail & Related papers (2024-02-25T04:24:28Z)
- If at First You Don't Succeed, Try, Try Again: Faithful Diffusion-based Text-to-Image Generation by Selection [53.320946030761796]
Diffusion-based text-to-image (T2I) models can lack faithfulness to the text prompt.
We show that large T2I diffusion models are more faithful than usually assumed, and can generate images faithful to even complex prompts.
We introduce a pipeline that generates candidate images for a text prompt and picks the best one according to an automatic scoring system.
arXiv Detail & Related papers (2023-05-22T17:59:41Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study of detecting deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Which Pixel to Annotate: a Label-Efficient Nuclei Segmentation Framework [70.18084425770091]
Deep neural networks have been widely applied in nuclei instance segmentation of H&E stained pathology images.
It is inefficient and unnecessary to label every pixel in a dataset of nuclei images, which usually contain similar and redundant patterns.
We propose a novel full nuclei segmentation framework that chooses only a few image patches to be annotated, augments the training set from the selected samples, and achieves nuclei segmentation in a semi-supervised manner.
arXiv Detail & Related papers (2022-12-20T14:53:26Z)
- Lafite2: Few-shot Text-to-Image Generation [132.14211027057766]
We propose a novel method for pre-training a text-to-image generation model on image-only datasets.
It uses a retrieval-then-optimization procedure to synthesize pseudo text features.
It is beneficial in a wide range of settings, including few-shot, semi-supervised, and fully-supervised learning.
arXiv Detail & Related papers (2022-10-25T16:22:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.