Steering Guidance for Personalized Text-to-Image Diffusion Models
- URL: http://arxiv.org/abs/2508.00319v1
- Date: Fri, 01 Aug 2025 05:02:26 GMT
- Title: Steering Guidance for Personalized Text-to-Image Diffusion Models
- Authors: Sunghyun Park, Seokeon Choi, Hyoungwoo Park, Sungrack Yun
- Abstract summary: Existing sampling guidance methods fail to guide the output toward a well-balanced space. We propose personalization guidance, a simple yet effective method leveraging an unlearned weak model conditioned on a null text prompt. Our method explicitly steers the outputs toward a balanced latent space without additional computational overhead.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalizing text-to-image diffusion models is crucial for adapting pre-trained models to specific target concepts, enabling diverse image generation. However, fine-tuning with few images introduces an inherent trade-off between aligning with the target distribution (e.g., subject fidelity) and preserving the broad knowledge of the original model (e.g., text editability). Existing sampling guidance methods, such as classifier-free guidance (CFG) and autoguidance (AG), fail to guide the output effectively toward a well-balanced space: CFG restricts adaptation to the target distribution, while AG compromises text alignment. To address these limitations, we propose personalization guidance, a simple yet effective method leveraging an unlearned weak model conditioned on a null text prompt. Moreover, our method dynamically controls the extent of unlearning in the weak model through weight interpolation between the pre-trained and fine-tuned models during inference. Unlike existing guidance methods, which depend solely on guidance scales, our method explicitly steers the output toward a balanced latent space without additional computational overhead. Experimental results demonstrate that our proposed guidance improves text alignment and target-distribution fidelity while integrating seamlessly with various fine-tuning strategies.
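The abstract describes the mechanism at a high level: a weak guiding model is built by interpolating pre-trained and fine-tuned weights, and its null-prompt prediction replaces the unconditional branch of CFG-style guidance. The sketch below is a minimal illustration of that idea inferred from the abstract alone; the function names, the interpolation ratio `lam`, the guidance scale `w`, and the exact update rule are assumptions, not the authors' released implementation.

```python
import torch

def make_weak_model(pretrained_sd, finetuned_sd, lam):
    """Hypothetical 'unlearned' weak model: interpolate the two state dicts,
    theta_weak = (1 - lam) * theta_finetuned + lam * theta_pretrained.
    Larger lam undoes more of the personalization at inference time."""
    return {k: (1.0 - lam) * finetuned_sd[k] + lam * pretrained_sd[k]
            for k in finetuned_sd}

@torch.no_grad()
def personalization_guidance(eps_cond, eps_weak_null, w):
    """CFG-style extrapolation away from the weak model's prediction.
    eps_cond:      noise prediction of the fine-tuned model with the text prompt
    eps_weak_null: noise prediction of the interpolated weak model with "" (null prompt)
    w:             guidance scale; w = 1 returns eps_cond unchanged
    """
    return eps_weak_null + w * (eps_cond - eps_weak_null)
```

Because the weak model shares the fine-tuned model's architecture, its null-prompt pass stands in for the unconditional pass that CFG already requires, which is consistent with the abstract's claim of no additional computational overhead.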
Related papers
- How Much To Guide: Revisiting Adaptive Guidance in Classifier-Free Guidance Text-to-Vision Diffusion Models [57.42800112251644]
We propose Step AG, a simple, universally applicable adaptive guidance strategy. Our evaluations focus on both image quality and image-text alignment.
arXiv Detail & Related papers (2025-06-10T02:09:48Z)
- Multimodal LLM-Guided Semantic Correction in Text-to-Image Diffusion [52.315729095824906]
MLLM Semantic-Corrected Ping-Pong-Ahead Diffusion (PPAD) is a novel framework that introduces a Multimodal Large Language Model (MLLM) as a semantic observer during inference. It performs real-time analysis on intermediate generations, identifies latent semantic inconsistencies, and translates feedback into controllable signals that actively guide the remaining denoising steps. Extensive experiments demonstrate PPAD's significant improvements.
arXiv Detail & Related papers (2025-05-26T14:42:35Z)
- Regularized Personalization of Text-to-Image Diffusion Models without Distributional Drift [5.608240462042483]
Personalization using text-to-image diffusion models involves adapting a pretrained model to novel subjects with only a few image examples. Forgetting denotes unintended distributional drift, where the model's output distribution deviates from that of the original pretrained model. We propose a new training objective based on a Lipschitz-bounded formulation that explicitly constrains deviation from the pretrained distribution.
arXiv Detail & Related papers (2025-05-26T05:03:59Z)
- Diffusion-Based Conditional Image Editing through Optimized Inference with Guidance [46.922018440110826]
We present a training-free approach for text-driven image-to-image translation based on a pretrained text-to-image diffusion model. Our method achieves outstanding image-to-image translation performance on various tasks when combined with the pretrained Stable Diffusion model.
arXiv Detail & Related papers (2024-12-20T11:15:31Z)
- DyMO: Training-Free Diffusion Model Alignment with Dynamic Multi-Objective Scheduling [14.621456944266802]
We propose DyMO, a training-free method for aligning generated images with human preferences during inference. Apart from text-aware human preference scores, we introduce a semantic alignment objective to enhance semantic alignment in the early stages of diffusion. Experiments with diverse pre-trained diffusion models and metrics demonstrate the effectiveness and robustness of the proposed method.
arXiv Detail & Related papers (2024-12-01T10:32:47Z)
- Toward Guidance-Free AR Visual Generation via Condition Contrastive Alignment [31.402736873762418]
Motivated by language model alignment methods, we propose Condition Contrastive Alignment (CCA) to facilitate guidance-free AR visual generation with high performance.
Experimental results show that CCA can significantly enhance the guidance-free performance of all tested models with just one epoch of fine-tuning.
This experimentally confirms the strong theoretical connection between language-targeted alignment and visual-targeted guidance methods.
arXiv Detail & Related papers (2024-10-12T03:31:25Z)
- Phasic Content Fusing Diffusion Model with Directional Distribution Consistency for Few-Shot Model Adaption [73.98706049140098]
We propose a novel phasic content fusing few-shot diffusion model with directional distribution consistency loss.
Specifically, we design a phasic training strategy with phasic content fusion to help our model learn content and style information when the diffusion timestep t is large.
Finally, we propose a cross-domain structure guidance strategy that enhances structure consistency during domain adaptation.
arXiv Detail & Related papers (2023-09-07T14:14:11Z)
- Improving Diversity in Zero-Shot GAN Adaptation with Semantic Variations [61.132408427908175]
Zero-shot GAN adaptation aims to reuse well-trained generators to synthesize images of an unseen target domain.
With only a single representative text feature instead of real images, the synthesized images gradually lose diversity.
We propose a novel method to find semantic variations of the target text in the CLIP space.
arXiv Detail & Related papers (2023-08-21T08:12:28Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting data from unseen but identically distributed test sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- Few Shot Generative Model Adaption via Relaxed Spatial Structural Alignment [130.84010267004803]
Training a generative adversarial network (GAN) with limited data has been a challenging task.
A feasible solution is to start with a GAN well-trained on a large-scale source domain and adapt it to the target domain with a few samples, a setting termed few-shot generative model adaption.
We propose a relaxed spatial structural alignment method to calibrate the target generative models during the adaption.
arXiv Detail & Related papers (2022-03-06T14:26:25Z)