Optimizing Multi-Round Enhanced Training in Diffusion Models for Improved Preference Understanding
- URL: http://arxiv.org/abs/2504.18204v1
- Date: Fri, 25 Apr 2025 09:35:02 GMT
- Title: Optimizing Multi-Round Enhanced Training in Diffusion Models for Improved Preference Understanding
- Authors: Kun Li, Jianhui Wang, Yangfan He, Xinyuan Song, Ruoyu Wang, Hongyang He, Wenxin Zhang, Jiaqi Chen, Keqin Li, Sida Li, Miao Zhang, Tianyu Shi, Xueqian Wang
- Abstract summary: We present a framework incorporating human-in-the-loop feedback, leveraging a well-trained reward model aligned with user preferences. Our approach consistently surpasses competing models in user satisfaction, especially in multi-turn dialogue scenarios.
- Score: 29.191627597682597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative AI has significantly changed industries by enabling text-driven image generation, yet challenges remain in achieving high-resolution outputs that align with fine-grained user preferences. Consequently, multi-round interactions are necessary to ensure the generated images meet expectations. Previous methods enhanced prompts via reward feedback but did not optimize over a multi-round dialogue dataset. In this work, we present a Visual Co-Adaptation (VCA) framework incorporating human-in-the-loop feedback, leveraging a well-trained reward model aligned with human preferences. Using a diverse multi-turn dialogue dataset, our framework applies multiple reward functions, such as diversity, consistency, and preference feedback, while fine-tuning the diffusion model through LoRA, thus optimizing image generation based on user input. We also construct multi-round dialogue datasets of prompts and image pairs aligned with user intent. Experiments demonstrate that our method outperforms state-of-the-art baselines, significantly improving image consistency and alignment with user intent. Our approach consistently surpasses competing models in user satisfaction, especially in multi-turn dialogue scenarios.
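The core loop described in the abstract (LoRA fine-tuning of the diffusion model under a weighted combination of diversity, consistency, and preference rewards) can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the toy denoiser, the stand-in reward functions, and the reward weights are all assumptions introduced for illustration.

```python
# Minimal sketch of multi-reward LoRA fine-tuning; all components are stand-ins.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank (LoRA) update."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 1.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # only the adapter is trained
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

# Toy denoiser standing in for the diffusion U-Net.
denoiser = nn.Sequential(nn.Linear(64, 128), nn.SiLU(), nn.Linear(128, 64))
for p in denoiser.parameters():
    p.requires_grad = False                  # freeze the pretrained weights
denoiser[0] = LoRALinear(denoiser[0])        # adapt only the first layer

# Stand-in reward functions; a real system would score decoded images with
# learned preference, consistency, and diversity models.
def preference_reward(x):
    return -x.pow(2).mean()

def consistency_reward(x, prev):
    return -(x - prev).pow(2).mean()

def diversity_reward(x):
    return x.std(dim=0).mean()

weights = {"pref": 1.0, "cons": 0.5, "div": 0.1}          # assumed weighting
trainable = [p for p in denoiser.parameters() if p.requires_grad]
opt = torch.optim.AdamW(trainable, lr=1e-4)

for step in range(100):
    noisy = torch.randn(8, 64)                            # batch of noised latents
    prev_round = torch.randn(8, 64)                       # result of the previous dialogue round
    denoised = denoiser(noisy)

    # Weighted combination of the three reward signals.
    reward = (weights["pref"] * preference_reward(denoised)
              + weights["cons"] * consistency_reward(denoised, prev_round)
              + weights["div"] * diversity_reward(denoised))

    loss = -reward                                        # maximize the combined reward
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Only the LoRA adapter parameters are updated; the relative reward weights are a tunable design choice rather than values taken from the paper.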
Related papers
- OMR-Diffusion: Optimizing Multi-Round Enhanced Training in Diffusion Models for Improved Intent Understanding [21.101906599201314]
We present a Visual Co-Adaptation framework that incorporates human-in-the-loop feedback. The framework applies multiple reward functions (such as diversity, consistency, and preference feedback) to refine the diffusion model. Experiments show the model achieves 508 wins in human evaluation, outperforming DALL-E 3 (463 wins) and others.
arXiv Detail & Related papers (2025-03-22T06:10:57Z)
- Unified Reward Model for Multimodal Understanding and Generation [32.22714522329413]
This paper proposes UnifiedReward, the first unified reward model for multimodal understanding and generation assessment. We first develop UnifiedReward on our constructed large-scale human preference dataset, covering both image and video generation and understanding tasks.
arXiv Detail & Related papers (2025-03-07T08:36:05Z)
- Enhancing Intent Understanding for Ambiguous prompt: A Human-Machine Co-Adaption Strategy [28.647935556492957]
We propose a human-machine co-adaption strategy using mutual information between the user's prompts and the pictures under modification. We find that an improved model reduces the need for multiple rounds of adjustment.
arXiv Detail & Related papers (2025-01-25T10:32:00Z)
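For context, a common practical proxy for this mutual information is a contrastive (InfoNCE) lower bound over matched prompt and image embeddings. The sketch below uses random stand-in embeddings (a real system would use a text/image encoder such as CLIP) and is not the paper's estimator.

```python
# Hedged sketch: an InfoNCE-style lower bound on I(prompt; image).
import math
import torch
import torch.nn.functional as F

def info_nce_mi_bound(text_emb, image_emb, temperature=0.07):
    """Estimate a lower bound on mutual information from a batch of matched pairs."""
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.T / temperature       # (N, N) similarity matrix
    labels = torch.arange(text_emb.size(0))             # matched pairs lie on the diagonal
    # InfoNCE bound: log N minus the cross-entropy of picking the matched image.
    return math.log(text_emb.size(0)) - F.cross_entropy(logits, labels)

# Toy usage: 16 matched (prompt, image) embedding pairs of dimension 512.
text_emb = torch.randn(16, 512)
image_emb = text_emb + 0.1 * torch.randn(16, 512)       # correlated, so the bound is high
print(info_nce_mi_bound(text_emb, image_emb).item())
```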
- Personalized Preference Fine-tuning of Diffusion Models [75.22218338096316]
We introduce PPD, a multi-reward optimization objective that aligns diffusion models with personalized preferences. With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way. Our approach achieves an average win rate of 76% over Stable Cascade, generating images that more accurately reflect specific user preferences.
arXiv Detail & Related papers (2025-01-11T22:38:41Z)
- LoRACLR: Contrastive Adaptation for Customization of Diffusion Models [62.70911549650579]
LoRACLR is a novel approach for multi-concept image generation that merges multiple LoRA models, each fine-tuned for a distinct concept, into a single, unified model.
LoRACLR uses a contrastive objective to align and merge the weight spaces of these models, ensuring compatibility while minimizing interference.
Our results highlight the effectiveness of LoRACLR in accurately merging multiple concepts, advancing the capabilities of personalized image generation.
arXiv Detail & Related papers (2024-12-12T18:59:55Z)
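A rough illustration of the merge step: given per-concept LoRA weight deltas over a shared base layer, a single low-rank update can be optimized so that it reproduces each concept model's outputs on that concept's inputs while keeping different concepts' outputs apart. The losses, shapes, and data below are stand-ins and do not reproduce the LoRACLR objective.

```python
# Loose sketch of merging per-concept LoRA deltas with an align-and-separate loss.
import torch
import torch.nn.functional as F

d_in, d_out, rank, n_concepts = 64, 64, 4, 3
base = torch.randn(d_out, d_in) * 0.02                  # frozen base weight

# Per-concept LoRA deltas (fine-tuned elsewhere; random stand-ins here).
deltas = [torch.randn(d_out, rank) @ torch.randn(rank, d_in) * 0.01
          for _ in range(n_concepts)]
# Per-concept "anchor" inputs (e.g. features of each concept's training images).
anchors = [torch.randn(32, d_in) for _ in range(n_concepts)]

# Trainable merged low-rank update.
A = torch.randn(rank * n_concepts, d_in, requires_grad=True)
B = torch.zeros(d_out, rank * n_concepts, requires_grad=True)
opt = torch.optim.Adam([A, B], lr=1e-2)

for step in range(500):
    merged = base + B @ A
    align, separate = 0.0, 0.0
    for i, x in enumerate(anchors):
        target_i = x @ (base + deltas[i]).T             # what concept i's own model outputs
        out_i = x @ merged.T
        align = align + F.mse_loss(out_i, target_i)     # merged model matches concept i
        for j in range(n_concepts):
            if j != i:
                target_j = x @ (base + deltas[j]).T
                # keep concept i's outputs away from other concepts' outputs
                separate = separate + F.cosine_similarity(out_i, target_j, dim=-1).mean()
    loss = align + 0.1 * separate
    opt.zero_grad()
    loss.backward()
    opt.step()
```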
- MDAP: A Multi-view Disentangled and Adaptive Preference Learning Framework for Cross-Domain Recommendation [63.27390451208503]
Cross-domain Recommendation systems leverage multi-domain user interactions to improve performance.
We propose the Multi-view Disentangled and Adaptive Preference Learning framework.
Our framework uses a multi-view encoder to capture diverse user preferences.
arXiv Detail & Related papers (2024-10-08T10:06:45Z)
- Reflective Human-Machine Co-adaptation for Enhanced Text-to-Image Generation Dialogue System [7.009995656535664]
We propose a reflective human-machine co-adaptation strategy, named RHM-CAS.
Externally, the Agent engages in meaningful language interactions with users to reflect on and refine the generated images.
Internally, the Agent optimizes its policy based on user preferences, ensuring that the final outcomes closely align with them.
arXiv Detail & Related papers (2024-08-27T18:08:00Z)
- Multimodal Large Language Model is a Human-Aligned Annotator for Text-to-Image Generation [87.50120181861362]
VisionPrefer is a high-quality and fine-grained preference dataset that captures multiple preference aspects.
We train a reward model, VP-Score, on VisionPrefer to guide the training of text-to-image generative models; its preference prediction accuracy is comparable to that of human annotators.
arXiv Detail & Related papers (2024-04-23T14:53:15Z)
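Preference reward models of this kind are commonly trained with a pairwise (Bradley-Terry) objective on annotated comparisons. The sketch below illustrates that recipe with random stand-in embeddings; it is not the VP-Score implementation.

```python
# Hedged sketch: pairwise preference reward model on (prompt, chosen, rejected) triples.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    def __init__(self, dim: int = 512):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * dim, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, prompt_emb, image_emb):
        return self.score(torch.cat([prompt_emb, image_emb], dim=-1)).squeeze(-1)

model = RewardModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

for step in range(200):
    prompt = torch.randn(32, 512)            # stand-in prompt embeddings
    chosen = torch.randn(32, 512)            # embedding of the preferred image
    rejected = torch.randn(32, 512)          # embedding of the rejected image

    # Bradley-Terry objective: the chosen image should score higher than the rejected one.
    margin = model(prompt, chosen) - model(prompt, rejected)
    loss = -F.logsigmoid(margin).mean()

    opt.zero_grad()
    loss.backward()
    opt.step()
```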
- DialCLIP: Empowering CLIP as Multi-Modal Dialog Retriever [83.33209603041013]
We propose a parameter-efficient prompt-tuning method named DialCLIP for multi-modal dialog retrieval.
Our approach introduces a multi-modal context generator to learn context features which are distilled into prompts within the pre-trained vision-language model CLIP.
To facilitate various types of retrieval, we also design multiple experts to learn mappings from CLIP outputs to multi-modal representation space.
arXiv Detail & Related papers (2024-01-02T07:40:12Z)
- A Generic Approach for Enhancing GANs by Regularized Latent Optimization [79.00740660219256]
We introduce a generic framework, called generative-model inference, that can enhance pre-trained GANs effectively and seamlessly.
Our basic idea is to efficiently infer the optimal latent distribution for the given requirements using Wasserstein gradient flow techniques.
arXiv Detail & Related papers (2021-12-07T05:22:50Z)
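In its simplest form, this kind of latent inference reduces to gradient-based optimization of the latent under a frozen generator and a requirement-specific energy. The sketch below uses plain gradient descent with a toy generator and energy as a stand-in for the paper's Wasserstein-gradient-flow formulation.

```python
# Simplified sketch of latent optimization for a frozen pretrained generator.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 64))
for p in generator.parameters():
    p.requires_grad = False                  # the GAN stays fixed

def energy(image, target):
    """Stand-in requirement: match a target feature vector."""
    return (image - target).pow(2).mean()

target = torch.randn(64)
z = torch.randn(1, 128, requires_grad=True)  # latent to be inferred
opt = torch.optim.Adam([z], lr=0.05)

for step in range(300):
    loss = energy(generator(z).squeeze(0), target) + 1e-3 * z.pow(2).mean()  # prior regularizer
    opt.zero_grad()
    loss.backward()
    opt.step()
```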