Learning User Preferences for Image Generation Model
- URL: http://arxiv.org/abs/2508.08220v1
- Date: Mon, 11 Aug 2025 17:39:42 GMT
- Title: Learning User Preferences for Image Generation Model
- Authors: Wenyi Mo, Ying Ba, Tianyu Zhang, Yalong Bai, Biye Li,
- Abstract summary: We propose an approach built upon Multimodal Large Language Models to learn personalized user preferences. The contrastive preference loss is designed to effectively distinguish between user "likes" and "dislikes", while the learnable preference tokens capture shared interest representations among existing users, enabling the model to activate group-specific preferences and enhance consistency across similar users.
- Score: 15.884017849539754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: User preference prediction requires a comprehensive and accurate understanding of individual tastes. This includes both surface-level attributes, such as color and style, and deeper content-related aspects, such as themes and composition. However, existing methods typically rely on general human preferences or assume static user profiles, often neglecting individual variability and the dynamic, multifaceted nature of personal taste. To address these limitations, we propose an approach built upon Multimodal Large Language Models, introducing contrastive preference loss and preference tokens to learn personalized user preferences from historical interactions. The contrastive preference loss is designed to effectively distinguish between user "likes" and "dislikes", while the learnable preference tokens capture shared interest representations among existing users, enabling the model to activate group-specific preferences and enhance consistency across similar users. Extensive experiments demonstrate our model outperforms other methods in preference prediction accuracy, effectively identifying users with similar aesthetic inclinations and providing more precise guidance for generating images that align with individual tastes. The project page is https://learn-user-pref.github.io/.
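The abstract does not specify the exact form of the contrastive preference loss; as a minimal sketch of the general idea only (hypothetical function and parameter names, not the authors' implementation), a pairwise contrastive objective over a user embedding and liked/disliked image embeddings could look like:

```python
import numpy as np

def contrastive_preference_loss(user_emb, liked_emb, disliked_emb, temperature=0.1):
    """Toy contrastive objective: for each user, the liked image should
    score higher against the user embedding than the disliked image."""
    def cos(a, b):  # row-wise cosine similarity
        return np.sum(a * b, axis=-1) / (
            np.linalg.norm(a, axis=-1) * np.linalg.norm(b, axis=-1))

    pos = cos(user_emb, liked_emb) / temperature
    neg = cos(user_emb, disliked_emb) / temperature
    # Two-way softmax cross-entropy with the liked image as the target class:
    # -log(exp(pos) / (exp(pos) + exp(neg))) = log(1 + exp(neg - pos)).
    return float(np.mean(np.logaddexp(0.0, neg - pos)))
```

The loss shrinks as liked images align with the user embedding and disliked images diverge from it, which is the "distinguish likes from dislikes" behavior the abstract describes.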
Related papers
- Synthetic Interaction Data for Scalable Personalization in Large Language Models [67.31884245564086]
We introduce a high-fidelity synthetic data generation framework called PersonaGym. Unlike prior work that treats personalization as static persona-preference pairs, PersonaGym models a dynamic preference process. We release PersonaAtlas, a large-scale, high-quality, and diverse synthetic dataset of high-fidelity multi-turn personalized interaction trajectories.
arXiv Detail & Related papers (2026-02-12T20:41:22Z)
- PreferThinker: Reasoning-based Personalized Image Preference Assessment [83.66114370585976]
We propose a reasoning-based personalized image preference assessment framework. It first predicts a user's preference profile from reference images. It then provides interpretable, multi-dimensional scores and assessments of candidate images.
arXiv Detail & Related papers (2025-11-01T16:19:51Z)
- PrefPalette: Personalized Preference Modeling with Latent Attributes [59.58648056175468]
PrefPalette is a framework that decomposes preferences into attribute dimensions. It tailors its preference prediction to distinct social community values. PrefPalette outperforms GPT-4o by 46.6% in average prediction accuracy.
arXiv Detail & Related papers (2025-07-17T21:21:54Z)
- NextQuill: Causal Preference Modeling for Enhancing LLM Personalization [82.15961484963256]
We introduce NextQuill, a novel personalization framework grounded in causal preference modeling. Building on this insight, NextQuill introduces two complementary alignment strategies. Experiments across multiple personalization benchmarks demonstrate that NextQuill significantly improves personalization quality.
arXiv Detail & Related papers (2025-06-03T02:08:55Z)
- WikiPersonas: What Can We Learn From Personalized Alignment to Famous People? [14.801237597577169]
We introduce WikiPersona: the first fine-grained personalization using well-documented, famous individuals. We evaluate different personalization approaches and find that using inferred personal preferences as prefixes enables effective personalization.
arXiv Detail & Related papers (2025-05-19T15:39:48Z)
- Personalized Preference Fine-tuning of Diffusion Models [75.22218338096316]
We introduce PPD, a multi-reward optimization objective that aligns diffusion models with personalized preferences. With PPD, a diffusion model learns the individual preferences of a population of users in a few-shot way. Our approach achieves an average win rate of 76% over Stable Cascade, generating images that more accurately reflect specific user preferences.
arXiv Detail & Related papers (2025-01-11T22:38:41Z)
- ComPO: Community Preferences for Language Model Personalization [122.54846260663922]
ComPO is a method to personalize preference optimization in language models.
We collect and release ComPRed, a question answering dataset with community-level preferences from Reddit.
arXiv Detail & Related papers (2024-10-21T14:02:40Z)
- ViPer: Visual Personalization of Generative Models via Individual Preference Learning [11.909247529297678]
We propose to personalize the image generation process by capturing the generic preferences of the user in a one-time process.
Based on these comments, we infer a user's structured liked and disliked visual attributes.
These attributes are used to guide a text-to-image model toward producing images that are tuned towards the individual user's visual preference.
arXiv Detail & Related papers (2024-07-24T15:42:34Z)
- Learning User Embeddings from Human Gaze for Personalised Saliency Prediction [12.361829928359136]
We present a novel method to extract user embeddings from pairs of natural images and corresponding saliency maps.
At the core of our method is a Siamese convolutional neural encoder that learns the user embeddings by contrasting the image and personal saliency map pairs of different users.
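The summary describes the Siamese setup only at a high level; as a hedged illustration of the general technique (all names and shapes hypothetical, not the paper's code), a weight-shared encoder trained with a margin-based contrastive loss over users could be sketched as:

```python
import numpy as np

def siamese_encode(image, saliency, W):
    """Shared encoder: every (image, saliency map) pair passes through
    the SAME weights W, which is what makes the network Siamese."""
    x = np.concatenate([image.ravel(), saliency.ravel()])
    h = np.tanh(W @ x)
    return h / np.linalg.norm(h)  # unit-norm user embedding

def contrastive_pair_loss(emb_a, emb_b, same_user, margin=0.5):
    """Pull embeddings from the same user together; push embeddings
    from different users at least `margin` apart."""
    d = np.linalg.norm(emb_a - emb_b)
    return d ** 2 if same_user else max(0.0, margin - d) ** 2
```

Because the two branches share weights, the user embedding emerges from where a user's (image, saliency) pairs cluster, rather than from any per-user parameters.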
arXiv Detail & Related papers (2024-03-20T14:58:40Z)
- Personalized Language Modeling from Personalized Human Feedback [45.16986573937782]
Personalized large language models (LLMs) are designed to tailor responses to individual user preferences. We propose Personalized-RLHF (P-RLHF), an efficient framework that utilizes a lightweight user model to capture individual user preferences. We show that personalized LLMs trained using P-RLHF generate responses that are more closely aligned with individual user preferences.
arXiv Detail & Related papers (2024-02-06T04:18:58Z)
- PR-Net: Preference Reasoning for Personalized Video Highlight Detection [34.71807317380797]
We propose a simple yet efficient preference reasoning framework (PR-Net) to explicitly take the diverse interests into account for frame-level highlight prediction.
Our method significantly outperforms state-of-the-art methods with a relative improvement of 12% in mean average precision.
arXiv Detail & Related papers (2021-09-04T06:12:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.