Generate, Not Recommend: Personalized Multimodal Content Generation
- URL: http://arxiv.org/abs/2506.01704v1
- Date: Mon, 02 Jun 2025 14:10:08 GMT
- Title: Generate, Not Recommend: Personalized Multimodal Content Generation
- Authors: Jiongnan Liu, Zhicheng Dou, Ning Hu, Chenyan Xiong
- Abstract summary: We propose a new paradigm that goes beyond content filtering and selection to directly generate personalized items in multimodal form. We leverage any-to-any Large Multimodal Models (LMMs) and train them with both supervised fine-tuning and online reinforcement learning. Experiments on two benchmark datasets and a user study confirm the efficacy of the proposed method.
- Score: 34.02112521797116
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: To address the challenge of information overload from massive web contents, recommender systems are widely applied to retrieve and present personalized results for users. However, recommendation tasks are inherently constrained to filtering existing items and lack the ability to generate novel concepts, limiting their capacity to fully satisfy user demands and preferences. In this paper, we propose a new paradigm that goes beyond content filtering and selection: directly generating personalized items in a multimodal form, such as images, tailored to individual users. To accomplish this, we leverage any-to-any Large Multimodal Models (LMMs) and train them with both supervised fine-tuning and an online reinforcement learning strategy to equip them with the ability to yield tailored next items for users. Experiments on two benchmark datasets and a user study confirm the efficacy of the proposed method. Notably, the generated images not only align well with users' historical preferences but also exhibit relevance to their potential future interests.
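The two-stage recipe in the abstract (supervised fine-tuning followed by online reinforcement learning) can be pictured with a deliberately tiny stand-in; the `STYLES` vocabulary, the reward function, and both update rules below are illustrative assumptions, not the authors' implementation:

```python
import random

# Toy stand-in for the two-stage training described in the abstract.
# An "LMM" here is just a categorical distribution over item styles;
# STYLES, the reward, and both update rules are made-up simplifications.

STYLES = ["minimalist", "vintage", "colorful"]

def sft_update(policy, history, lr=0.5):
    """Supervised fine-tuning step: move probability mass toward the
    item style the user actually chose next in their history."""
    target = history[-1]
    for s in STYLES:
        policy[s] += lr * ((1.0 if s == target else 0.0) - policy[s])
    return policy

def rl_update(policy, reward_fn, lr=0.2, samples=50, rng=None):
    """Online RL step (REINFORCE-flavoured): sample generations, grow
    the probability of rewarded ones, then renormalize."""
    rng = rng or random.Random(0)
    for _ in range(samples):
        s = rng.choices(STYLES, weights=[policy[x] for x in STYLES])[0]
        policy[s] += lr * reward_fn(s) * policy[s] * (1.0 - policy[s])
    total = sum(policy.values())
    return {s: p / total for s, p in policy.items()}

# SFT on a short interaction history, then RL against a simulated
# preference signal that rewards "minimalist" generations.
policy = {s: 1.0 / len(STYLES) for s in STYLES}
policy = sft_update(policy, ["colorful", "vintage", "minimalist"])
policy = rl_update(policy, lambda s: 1.0 if s == "minimalist" else 0.0)
best = max(policy, key=policy.get)
```

The point of the sketch is the division of labour: SFT anchors the model on observed next items, while the RL stage reinforces whole sampled generations against a preference reward.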
Related papers
- Synergizing Implicit and Explicit User Interests: A Multi-Embedding Retrieval Framework at Pinterest [9.904093205817247]
The retrieval stage plays a critical role in generating a high-recall set of candidate items. Traditional two-tower models struggle in this regard due to limited user-item feature interaction. We propose a novel multi-embedding retrieval framework designed to enhance user interest representation.
arXiv Detail & Related papers (2025-06-29T02:14:21Z)
- Multi-agents based User Values Mining for Recommendation [52.26100802380767]
We propose a zero-shot multi-LLM collaborative framework for effective and accurate user value extraction. We apply text summarization techniques to condense item content while preserving essential meaning. To mitigate hallucinations, we introduce two specialized agent roles: evaluators and supervisors.
arXiv Detail & Related papers (2025-05-02T04:01:31Z)
- Enhancing User Intent for Recommendation Systems via Large Language Models [0.0]
DUIP is a novel framework that combines LSTM networks with Large Language Models (LLMs) to dynamically capture user intent and generate personalized item recommendations. Our findings suggest that DUIP is a promising approach for next-generation recommendation systems, with potential for further improvements in cross-modal recommendations and scalability.
arXiv Detail & Related papers (2025-01-18T20:35:03Z)
- EmbSum: Leveraging the Summarization Capabilities of Large Language Models for Content-Based Recommendations [38.44534579040017]
We introduce EmbSum, a framework that enables offline pre-computation of user and candidate item representations.
The model's ability to generate summaries of user interests serves as a valuable by-product, enhancing its usefulness for personalized content recommendations.
arXiv Detail & Related papers (2024-05-19T04:31:54Z)
- MMGRec: Multimodal Generative Recommendation with Transformer Model [81.61896141495144]
MMGRec aims to introduce a generative paradigm into multimodal recommendation.
We first devise a hierarchical quantization method, Graph CF-RQVAE, to assign a Rec-ID to each item from its multimodal information.
We then train a Transformer-based recommender to generate the Rec-IDs of user-preferred items based on historical interaction sequences.
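As a rough illustration of the Rec-ID idea, residual quantization maps an item vector to a tuple of discrete codes, level by level; the codebooks and vectors below are made up and far smaller than a real RQ-VAE:

```python
# Illustrative sketch of hierarchical Rec-ID assignment via residual
# quantization. The codebooks and item vectors are invented toy data,
# not Graph CF-RQVAE's learned parameters.

CODEBOOKS = [  # two quantization levels, three codewords each
    [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)],
    [(0.0, 0.0), (0.25, 0.0), (0.0, 0.25)],
]

def nearest(vec, codebook):
    """Index of the codeword closest to vec (squared Euclidean)."""
    return min(range(len(codebook)),
               key=lambda i: sum((v - c) ** 2
                                 for v, c in zip(vec, codebook[i])))

def rec_id(item_vec):
    """Assign a hierarchical Rec-ID: at each level, pick the nearest
    codeword, then quantize the leftover residual at the next level."""
    residual, codes = list(item_vec), []
    for codebook in CODEBOOKS:
        idx = nearest(residual, codebook)
        codes.append(idx)
        residual = [r - c for r, c in zip(residual, codebook[idx])]
    return tuple(codes)
```

A sequence model can then be trained to emit these code tuples autoregressively, so "recommending" becomes generating the next Rec-ID token by token.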
arXiv Detail & Related papers (2024-04-25T12:11:27Z)
- Towards Unified Multi-Modal Personalization: Large Vision-Language Models for Generative Recommendation and Beyond [87.1712108247199]
Our goal is to establish a Unified paradigm for Multi-modal Personalization systems (UniMP).
We develop a generic and personalized generative framework that can handle a wide range of personalized needs.
Our methodology enhances the capabilities of foundational language models for personalized tasks.
arXiv Detail & Related papers (2024-03-15T20:21:31Z)
- InteraRec: Screenshot Based Recommendations Using Multimodal Large Language Models [0.6926105253992517]
We introduce a sophisticated and interactive recommendation framework denoted as InteraRec.
InteraRec captures high-frequency screenshots of web pages as users navigate through a website.
We demonstrate the effectiveness of InteraRec in providing users with valuable and personalized offerings.
arXiv Detail & Related papers (2024-02-26T17:47:57Z)
- Ada-Retrieval: An Adaptive Multi-Round Retrieval Paradigm for Sequential Recommendations [50.03560306423678]
We propose Ada-Retrieval, an adaptive multi-round retrieval paradigm for recommender systems.
Ada-Retrieval iteratively refines user representations to better capture potential candidates in the full item space.
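A minimal sketch of the multi-round idea, assuming a dot-product retriever and a simple averaging update for the user vector (both are assumptions for illustration, not Ada-Retrieval's actual model):

```python
# Toy multi-round retrieval: each round retrieves the best remaining
# item, then refines the user vector toward it so later rounds can
# reach different regions of the item space. ITEMS, the scoring rule,
# and the blending update are all illustrative guesses.

ITEMS = {"a": (1.0, 0.0), "b": (0.8, 0.2), "c": (0.0, 1.0), "d": (0.1, 0.9)}

def retrieve(user_vec, k=1, exclude=()):
    """Top-k items by dot-product score, skipping already-found ones."""
    scored = sorted(
        (i for i in ITEMS if i not in exclude),
        key=lambda i: -sum(u * v for u, v in zip(user_vec, ITEMS[i])),
    )
    return scored[:k]

def ada_retrieve(user_vec, rounds=2, k=1):
    """Multi-round retrieval: after each round, blend retrieved item
    vectors into the user representation (simple average here)."""
    found = []
    for _ in range(rounds):
        batch = retrieve(user_vec, k, exclude=found)
        found.extend(batch)
        for item in batch:
            user_vec = tuple(0.5 * u + 0.5 * v
                             for u, v in zip(user_vec, ITEMS[item]))
    return found
```

The structural point is the feedback loop: retrieval results from one round condition the user representation used in the next, unlike a single-shot two-tower lookup.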
arXiv Detail & Related papers (2024-01-12T15:26:40Z)
- PURS: Personalized Unexpected Recommender System for Improving User Satisfaction [76.98616102965023]
We describe a novel Personalized Unexpected Recommender System (PURS) model that incorporates unexpectedness into the recommendation process.
Extensive offline experiments on three real-world datasets illustrate that the proposed PURS model significantly outperforms the state-of-the-art baseline approaches.
arXiv Detail & Related papers (2021-06-05T01:33:21Z)
- Controllable Multi-Interest Framework for Recommendation [64.30030600415654]
We formalize the recommender system as a sequential recommendation problem.
We propose a novel controllable multi-interest framework for sequential recommendation, called ComiRec.
Our framework has been successfully deployed on the offline Alibaba distributed cloud platform.
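One way to picture the multi-interest retrieval step: keep several interest vectors per user and retrieve one candidate per interest. The toy catalog and the aggregation rule below are illustrative simplifications, not ComiRec's exact model:

```python
# Sketch of multi-interest retrieval: a user is represented by several
# interest vectors rather than one, and each interest retrieves its own
# candidate. CATALOG and the dot-product scorer are invented toy data.

CATALOG = {"shoes": (1.0, 0.0), "boots": (0.9, 0.1),
           "novel": (0.0, 1.0), "comic": (0.2, 0.9)}

def top1(interest, exclude=()):
    """Best-scoring item for one interest vector (dot product)."""
    return max((i for i in CATALOG if i not in exclude),
               key=lambda i: sum(a * b for a, b in zip(interest, CATALOG[i])))

def multi_interest_recommend(interests):
    """One candidate per interest vector, deduplicated in order, so the
    final slate covers every captured interest instead of only the
    strongest one."""
    picks = []
    for vec in interests:
        picks.append(top1(vec, exclude=picks))
    return picks
```

With a single averaged user vector, both picks would collapse toward one interest; retrieving per interest is what keeps the slate diverse.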
arXiv Detail & Related papers (2020-05-19T10:18:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.