Recommendation as Language Processing (RLP): A Unified Pretrain,
Personalized Prompt & Predict Paradigm (P5)
- URL: http://arxiv.org/abs/2203.13366v1
- Date: Thu, 24 Mar 2022 22:13:23 GMT
- Title: Recommendation as Language Processing (RLP): A Unified Pretrain,
Personalized Prompt & Predict Paradigm (P5)
- Authors: Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, Yongfeng Zhang
- Abstract summary: We present a flexible and unified text-to-text paradigm called "Pretrain, Personalized Prompt, and Predict Paradigm" (P5) for recommendation.
All data such as user-item interactions, item metadata, and user reviews are converted to a common format -- natural language sequences.
P5 learns different tasks with the same language modeling objective during pretraining.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: For a long time, different recommendation tasks have typically
required designing task-specific architectures and training objectives. As a
result, it is hard to transfer the learned knowledge and representations from
one task to another, which restricts the generalization ability of existing
recommendation approaches; for example, a sequential recommendation model can
hardly be applied or transferred to a review generation method. To deal with
such issues,
considering that language grounding is a powerful medium to describe and
represent various problems or tasks, we present a flexible and unified
text-to-text paradigm called "Pretrain, Personalized Prompt, and Predict
Paradigm" (P5) for recommendation, which unifies various recommendation tasks
in a shared framework. In P5, all data such as user-item interactions, item
metadata, and user reviews are converted to a common format -- natural language
sequences. The rich information in natural language assists P5 in capturing
deeper semantics for recommendation. P5 learns different tasks with the same
language modeling objective during pretraining. Thus, it possesses the
potential to serve as the foundation model for downstream recommendation tasks,
allows easy integration with other modalities, and enables instruction-based
recommendation, which will revolutionize the technical form of recommender
systems toward a unified recommendation engine. With adaptive personalized prompts
for different users, P5 is able to make predictions in a zero-shot or few-shot
manner and largely reduces the necessity for extensive fine-tuning. On several
recommendation benchmarks, we conduct experiments to show the effectiveness of
our generative approach. We will release our prompts and pretrained P5 language
model to help advance future research on Recommendation as Language Processing
(RLP) and Personalized Foundation Models.
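The abstract above describes converting user-item interactions, metadata, and reviews into natural-language sequences so that every recommendation task shares one language-modeling objective. A minimal sketch of that conversion follows; the template wording and identifiers are hypothetical illustrations, not P5's actual prompt collection:

```python
# Illustrative sketch: render a user-item interaction record as a
# (source, target) text pair so that sequential recommendation becomes
# a text-to-text language-modeling problem, in the spirit of P5.

def to_sequential_prompt(user_id, history, next_item):
    """Build a natural-language input and target for one training example.

    The template here is a made-up example; P5 uses a collection of
    hand-crafted personalized prompt templates in a similar spirit.
    """
    items = ", ".join(str(i) for i in history)
    source = (
        f"User_{user_id} has purchased items {items}. "
        f"What is the next item the user will interact with?"
    )
    target = f"item_{next_item}"
    return source, target

src, tgt = to_sequential_prompt(23, [1101, 589, 7471], 866)
print(src)
print(tgt)
```

Other tasks (rating prediction, review generation, explanation) would be rendered the same way with different templates, which is what lets a single encoder-decoder model train on all of them with one objective.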
Related papers
- MetaKP: On-Demand Keyphrase Generation [52.48698290354449]
We introduce on-demand keyphrase generation, a novel paradigm in which generated keyphrases must conform to specific high-level goals or intents.
We present MetaKP, a large-scale benchmark comprising four datasets, 7500 documents, and 3760 goals across news and biomedical domains with human-annotated keyphrases.
We demonstrate the potential of our method to serve as a general NLP infrastructure, exemplified by its application in epidemic event detection from social media.
arXiv Detail & Related papers (2024-06-28T19:02:59Z)
- Evaluating Large Language Models as Generative User Simulators for Conversational Recommendation [20.171574438536673]
We introduce a new protocol to measure the degree to which language models can accurately emulate human behavior in conversational recommendation.
We demonstrate these tasks effectively reveal deviations of language models from human behavior, and offer insights on how to reduce the deviations with model selection and prompting strategies.
arXiv Detail & Related papers (2024-03-13T18:16:21Z)
- Uncertainty-Aware Explainable Recommendation with Large Language Models [15.229417987212631]
We develop a model that utilizes the ID vectors of user and item inputs as prompts for GPT-2.
We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and explanation task.
Our method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor, and Amazon datasets, respectively.
arXiv Detail & Related papers (2024-01-31T14:06:26Z)
- Reformulating Sequential Recommendation: Learning Dynamic User Interest with Content-enriched Language Modeling [18.297332953450514]
We propose LANCER, which leverages the semantic understanding capabilities of pre-trained language models to generate personalized recommendations.
Our approach bridges the gap between language models and recommender systems, resulting in more human-like recommendations.
arXiv Detail & Related papers (2023-09-19T08:54:47Z)
- Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z)
- VIP5: Towards Multimodal Foundation Models for Recommendation [47.32368265586631]
We propose the development of a multimodal foundation model (MFM) to unify various modalities and recommendation tasks.
To achieve this, we introduce multimodal personalized prompts to accommodate multiple modalities under a shared format.
We also propose a parameter-efficient training method for foundation models, which involves freezing the P5 backbone and fine-tuning lightweight adapters.
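The parameter-efficient scheme just described (freeze the P5 backbone, fine-tune only lightweight adapters) can be sketched as a parameter-selection pattern. The `Param` class and parameter names below are illustrative stand-ins, not VIP5's actual implementation:

```python
# Sketch of parameter-efficient tuning: mark only adapter parameters
# as trainable and freeze everything else in the backbone.

class Param:
    """Toy stand-in for a model parameter with a trainable flag."""
    def __init__(self, name, trainable=True):
        self.name = name
        self.trainable = trainable

def freeze_backbone(params):
    """Freeze all parameters except those belonging to adapters,
    and return the remaining trainable set."""
    for p in params:
        p.trainable = "adapter" in p.name
    return [p for p in params if p.trainable]

params = [
    Param("encoder.block0.attention.weight"),
    Param("encoder.block0.adapter.down_proj"),
    Param("decoder.block0.adapter.up_proj"),
    Param("lm_head.weight"),
]
trainable = freeze_backbone(params)
print([p.name for p in trainable])
```

In a real PyTorch setup the same effect is achieved by setting `requires_grad = False` on backbone parameters and passing only the adapter parameters to the optimizer.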
arXiv Detail & Related papers (2023-05-23T17:43:46Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
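The template-instantiation step described above can be sketched as follows; the template text and field names are hypothetical illustrations, not one of the paper's 39 templates:

```python
# Sketch: fill an instruction template with a user's preference,
# intention, task form, and context to produce personalized
# instruction data for an LLM-based recommender.

TEMPLATE = (
    "The user prefers {preference}. The user now wants {intention}. "
    "Task: {task_form}. Context: {context}."
)

def fill_template(preference, intention, task_form, context):
    """Instantiate the instruction template for one user."""
    return TEMPLATE.format(
        preference=preference,
        intention=intention,
        task_form=task_form,
        context=context,
    )

print(fill_template(
    preference="science-fiction novels",
    intention="a gift for a friend",
    task_form="rank the candidate items",
    context="browsing the books category",
))
```

Pairing many such filled templates with ground-truth interactions is what turns logged user data into a large instruction-tuning corpus.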
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- TuringAdvice: A Generative and Dynamic Evaluation of Language Use [90.3029315711237]
We propose TuringAdvice, a new challenge task and dataset for language understanding models.
Given a written situation that a real person is currently facing, a model must generate helpful advice in natural language.
Empirical results show that today's models struggle at TuringAdvice.
arXiv Detail & Related papers (2020-04-07T18:00:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.