Zero-Shot Next-Item Recommendation using Large Pretrained Language
Models
- URL: http://arxiv.org/abs/2304.03153v1
- Date: Thu, 6 Apr 2023 15:35:11 GMT
- Title: Zero-Shot Next-Item Recommendation using Large Pretrained Language
Models
- Authors: Lei Wang, Ee-Peng Lim
- Abstract summary: We propose a prompting strategy called Zero-Shot Next-Item Recommendation (NIR) prompting that directs LLMs to make next-item recommendations.
Our strategy incorporates a 3-step prompting approach that guides GPT-3 to carry out subtasks that capture the user's preferences.
We evaluate the proposed approach using GPT-3 on the MovieLens 100K dataset and show that it achieves strong zero-shot performance.
- Score: 16.14557830316297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large language models (LLMs) have achieved impressive zero-shot performance
in various natural language processing (NLP) tasks, demonstrating their
capabilities for inference without training examples. Despite their success, no
research has yet explored the potential of LLMs to perform next-item
recommendations in the zero-shot setting. We have identified two major
challenges that must be addressed to enable LLMs to act effectively as
recommenders. First, the recommendation space can be extremely large for LLMs;
second, LLMs do not know the target user's past interactions and
preferences. To address these gaps, we propose a prompting strategy called
Zero-Shot Next-Item Recommendation (NIR) prompting that directs LLMs to make
next-item recommendations. Specifically, the NIR-based strategy involves using
an external module to generate candidate items based on user-filtering or
item-filtering. Our strategy incorporates a 3-step prompting approach that guides
GPT-3 to carry out subtasks that capture the user's preferences, select
representative previously watched movies, and recommend a ranked list of 10
movies. We evaluate the proposed approach using GPT-3 on the MovieLens 100K
dataset and show
that it achieves strong zero-shot performance, even outperforming some strong
sequential recommendation models trained on the entire training dataset. These
promising results highlight the ample research opportunities to use LLMs as
recommenders. The code can be found at
https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec.
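
Taken together, the abstract describes two components: an external candidate-generation module (user-filtering or item-filtering) and a 3-step prompt sequence. The Python sketch below is a minimal, hypothetical illustration of that pipeline; the `call_llm` helper, the prompt wording, and the overlap-based user-filtering heuristic are assumptions made for illustration, not the authors' exact templates or API. The official implementation is in the linked repository.

```python
# Minimal sketch of the Zero-Shot NIR pipeline described in the abstract.
# The prompt wording and the `call_llm` helper are illustrative assumptions,
# not the exact templates or API used by the authors.
from typing import Callable, Dict, List, Set


def user_filtering_candidates(
    target_history: Set[str],
    all_histories: Dict[str, Set[str]],
    top_users: int = 10,
    num_candidates: int = 20,
) -> List[str]:
    """Collect candidate movies from users whose histories overlap most
    with the target user's history (one simple form of user filtering)."""
    ranked_users = sorted(
        all_histories.items(),
        key=lambda kv: len(kv[1] & target_history),
        reverse=True,
    )[:top_users]
    candidates: List[str] = []
    for _, history in ranked_users:
        for movie in history:
            if movie not in target_history and movie not in candidates:
                candidates.append(movie)
    return candidates[:num_candidates]


def nir_three_step_prompting(
    watched: List[str],
    candidates: List[str],
    call_llm: Callable[[str], str],
) -> str:
    """Run the 3-step prompting: infer preferences, pick representative
    watched movies, then rank 10 recommendations from the candidates."""
    # Step 1: capture the user's preferences from the watched movies.
    preferences = call_llm(
        f"The user has watched: {', '.join(watched)}.\n"
        "Summarize the user's movie preferences briefly."
    )
    # Step 2: select the most representative previously watched movies.
    representative = call_llm(
        f"The user has watched: {', '.join(watched)}.\n"
        f"Their preferences: {preferences}\n"
        "Select the five most representative movies from the watched list."
    )
    # Step 3: recommend a ranked list of 10 movies from the candidate set.
    return call_llm(
        f"Candidate movies: {', '.join(candidates)}.\n"
        f"The user's preferences: {preferences}\n"
        f"Representative watched movies: {representative}\n"
        "Recommend 10 movies from the candidate list, ranked from most to "
        "least relevant, as a numbered list."
    )
```

In this sketch each step feeds its output into the next prompt, mirroring the abstract's description of capturing preferences, selecting representative watched movies, and finally ranking 10 candidates; item-filtering (selecting candidates by item-to-item similarity) could replace the user-filtering heuristic without changing the prompting steps.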
Related papers
- HLLM: Enhancing Sequential Recommendations via Hierarchical Large Language Models for Item and User Modeling [21.495443162191332]
Large Language Models (LLMs) have achieved remarkable success in various fields, prompting several studies to explore their potential in recommendation systems.
We propose a novel Hierarchical Large Language Model (HLLM) architecture designed to enhance sequential recommendation systems.
HLLM achieves excellent scalability, with the largest configuration utilizing 7B parameters for both item feature extraction and user interest modeling.
arXiv Detail & Related papers (2024-09-19T13:03:07Z) - Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
arXiv Detail & Related papers (2024-09-03T04:55:03Z) - Improve Temporal Awareness of LLMs for Sequential Recommendation [61.723928508200196]
Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks.
However, LLMs fall short in recognizing and utilizing temporal information, resulting in poor performance on tasks that require an understanding of sequential data.
We propose three prompting strategies to exploit temporal information within historical interactions for LLM-based sequential recommendation.
arXiv Detail & Related papers (2024-05-05T00:21:26Z) - LLMRec: Benchmarking Large Language Models on Recommendation Task [54.48899723591296]
The application of Large Language Models (LLMs) in the recommendation domain has not been thoroughly investigated.
We benchmark several popular off-the-shelf LLMs on five recommendation tasks, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization.
The benchmark results indicate that LLMs display only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.
arXiv Detail & Related papers (2023-08-23T16:32:54Z) - ReLLa: Retrieval-enhanced Large Language Models for Lifelong Sequential Behavior Comprehension in Recommendation [43.270424225285105]
We focus on adapting and empowering a pure large language model for zero-shot and few-shot recommendation tasks.
We propose Retrieval-enhanced Large Language models (ReLLa) for recommendation tasks in both zero-shot and few-shot settings.
arXiv Detail & Related papers (2023-08-22T02:25:04Z) - LLM-Rec: Personalized Recommendation via Prompting Large Language Models [62.481065357472964]
Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning.
This study introduces a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies of text enrichment for improving personalized text-based recommendations.
arXiv Detail & Related papers (2023-07-24T18:47:38Z) - GenRec: Large Language Model for Generative Recommendation [41.22833600362077]
This paper presents an innovative approach to recommendation systems using large language models (LLMs) based on text data.
GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations.
Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommendation systems.
arXiv Detail & Related papers (2023-07-02T02:37:07Z) - Large Language Models are Zero-Shot Rankers for Recommender Systems [76.02500186203929]
This work aims to investigate the capacity of large language models (LLMs) to act as the ranking model for recommender systems.
We show that LLMs have promising zero-shot ranking abilities but struggle to perceive the order of historical interactions.
We demonstrate that these issues can be alleviated using specially designed prompting and bootstrapping strategies.
arXiv Detail & Related papers (2023-05-15T17:57:39Z) - PALR: Personalization Aware LLMs for Recommendation [7.407353565043918]
PALR aims to combine user historical behaviors (such as clicks, purchases, and ratings) with large language models (LLMs) to generate user-preferred items.
Our solution outperforms state-of-the-art models on various sequential recommendation tasks.
arXiv Detail & Related papers (2023-05-12T17:21:33Z) - Recommendation as Instruction Following: A Large Language Model
Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)