GOT4Rec: Graph of Thoughts for Sequential Recommendation
- URL: http://arxiv.org/abs/2411.14922v1
- Date: Fri, 22 Nov 2024 13:24:01 GMT
- Title: GOT4Rec: Graph of Thoughts for Sequential Recommendation
- Authors: Zewen Long, Liang Wang, Shu Wu, Qiang Liu, Liang Wang
- Abstract summary: We propose GOT4Rec, a sequential recommendation method that utilizes the graph of thoughts (GoT) prompting strategy.
We identify and utilize three key types of information within user history sequences: short-term interests, long-term interests and collaborative information from other users.
Extensive experiments on real-world datasets demonstrate the effectiveness of GOT4Rec, indicating that it outperforms existing state-of-the-art baselines.
- Abstract: With the advancement of large language models (LLMs), researchers have explored various methods to optimally leverage their comprehension and generation capabilities in sequential recommendation scenarios. However, several challenges persist in this endeavor. Firstly, most existing approaches rely on the input-output prompting paradigm, which can result in irrelevant or inaccurate responses. Secondly, while there have been attempts to enhance LLMs using prompting strategies such as chain-of-thought (CoT), these efforts have not fully harnessed the reasoning abilities of LLMs or effectively captured the multifaceted information contained within user sequences. To address these limitations, we propose GOT4Rec, a sequential recommendation method that utilizes the graph of thoughts (GoT) prompting strategy. Specifically, we identify and utilize three key types of information within user history sequences: short-term interests, long-term interests and collaborative information from other users. Our approach enables LLMs to independently reason and generate recommendations based on these distinct types of information, subsequently aggregating the results within the GoT framework to derive the final recommended items. This method allows LLMs, with enhanced reasoning capabilities, to more effectively consider the diverse information within user sequences, resulting in more accurate recommendations and more comprehensive explanations. Extensive experiments on real-world datasets demonstrate the effectiveness of GOT4Rec, indicating that it outperforms existing state-of-the-art baselines. Our code is available at https://anonymous.4open.science/r/GOT4Rec-ED99.
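To make the method concrete, below is a minimal sketch of how the three reasoning branches and the aggregation step described in the abstract could be wired together. It is an illustrative approximation under assumptions, not the authors' implementation: the `call_llm` callable, the prompt wording, the five-item recency window, and the `neighbor_histories` input are all hypothetical placeholders.
```python
# Minimal sketch of a GoT-style prompting pipeline for sequential
# recommendation: three independent reasoning branches whose outputs
# are merged by a final aggregation prompt. `call_llm` stands in for
# any chat-completion API; all prompt wording is illustrative.
from typing import Callable, List


def recommend_with_got(
    history: List[str],
    neighbor_histories: List[List[str]],
    call_llm: Callable[[str], str],
    top_k: int = 10,
) -> str:
    recent, full = history[-5:], history  # assumed 5-item recency window

    # Branch 1: short-term interests from the most recent interactions.
    short_term = call_llm(
        f"The user recently interacted with: {', '.join(recent)}. "
        f"Infer their short-term interests and suggest {top_k} candidate items."
    )

    # Branch 2: long-term interests from the full history.
    long_term = call_llm(
        f"The user's full interaction history is: {', '.join(full)}. "
        f"Infer their long-term preferences and suggest {top_k} candidate items."
    )

    # Branch 3: collaborative information from users with similar histories.
    neighbors = " | ".join(", ".join(h) for h in neighbor_histories)
    collaborative = call_llm(
        f"Users with similar tastes interacted with: {neighbors}. "
        f"Suggest {top_k} items this user may also like."
    )

    # Aggregation node: merge the three thought branches into a final list.
    return call_llm(
        "Candidate recommendations from three reasoning branches:\n"
        f"1) short-term interests: {short_term}\n"
        f"2) long-term interests: {long_term}\n"
        f"3) collaborative information: {collaborative}\n"
        f"Aggregate them and return the {top_k} most likely next items, "
        "with a brief explanation for each."
    )
```
In the GoT framework these branches correspond to separate thought nodes whose outputs are merged by an aggregation node; the sequential calls above are only for readability and could equally be issued in parallel.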
Related papers
- Large Language Models as Narrative-Driven Recommenders
Large language models (LLMs) have been shown to excel in processing general natural language queries.
We compare the performance of 38 open- and closed-source LLMs of various sizes in a movie recommendation setting.
Our findings demonstrate the ability of LLMs to generate contextually relevant movie recommendations.
arXiv Detail & Related papers (2024-10-17T14:39:24Z) - Improve Temporal Awareness of LLMs for Sequential Recommendation [61.723928508200196]
Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks.
LLMs fall short in recognizing and utilizing temporal information, resulting in poor performance on tasks that require an understanding of sequential data.
We propose three prompting strategies to exploit temporal information within historical interactions for LLM-based sequential recommendation.
arXiv Detail & Related papers (2024-05-05T00:21:26Z) - Uncertainty-Aware Explainable Recommendation with Large Language Models [15.229417987212631]
We develop a model that utilizes the ID vectors of user and item inputs as prompts for GPT-2.
We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and explanation task.
Our method achieves 1.59 DIV, 0.57 USR and 0.41 FCR on the Yelp, TripAdvisor and Amazon datasets, respectively.
arXiv Detail & Related papers (2024-01-31T14:06:26Z) - Unlocking the Potential of Large Language Models for Explainable
Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - DRDT: Dynamic Reflection with Divergent Thinking for LLM-based
Sequential Recommendation [53.62727171363384]
We introduce a novel reasoning principle: Dynamic Reflection with Divergent Thinking.
Our methodology is dynamic reflection, a process that emulates human learning through probing, critiquing, and reflecting.
We evaluate our approach on three datasets using six pre-trained LLMs.
arXiv Detail & Related papers (2023-12-18T16:41:22Z) - LlamaRec: Two-Stage Recommendation using Large Language Models for
Ranking [10.671747198171136]
We propose a two-stage framework using large language models for ranking-based recommendation (LlamaRec).
In particular, we use small-scale sequential recommenders to retrieve candidates based on the user interaction history.
LlamaRec consistently achieves superior performance across datasets in both recommendation quality and efficiency.
arXiv Detail & Related papers (2023-10-25T06:23:48Z) - Query-Dependent Prompt Evaluation and Optimization with Offline Inverse
RL [62.824464372594576]
We aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data.
arXiv Detail & Related papers (2023-09-13T01:12:52Z) - LLM-Rec: Personalized Recommendation via Prompting Large Language Models [62.481065357472964]
Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning.
This study introduces a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies for text enrichment to improve personalized text-based recommendations.
arXiv Detail & Related papers (2023-07-24T18:47:38Z) - Zero-Shot Next-Item Recommendation using Large Pretrained Language
Models [16.14557830316297]
We propose a prompting strategy called Zero-Shot Next-Item Recommendation (NIR) prompting that directs LLMs to make next-item recommendations.
Our strategy incorporates a 3-step prompting process that guides GPT-3 to carry out subtasks that capture the user's preferences.
We evaluate the proposed approach using GPT-3 on the MovieLens 100K dataset and show that it achieves strong zero-shot performance.
arXiv Detail & Related papers (2023-04-06T15:35:11Z) - Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
arXiv Detail & Related papers (2023-02-22T17:44:15Z)