Incorporate LLMs with Influential Recommender System
- URL: http://arxiv.org/abs/2409.04827v1
- Date: Sat, 07 Sep 2024 13:41:37 GMT
- Title: Incorporate LLMs with Influential Recommender System
- Authors: Mingze Wang, Shuxian Bi, Wenjie Wang, Chongming Gao, Yangyang Li, Fuli Feng
- Abstract summary: Proactive recommender systems recommend a sequence of items to guide user interest toward a target item.
Existing methods struggle to construct a coherent influence path built from items the user is likely to enjoy.
We introduce a novel approach named LLM-based Influence Path Planning (LLM-IPP)
Our approach maintains coherence between consecutive recommendations and enhances user acceptability of the recommended items.
- Score: 34.5820082133773
- Abstract: Recommender systems have achieved increasing accuracy over the years. However, this precision often leads users to narrow their interests, resulting in issues such as limited diversity and the creation of echo chambers. Current research addresses these challenges through proactive recommender systems, which recommend a sequence of items (called an influence path) to guide user interest toward a target item. However, existing methods struggle to construct a coherent influence path built from items the user is likely to enjoy. In this paper, we leverage the exceptional path-planning and instruction-following abilities of Large Language Models (LLMs) and introduce a novel approach named LLM-based Influence Path Planning (LLM-IPP). Our approach maintains coherence between consecutive recommendations and enhances the user acceptability of the recommended items. To evaluate LLM-IPP, we implement various user simulators and metrics to measure user acceptability and path coherence. Experimental results demonstrate that LLM-IPP significantly outperforms traditional proactive recommender systems. This study pioneers the integration of LLMs into proactive recommender systems, offering a reliable and user-engaging methodology for future recommendation technologies.
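To make the approach concrete, here is a minimal sketch of how an LLM could be prompted to plan an influence path; the `query_llm` callable and the prompt wording are illustrative assumptions, not the authors' implementation:

```python
# Hedged sketch of LLM-based Influence Path Planning (LLM-IPP).
# `query_llm` stands in for any chat-completion API; the prompt format
# is an assumption, not the paper's exact template.

def build_ipp_prompt(user_history, target_item, path_length=5):
    """Ask the LLM for a coherent item sequence leading from the
    user's current interests toward the target item."""
    return (
        "You are a recommender planning an influence path.\n"
        f"The user recently enjoyed: {', '.join(user_history)}.\n"
        f"List {path_length} items, one per line, moving step by step "
        f"from these interests toward '{target_item}'. Each item must "
        "be enjoyable on its own and closely related to the previous one."
    )

def plan_influence_path(query_llm, user_history, target_item, path_length=5):
    reply = query_llm(build_ipp_prompt(user_history, target_item, path_length))
    items = [line.strip("-* ").strip() for line in reply.splitlines() if line.strip()]
    return items[:path_length]  # the influence path, ending near the target
```

In the paper's evaluation setup, such a path would then be scored by user simulators for acceptability and step-to-step coherence.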
Related papers
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses an LLM to create item embeddings that bolster the performance of sequential recommender systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
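A rough sketch of the idea behind LLMEmb above: derive item embeddings from a language model's reading of item text, so semantically related items sit close together regardless of popularity. The sentence-transformers encoder here is a stand-in, not the paper's model:

```python
# Illustrative sketch, not the LLMEmb pipeline: embed item metadata with
# a pretrained text encoder and use the vectors as an item table.
from sentence_transformers import SentenceTransformer
import numpy as np

def embed_items(items):
    """items: list of dicts with 'title' and 'description' keys."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # stand-in encoder
    texts = [f"{it['title']}. {it['description']}" for it in items]
    vecs = encoder.encode(texts, normalize_embeddings=True)
    return np.asarray(vecs)  # shape (num_items, dim)

# Such embeddings can initialize (or replace) the item embedding table
# of a sequential recommender, giving long-tail items useful neighbors.
```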
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose Laser, a parameter-efficient Large Language Model bi-tuning framework for sequential recommendation with collaborative information.
In Laser, the prefix incorporates user-item collaborative information and adapts the LLM to the recommendation task, while the suffix maps the LLM's output embeddings from the language space to the recommendation space for follow-up item recommendation.
Its M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate the diverse, user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
arXiv Detail & Related papers (2024-09-03T04:55:03Z)
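As a hedged illustration of the bi-tuning idea above: trainable prefix embeddings (carrying collaborative information) and a trainable suffix projection (mapping outputs into item space) wrap a frozen backbone. A tiny TransformerEncoder stands in for the frozen LLM, the M-Former step is omitted, and all names and sizes are assumptions:

```python
import torch
import torch.nn as nn

class BiTunedRecommender(nn.Module):
    """Sketch of prefix/suffix bi-tuning around a frozen backbone."""
    def __init__(self, dim=64, num_items=1000, prefix_len=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        for p in self.backbone.parameters():      # keep the "LLM" frozen
            p.requires_grad = False
        self.prefix = nn.Parameter(torch.randn(prefix_len, dim) * 0.02)
        self.suffix = nn.Linear(dim, num_items)   # language -> item space
        self.item_emb = nn.Embedding(num_items, dim)

    def forward(self, item_ids):                  # (batch, seq_len)
        tokens = self.item_emb(item_ids)
        prefix = self.prefix.expand(item_ids.size(0), -1, -1)
        hidden = self.backbone(torch.cat([prefix, tokens], dim=1))
        return self.suffix(hidden[:, -1])         # next-item logits

logits = BiTunedRecommender()(torch.randint(0, 1000, (2, 10)))
```

Only the prefix, suffix, and item table receive gradients, which is what makes the scheme parameter-efficient.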
- All Roads Lead to Rome: Unveiling the Trajectory of Recommender Systems Across the LLM Era [63.649070507815715]
We aim to place recommender systems in a broader picture and pave the way for more comprehensive solutions in future research.
We identify two evolution paths of modern recommender systems: list-wise recommendation and conversational recommendation.
Along both paths, the information effectiveness of recommendations increases while the user's acquisition cost decreases.
arXiv Detail & Related papers (2024-07-14T05:02:21Z)
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLMs for recommendation tasks incurs high computational costs and raises alignment issues with existing systems.
In this work, our proposed strategy, LANE, aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z)
- LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendations and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z)
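A loose sketch of the subgraph-reasoning recipe above: collect the knowledge-graph triples connecting a user's liked items to a candidate, then ask an LLM to turn them into an explanation. The `kg` triple list and `query_llm` callable are assumed interfaces, not the paper's:

```python
def subgraph_triples(kg, liked_items, candidate, max_hops=2):
    """kg: iterable of (head, relation, tail) triples.
    Crude expansion: keep triples touching already-reached nodes."""
    nodes, triples = set(liked_items) | {candidate}, []
    for _ in range(max_hops):
        for h, r, t in kg:
            if h in nodes or t in nodes:
                triples.append((h, r, t))
                nodes.update((h, t))
    return list(set(triples))  # deduplicated local subgraph

def explain_recommendation(query_llm, kg, liked_items, candidate):
    facts = "\n".join(f"{h} --{r}--> {t}"
                      for h, r, t in subgraph_triples(kg, liked_items, candidate))
    prompt = (f"Knowledge-graph facts:\n{facts}\n\n"
              f"The user liked {', '.join(liked_items)}. Explain briefly "
              f"why '{candidate}' is a good recommendation.")
    return query_llm(prompt)
```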
- Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using LLM Optimizers [29.739736497044664]
We present a training-free approach for optimizing generative recommenders.
We propose a generative explore-exploit method that can not only exploit generated items with high engagement, but also actively explore and discover hidden population preferences.
arXiv Detail & Related papers (2024-06-07T20:41:59Z)
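The explore-exploit loop described above might look roughly like the following, where fresh LLM generations probe hidden preferences and observed engagement is exploited; `query_llm` and `get_engagement` are hypothetical stand-ins:

```python
import random

def explore_exploit(query_llm, get_engagement, rounds=10, epsilon=0.3):
    """Epsilon-greedy-style loop over LLM-generated items (a sketch,
    not the paper's optimizer)."""
    pool = {}                                   # item -> observed engagement
    for _ in range(rounds):
        if pool and random.random() > epsilon:  # exploit the current best
            item = max(pool, key=pool.get)
        else:                                   # explore a new generation
            known = ", ".join(pool) or "none yet"
            item = query_llm(
                f"Already tried: {known}. Suggest one new item "
                "a user with hidden tastes might enjoy."
            ).strip()
        pool[item] = get_engagement(item)       # user feedback signal
    return max(pool, key=pool.get)
```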
- Empowering Few-Shot Recommender Systems with Large Language Models -- Enhanced Representations [0.0]
Large language models (LLMs) offer novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems.
Our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs' involvement in recommender systems.
arXiv Detail & Related papers (2023-12-21T03:50:09Z)
- DRDT: Dynamic Reflection with Divergent Thinking for LLM-based Sequential Recommendation [53.62727171363384]
We introduce a novel reasoning principle: Dynamic Reflection with Divergent Thinking.
At its core is dynamic reflection, a process that emulates human learning through probing, critiquing, and reflecting.
We evaluate our approach on three datasets using six pre-trained LLMs.
arXiv Detail & Related papers (2023-12-18T16:41:22Z)
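A minimal sketch of a dynamic-reflection loop in this spirit, with the model proposing, critiquing, and revising its own recommendations; the prompts are invented for illustration:

```python
def reflect_and_recommend(query_llm, history, n_rounds=3):
    """Propose -> probe/critique -> revise, repeated n_rounds times."""
    proposal = query_llm(f"User history: {history}. Propose 5 next items.")
    for _ in range(n_rounds):
        critique = query_llm(
            f"History: {history}\nProposal: {proposal}\n"
            "Probe and critique this proposal: which items ignore "
            "the user's evolving preferences, and why?"
        )
        proposal = query_llm(
            f"History: {history}\nProposal: {proposal}\n"
            f"Critique: {critique}\nRevise the proposal accordingly."
        )
    return proposal  # refined recommendation list
```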
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Fisher-Weighted Merge of Contrastive Learning Models in Sequential Recommendation [0.0]
We are the first to apply the Fisher-Merging method to Sequential Recommendation, addressing and resolving practical challenges associated with it.
We demonstrate the effectiveness of our proposed methods, highlighting their potential to advance the state-of-the-art in sequential learning and recommendation systems.
arXiv Detail & Related papers (2023-07-05T05:58:56Z)
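Fisher merging, in its standard form, averages each parameter across models with weights given by per-parameter Fisher information estimates (typically accumulated squared gradients). A generic sketch of that technique, not the paper's exact recipe:

```python
def fisher_merge(state_dicts, fishers, eps=1e-8):
    """theta = sum_i(F_i * theta_i) / sum_i(F_i), per parameter.
    state_dicts, fishers: equal-length lists of {name: tensor} with
    matching keys; works on torch/numpy tensors or plain floats."""
    merged = {}
    for name in state_dicts[0]:
        num = sum(f[name] * sd[name] for f, sd in zip(fishers, state_dicts))
        den = sum(f[name] for f in fishers)
        merged[name] = num / (den + eps)
    return merged
```

With two contrastively trained checkpoints, `fisher_merge([sd_a, sd_b], [f_a, f_b])` would yield a single merged state dict.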
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form, and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
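To illustrate the instruction format described above, here is one invented template with preference, intention, task-form, and context slots (the paper hand-designs 39 such templates; this particular layout is an assumption):

```python
# One hypothetical instruction template in the spirit of the entry above.
TEMPLATE = (
    "Preference: the user enjoys {preference}.\n"
    "Intention: the user currently wants {intention}.\n"
    "Task: {task_form}.\n"
    "Context: {context}\n"
    "Respond with a ranked list of recommendations."
)

instruction = TEMPLATE.format(
    preference="lighthearted sci-fi films",
    intention="something to watch with family tonight",
    task_form="recommend 5 movies",
    context="weekend evening; has seen The Martian and WALL-E",
)
print(instruction)  # this string is fed to the instruction-tuned LLM
```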