Leveraging LLMs for Influence Path Planning in Proactive Recommendation
- URL: http://arxiv.org/abs/2409.04827v2
- Date: Wed, 30 Apr 2025 19:55:56 GMT
- Title: Leveraging LLMs for Influence Path Planning in Proactive Recommendation
- Authors: Mingze Wang, Shuxian Bi, Wenjie Wang, Chongming Gao, Yangyang Li, Fuli Feng
- Abstract summary: Proactive recommender systems aim to gradually guide a user's interest toward a target item beyond historical interests. IRS designs a sequential model for influence path planning, but its paths often fail to include the target item and lack coherence. We propose an LLM-based Influence Path Planning (LLM-IPP) method to generate coherent and effective influence paths.
- Score: 34.5820082133773
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems are pivotal in Internet social platforms, yet they often cater to users' historical interests, leading to critical issues like echo chambers. To broaden user horizons, proactive recommender systems aim to gradually guide a user's interest toward a target item beyond historical interests through an influence path, i.e., a sequence of recommended items. As a representative approach, the Influential Recommender System (IRS) designs a sequential model for influence path planning, but it struggles to include the target item and to keep the path coherent. To address these issues, we leverage the advanced planning capabilities of Large Language Models (LLMs) and propose an LLM-based Influence Path Planning (LLM-IPP) method. LLM-IPP generates coherent and effective influence paths by capturing user interest shifts and item characteristics. We introduce novel evaluation metrics and user simulators to benchmark LLM-IPP against traditional methods. Our experiments demonstrate that LLM-IPP significantly improves user acceptability and path coherence, outperforming existing approaches.
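The paper itself ships no code here, but the core prompting idea can be made concrete. Below is a minimal sketch, assuming a generic `complete` function as the LLM client and an invented prompt wording; it builds an influence-path request from a user's history and a target item, then parses the numbered reply.

```python
# Minimal sketch of LLM-based influence path planning (LLM-IPP style).
# `complete` is a placeholder for any chat/completion API; the prompt
# wording is an assumption, not the authors' exact template.
import re

def complete(prompt: str) -> str:
    raise NotImplementedError("plug in your LLM client here")

def plan_influence_path(history: list[str], target: str, k: int = 5) -> list[str]:
    """Ask the LLM for k items that bridge the user's interests to the target."""
    prompt = (
        f"A user has recently enjoyed: {', '.join(history)}.\n"
        f"Recommend a sequence of {k} items that gradually shifts this user's "
        f"interest toward '{target}'. Each item should be close to the previous "
        f"one, and the last item must be '{target}'.\n"
        "Answer as a numbered list, one item per line."
    )
    reply = complete(prompt)
    # Parse lines like "1. Item name" into a plain list of item names.
    return [m.group(1).strip() for m in re.finditer(r"^\s*\d+[.)]\s*(.+)$", reply, re.M)]
```

In practice, the parsed names would still need to be grounded against the item catalog before being recommended.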
Related papers
- RecGPT Technical Report [57.84251629878726]
We propose RecGPT, a next-generation framework that places user intent at the center of the recommendation pipeline. RecGPT integrates large language models into key stages of user interest mining, item retrieval, and explanation generation. Online experiments demonstrate that RecGPT achieves consistent performance gains across stakeholders.
arXiv Detail & Related papers (2025-07-30T17:55:06Z)
- PGPO: Enhancing Agent Reasoning via Pseudocode-style Planning Guided Preference Optimization [58.465778756331574]
We propose a pseudocode-style Planning Guided Preference Optimization method called PGPO for effective agent learning. With two planning-oriented rewards, PGPO further enhances LLM agents' ability to generate high-quality P-code Plans. Experiments show that PGPO achieves superior performance on representative agent benchmarks and outperforms the current leading baselines.
arXiv Detail & Related papers (2025-06-02T09:35:07Z)
- Learning to Reason and Navigate: Parameter Efficient Action Planning with Large Language Models [63.765846080050906]
This paper proposes a novel parameter-efficient action planner using large language models (PEAP-LLM) to generate a single-step instruction at each location. Experiments show the superiority of our proposed model on REVERIE compared to the previous state-of-the-art.
arXiv Detail & Related papers (2025-05-12T12:38:20Z)
- User Feedback Alignment for LLM-powered Exploration in Large-scale Recommendation Systems [26.652050105571206]
Exploration, the act of broadening user experiences beyond their established preferences, is challenging in large-scale recommendation systems.
This paper introduces a novel approach combining hierarchical planning with LLM inference-time scaling to improve recommendation relevancy without compromising novelty.
arXiv Detail & Related papers (2025-04-07T21:44:12Z)
- Large Language Model driven Policy Exploration for Recommender Systems [50.70228564385797]
Offline RL policies trained on static user data are vulnerable to distribution shift when deployed in dynamic online environments.
Online RL-based recommender systems also face challenges in production deployment, due to the risks of exposing users to untrained or unstable policies.
Large Language Models (LLMs) offer a promising solution to mimic user objectives and preferences for pre-training policies offline.
We propose an Interaction-Augmented Learned Policy (iALP) that utilizes user preferences distilled from an LLM.
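As a loose illustration of distilling preferences from an LLM (not the paper's iALP algorithm), the sketch below lets the LLM play a simulated user that scores candidate items, producing reward-labeled tuples for offline policy pre-training; the rating prompt and the 1-5 scale are assumptions.

```python
# Hedged sketch: turning LLM feedback into (state, action, reward) tuples
# for offline policy pre-training. `complete` is a placeholder LLM client.
def complete(prompt: str) -> str:  # placeholder LLM client (assumption)
    raise NotImplementedError

def llm_reward(profile: str, item: str) -> float:
    reply = complete(
        f"User profile: {profile}\n"
        f"On a scale of 1-5, how likely is this user to enjoy '{item}'? "
        "Answer with a single number."
    )
    try:
        return (float(reply.strip()) - 1.0) / 4.0  # normalize to [0, 1]
    except ValueError:
        return 0.0  # unparseable reply -> treat as no reward

def collect_transitions(profiles, candidate_items):
    """Build an offline dataset of LLM-scored interactions."""
    return [
        (profile, item, llm_reward(profile, item))
        for profile in profiles
        for item in candidate_items
    ]
```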
arXiv Detail & Related papers (2025-01-23T16:37:44Z)
- RosePO: Aligning LLM-based Recommenders with Human Values [38.029251417802044]
We propose a general framework -- Recommendation with smoothing personalized Preference Optimization (RosePO).
RosePO better aligns with customized human values during the post-training stage.
Evaluation on three real-world datasets demonstrates the effectiveness of our method.
arXiv Detail & Related papers (2024-10-16T12:54:34Z)
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Models (LLMs) have the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
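To illustrate the general idea of text-derived item embeddings (the actual LLMEmb training recipe is more involved), here is a minimal sketch using `sentence-transformers` as a stand-in encoder and cosine similarity as a naive next-item score.

```python
# Rough sketch of language-model-derived item embeddings for a sequential
# recommender. A sentence-transformer stands in for the LLM encoder; the
# scoring rule below is a naive illustration, not the paper's method.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")

def item_embeddings(item_texts: list[str]) -> np.ndarray:
    """Encode item titles/descriptions into dense, normalized vectors."""
    return encoder.encode(item_texts, normalize_embeddings=True)

def score_next_item(history_vecs: np.ndarray, candidate_vecs: np.ndarray) -> np.ndarray:
    """Naive scoring: cosine similarity of the mean history vector to candidates."""
    user_vec = history_vecs.mean(axis=0)
    user_vec /= np.linalg.norm(user_vec) + 1e-8
    return candidate_vecs @ user_vec
```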
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
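As a schematic of the bi-tuning idea only (omitting the M-Former and Laser's actual architecture), the sketch below wraps a frozen stand-in backbone with a trainable prefix and a trainable suffix projection into item space; all dimensions are invented for illustration.

```python
# Schematic of prefix/suffix bi-tuning around a frozen backbone (Laser-style).
# A small TransformerEncoder stands in for the frozen LLM; this is a
# simplification for illustration, not the paper's architecture.
import torch
import torch.nn as nn

class BiTunedRecommender(nn.Module):
    def __init__(self, vocab_size: int, n_items: int, d: int = 128, prefix_len: int = 8):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d)
        self.prefix = nn.Parameter(torch.randn(prefix_len, d) * 0.02)  # trainable prefix
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d, nhead=4, batch_first=True), num_layers=2
        )
        for p in self.backbone.parameters():  # freeze the stand-in "LLM"
            p.requires_grad = False
        self.suffix = nn.Linear(d, n_items)  # trainable suffix: language -> item space

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.token_emb(token_ids)                  # (B, T, d)
        pre = self.prefix.expand(x.size(0), -1, -1)    # (B, P, d)
        h = self.backbone(torch.cat([pre, x], dim=1))  # frozen forward pass
        return self.suffix(h[:, -1])                   # item logits from last position
```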
arXiv Detail & Related papers (2024-09-03T04:55:03Z)
- All Roads Lead to Rome: Unveiling the Trajectory of Recommender Systems Across the LLM Era [63.649070507815715]
We aim to integrate recommender systems into a broader picture, and pave the way for more comprehensive solutions for future research.
We identify two evolution paths of modern recommender systems -- via list-wise recommendation and conversational recommendation.
We point out that recommendations become more informative while the user's acquisition cost decreases.
arXiv Detail & Related papers (2024-07-14T05:02:21Z)
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLM models for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, we propose LANE, an effective strategy that aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z)
- LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendations and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z)
- Exploring and Benchmarking the Planning Capabilities of Large Language Models [57.23454975238014]
This work lays the foundations for improving the planning capabilities of large language models (LLMs).
We construct a comprehensive benchmark suite encompassing both classical planning benchmarks and natural language scenarios.
We investigate the use of many-shot in-context learning to enhance LLM planning, exploring the relationship between increased context length and improved planning performance.
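Many-shot in-context learning here amounts to packing a large number of solved planning examples into the prompt before the new task; a minimal sketch, with invented exemplar formatting, might look like this.

```python
# Sketch of many-shot in-context planning: concatenate solved exemplars
# and append the new task. The "Problem:/Plan:" formatting is an assumption.
def complete(prompt: str) -> str:  # placeholder LLM client (assumption)
    raise NotImplementedError

def many_shot_plan(exemplars: list[tuple[str, str]], task: str) -> str:
    """exemplars: (problem, plan) pairs already solved; task: the new problem."""
    shots = "\n\n".join(f"Problem: {p}\nPlan: {plan}" for p, plan in exemplars)
    return complete(f"{shots}\n\nProblem: {task}\nPlan:")
```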
arXiv Detail & Related papers (2024-06-18T22:57:06Z)
- Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using LLM Optimizers [29.739736497044664]
We present a training-free approach for optimizing generative recommenders.
We propose a generative explore-exploit method that can not only exploit generated items with high engagement, but also actively explore and discover hidden population preferences.
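One way to picture such a loop (the details below, including the mixing ratio and prompt, are our assumptions rather than the paper's method): keep the generated items with the best observed engagement and ask the LLM for fresh candidates unlike them.

```python
# Hedged sketch of a training-free generative explore-exploit step.
def complete(prompt: str) -> str:  # placeholder LLM client (assumption)
    raise NotImplementedError

def explore_exploit_step(engagement: dict[str, float], n: int = 10,
                         explore_frac: float = 0.3) -> list[str]:
    n_explore = int(n * explore_frac)
    # Exploit: top previously generated items by observed engagement.
    exploit = sorted(engagement, key=engagement.get, reverse=True)[: n - n_explore]
    # Explore: ask the LLM for novel items unlike the current winners.
    reply = complete(
        f"Items users already engage with: {', '.join(exploit)}.\n"
        f"Suggest {n_explore} new, different items, one per line."
    )
    explore = [line.strip() for line in reply.splitlines() if line.strip()][:n_explore]
    return exploit + explore
```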
arXiv Detail & Related papers (2024-06-07T20:41:59Z)
- LLMs for User Interest Exploration in Large-scale Recommendation Systems [16.954465544444766]
Traditional recommendation systems are subject to a strong feedback loop by learning from and reinforcing past user-item interactions.
We introduce a hybrid hierarchical framework combining Large Language Models (LLMs) and classic recommendation models for user interest exploration.
We showcase the efficacy of this approach on an industrial-scale commercial platform serving billions of users.
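A hedged sketch of the hierarchical split: the LLM proposes coarse, novel interest categories, and a classic ID-based recommender fills each with concrete items. The prompt and the `classic_rank` interface are illustrative assumptions.

```python
# Loose sketch of the hybrid hierarchy: LLM for coarse novel interests,
# a classic recommender for concrete items within each interest.
def complete(prompt: str) -> str:  # placeholder LLM client (assumption)
    raise NotImplementedError

def classic_rank(interest: str, user_id: str, k: int) -> list[str]:
    raise NotImplementedError("plug in an ID-based recommender here")

def explore_interests(user_history: list[str], user_id: str, k_per_interest: int = 3):
    reply = complete(
        f"A user liked: {', '.join(user_history)}.\n"
        "Name 3 adjacent interest categories they have NOT explored yet, one per line."
    )
    novel = [line.strip() for line in reply.splitlines() if line.strip()][:3]
    # The classic model fills each LLM-proposed category with concrete items.
    return {interest: classic_rank(interest, user_id, k_per_interest) for interest in novel}
```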
arXiv Detail & Related papers (2024-05-25T21:57:36Z)
- Improve Temporal Awareness of LLMs for Sequential Recommendation [61.723928508200196]
Large language models (LLMs) have demonstrated impressive zero-shot abilities in solving a wide range of general-purpose tasks.
However, LLMs fall short in recognizing and utilizing temporal information, resulting in poor performance on tasks that require an understanding of sequential data.
We propose three prompting strategies to exploit temporal information within historical interactions for LLM-based sequential recommendation.
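As one hypothetical example of such a strategy (not one of the paper's three exact templates), a prompt can surface explicit timestamps so the LLM can reason about how taste drifts over time:

```python
# Sketch of a temporal prompting strategy: expose timestamps in the
# interaction history. The wording is an assumption for illustration.
from datetime import datetime

def complete(prompt: str) -> str:  # placeholder LLM client (assumption)
    raise NotImplementedError

def temporal_prompt(interactions: list[tuple[str, datetime]], k: int = 5) -> str:
    lines = [f"- {ts:%Y-%m-%d}: watched '{item}'" for item, ts in interactions]
    return complete(
        "Here is a user's interaction history in chronological order:\n"
        + "\n".join(lines)
        + f"\nNoting how their taste drifts over time, recommend the next {k} items."
    )
```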
arXiv Detail & Related papers (2024-05-05T00:21:26Z)
- Empowering Few-Shot Recommender Systems with Large Language Models -- Enhanced Representations [0.0]
Large language models (LLMs) offer novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems.
Our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs' involvement in recommender systems.
arXiv Detail & Related papers (2023-12-21T03:50:09Z)
- DRDT: Dynamic Reflection with Divergent Thinking for LLM-based Sequential Recommendation [53.62727171363384]
We introduce a novel reasoning principle: Dynamic Reflection with Divergent Thinking.
Our methodology is dynamic reflection, a process that emulates human learning through probing, critiquing, and reflecting.
We evaluate our approach on three datasets using six pre-trained LLMs.
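A rough sketch of a probe-critique-reflect loop in this spirit (the round count and prompts are our assumptions, not the paper's procedure):

```python
# Hedged sketch of iterative reflection: recommend, critique, revise.
def complete(prompt: str) -> str:  # placeholder LLM client (assumption)
    raise NotImplementedError

def reflect_and_recommend(history: list[str], rounds: int = 2) -> str:
    rec = complete(f"User history: {', '.join(history)}. Recommend the next item.")
    for _ in range(rounds):
        critique = complete(
            f"History: {', '.join(history)}. Proposed recommendation: {rec}.\n"
            "Probe this choice: what user preferences does it ignore or misread?"
        )
        rec = complete(
            f"History: {', '.join(history)}. Previous suggestion: {rec}.\n"
            f"Critique: {critique}\nGive a revised recommendation."
        )
    return rec
```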
arXiv Detail & Related papers (2023-12-18T16:41:22Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- Fisher-Weighted Merge of Contrastive Learning Models in Sequential Recommendation [0.0]
We are the first to apply the Fisher-Merging method to Sequential Recommendation, addressing and resolving practical challenges associated with it.
We demonstrate the effectiveness of our proposed methods, highlighting their potential to advance the state-of-the-art in sequential learning and recommendation systems.
arXiv Detail & Related papers (2023-07-05T05:58:56Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
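A minimal sketch of what one such instruction template might look like (this template is illustrative, not one of the paper's 39):

```python
# Illustrative instruction template covering the four fields named above:
# preference, intention, task form, and context.
TEMPLATE = (
    "The user prefers {preference}. They are currently looking for {intention}. "
    "Task: {task}. Context: {context}. Respond with a ranked list of items."
)

def build_instruction(preference: str, intention: str, task: str, context: str) -> str:
    return TEMPLATE.format(
        preference=preference, intention=intention, task=task, context=context
    )

# Example instance, as might be generated for instruction tuning:
sample = build_instruction(
    preference="indie folk and acoustic covers",
    intention="new artists similar to their favorites",
    task="sequential recommendation",
    context="evening listening session on mobile",
)
```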
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- Influential Recommender System [12.765277278599541]
We present Influential Recommender System (IRS), a new recommendation paradigm that aims to proactively lead a user to like a given objective item.
IRS progressively recommends to the user a sequence of carefully selected items, called an influence path.
We show that IRN significantly outperforms the baseline recommenders and demonstrates its capability of influencing users' interests.
arXiv Detail & Related papers (2022-11-18T03:04:45Z)
- Sequential Information Design: Markov Persuasion Process and Its Efficient Reinforcement Learning [156.5667417159582]
This paper proposes a novel model of sequential information design, namely Markov persuasion processes (MPPs).
Planning in MPPs faces the unique challenge of finding a signaling policy that is simultaneously persuasive to the myopic receivers and induces the optimal long-term cumulative utility for the sender.
We design a provably efficient no-regret learning algorithm, the Optimism-Pessimism Principle for Persuasion Process (OP4), which features a novel combination of both optimism and pessimism principles.
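Schematically, and in our paraphrase rather than the paper's notation, the sender seeks a signaling policy that maximizes its cumulative utility subject to each myopic receiver best-responding to the posterior induced by the realized signal:

```latex
% Schematic MPP objective (a paraphrase, not the paper's exact formulation):
% the sender commits to a signaling policy \pi; the myopic receiver at step t
% best-responds to the posterior induced by the realized signal \sigma_t.
\max_{\pi} \; \mathbb{E}\!\left[ \sum_{t=0}^{T} u^{\mathrm{s}}(s_t, a_t) \right]
\quad \text{s.t.} \quad
a_t \in \arg\max_{a} \; \mathbb{E}\!\left[ u^{\mathrm{r}}(s_t, a) \,\middle|\, \sigma_t \right]
\quad \text{for all } t.
```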
arXiv Detail & Related papers (2022-02-22T05:41:43Z)