LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation
- URL: http://arxiv.org/abs/2407.02833v1
- Date: Wed, 3 Jul 2024 06:20:31 GMT
- Title: LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation
- Authors: Hongke Zhao, Songming Zheng, Likang Wu, Bowen Yu, Jing Wang,
- Abstract summary: Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLMs for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, our proposed strategy LANE aligns LLMs with online recommendation systems without additional LLM tuning.
- Score: 15.972926854420619
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The explainability of recommendation systems is crucial for enhancing user trust and satisfaction. Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation. However, in existing related studies, fine-tuning LLMs for recommendation tasks incurs high computational costs and alignment issues with existing systems, limiting the application potential of proven proprietary/closed-source LLMs, such as GPT-4. In this work, our proposed strategy LANE aligns LLMs with online recommendation systems without additional LLM tuning, reducing costs and improving explainability. This approach addresses key challenges in integrating language models with recommendation systems while fully utilizing the capabilities of powerful proprietary models. Specifically, our strategy operates through several key components: semantic embedding, user multi-preference extraction using zero-shot prompting, semantic alignment, and explainable recommendation generation using Chain of Thought (CoT) prompting. By embedding item titles instead of IDs and utilizing multi-head attention mechanisms, our approach aligns the semantic features of user preferences with those of candidate items, ensuring coherent and user-aligned recommendations. Extensive experimental results, including performance comparisons, questionnaire voting, and visualization cases, show that our method not only preserves recommendation performance but also provides easy-to-understand and reasonable recommendation logic.
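The semantic-alignment step in the abstract (multi-head attention matching user preference embeddings against candidate item-title embeddings) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: the embedding dimension, head count, and the mean-pooled scoring are hypothetical choices for clarity, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_alignment(prefs, items, num_heads=4):
    """Score candidate items against extracted user preferences via
    multi-head scaled dot-product attention.

    prefs: (P, d) embeddings of P extracted user preferences (queries).
    items: (I, d) embeddings of I candidate item titles (keys).
    d must be divisible by num_heads.
    Returns a length-I vector of relevance scores summing to 1.
    """
    P, d = prefs.shape
    I, _ = items.shape
    dh = d // num_heads
    # split embeddings into heads: (num_heads, P, dh) and (num_heads, I, dh)
    q = prefs.reshape(P, num_heads, dh).transpose(1, 0, 2)
    k = items.reshape(I, num_heads, dh).transpose(1, 0, 2)
    # scaled dot-product attention weights over items: (num_heads, P, I)
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(dh))
    # average over heads and preferences -> one score per candidate item
    return attn.mean(axis=(0, 1))

rng = np.random.default_rng(0)
prefs = rng.normal(size=(3, 16))   # 3 extracted user preferences
items = rng.normal(size=(5, 16))   # 5 candidate item-title embeddings
scores = multi_head_alignment(prefs, items)
ranking = np.argsort(-scores)      # candidates ranked by alignment score
```

In practice the embeddings would come from a text encoder applied to item titles and to the zero-shot-extracted preference phrases; the ranked candidates (and the attention weights themselves) could then feed the CoT prompt that generates the explanation.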
Related papers
- MMSRARec: Summarization and Retrieval Augmented Sequential Recommendation Based on Multimodal Large Language Model [18.920729109005435]
This paper proposes MultiModal Summarization-and-Retrieval-Augmented Sequential Recommendation. We first employ an MLLM to summarize items into concise keywords and fine-tune the model using rewards that incorporate summary length, information loss, and reconstruction difficulty. Inspired by retrieval-augmented generation, we then transform collaborative signals into corresponding keywords and integrate them as supplementary context.
arXiv Detail & Related papers (2025-12-24T03:44:25Z) - Enhance Large Language Models as Recommendation Systems with Collaborative Filtering [9.697791766151958]
This study proposes critique-based Large Language Models (LLMs) as recommendation systems (Critic-LLM-RS). Critic-LLM-RS implements collaborative filtering for recommendations by learning from the interactions between many users and items. Experiments have verified the effectiveness of Critic-LLM-RS on real datasets.
arXiv Detail & Related papers (2025-10-17T13:35:14Z) - CARE: Contextual Adaptation of Recommenders for LLM-based Conversational Recommendation [66.51329063956538]
We introduce the CARE (Contextual Adaptation of Recommenders) framework. CARE customizes large language models for CRS tasks and synergizes them with external recommendation systems. Our results demonstrate that incorporating external recommender systems with entity-level information significantly enhances the recommendation accuracy of CRS.
arXiv Detail & Related papers (2025-08-19T14:53:30Z) - Towards Comprehensible Recommendation with Large Language Model Fine-tuning [41.218487308635126]
We propose a novel Content Understanding from a Collaborative Perspective framework (CURec) for recommendation systems. CURec generates collaborative-aligned content features for more comprehensive recommendations. Experiments on public benchmarks demonstrate the superiority of CURec over existing methods.
arXiv Detail & Related papers (2025-08-11T03:55:31Z) - A Comprehensive Review on Harnessing Large Language Models to Overcome Recommender System Challenges [5.436611859202691]
Large Language Models (LLMs) can be leveraged to tackle key challenges in recommender systems. LLMs enhance personalization, semantic alignment, and interpretability without requiring extensive task-specific supervision. LLMs enable zero- and few-shot reasoning, allowing systems to operate effectively in cold-start and long-tail scenarios.
arXiv Detail & Related papers (2025-07-17T06:03:57Z) - $\text{R}^2\text{ec}$: Towards Large Recommender Models with Reasoning [50.291998724376654]
We propose $\text{R}^2\text{ec}$, a unified large recommender model with intrinsic reasoning capabilities. RecPO is a corresponding reinforcement learning framework that optimizes both the reasoning and recommendation capabilities of $\text{R}^2\text{ec}$ simultaneously in a single policy update. Experiments on three datasets with various baselines verify the effectiveness of $\text{R}^2\text{ec}$, showing relative improvements of 68.67% in Hit@5 and 45.21% in NDCG@20.
arXiv Detail & Related papers (2025-05-22T17:55:43Z) - Rethinking LLM-Based Recommendations: A Personalized Query-Driven Parallel Integration [22.650609670923732]
We propose a parallel recommendation framework that decouples large language models from candidate pre-selection. Our framework connects LLMs and recommendation models in a parallel manner, allowing each component to independently utilize its strengths.
arXiv Detail & Related papers (2025-04-16T09:17:45Z) - Large Language Models Are Universal Recommendation Learners [27.16327640562273]
Large language models (LLMs) can function as universal recommendation learners.
We introduce a multimodal fusion module for item representation and a sequence-in-set-out approach for efficient candidate generation.
Our analysis reveals that recommendation outcomes are highly sensitive to text input.
arXiv Detail & Related papers (2025-02-05T09:56:52Z) - Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal.
We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z) - Semantic Convergence: Harmonizing Recommender Systems via Two-Stage Alignment and Behavioral Semantic Tokenization [10.47505806629852]
Large language models (LLMs) are adept at discerning profound user interests from historical behaviors.
We propose a novel framework that harmoniously merges traditional recommendation models with the prowess of LLMs.
We design a series of specialized supervised learning tasks aimed at aligning collaborative signals with the subtleties of natural language semantics.
arXiv Detail & Related papers (2024-12-18T12:07:58Z) - RLRF4Rec: Reinforcement Learning from Recsys Feedback for Enhanced Recommendation Reranking [33.54698201942643]
Large Language Models (LLMs) have demonstrated remarkable performance across diverse domains.
This paper introduces RLRF4Rec, a novel framework integrating Reinforcement Learning from Recsys Feedback for Enhanced Recommendation Reranking.
arXiv Detail & Related papers (2024-10-08T11:42:37Z) - Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Model (LLM) has the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z) - Laser: Parameter-Efficient LLM Bi-Tuning for Sequential Recommendation with Collaborative Information [76.62949982303532]
We propose a parameter-efficient Large Language Model Bi-Tuning framework for sequential recommendation with collaborative information (Laser).
In our Laser, the prefix is utilized to incorporate user-item collaborative information and adapt the LLM to the recommendation task, while the suffix converts the output embeddings of the LLM from the language space to the recommendation space for the follow-up item recommendation.
M-Former is a lightweight MoE-based querying transformer that uses a set of query experts to integrate diverse user-specific collaborative information encoded by frozen ID-based sequential recommender systems.
arXiv Detail & Related papers (2024-09-03T04:55:03Z) - LLM4MSR: An LLM-Enhanced Paradigm for Multi-Scenario Recommendation [52.55639178180821]
The study of multi-scenario recommendation (MSR) has attracted much attention; MSR uses data from all scenarios to simultaneously improve their recommendation performance. Existing methods tend to integrate insufficient scenario knowledge and neglect learning personalized cross-scenario preferences, leading to sub-optimal performance. We propose a large language model (LLM)-enhanced paradigm, LLM4MSR, to fill these gaps.
arXiv Detail & Related papers (2024-06-18T11:59:36Z) - XRec: Large Language Models for Explainable Recommendation [5.615321475217167]
We introduce a model-agnostic framework called XRec, which enables Large Language Models to provide explanations for user behaviors in recommender systems.
Our experiments demonstrate XRec's ability to generate comprehensive and meaningful explanations that outperform baseline approaches in explainable recommender systems.
arXiv Detail & Related papers (2024-06-04T14:55:14Z) - Multi-Reference Preference Optimization for Large Language Models [56.84730239046117]
We introduce a novel closed-form formulation for direct preference optimization using multiple reference models.
The resulting algorithm, Multi-Reference Preference Optimization (MRPO), leverages broader prior knowledge from diverse reference models.
Our experiments demonstrate that LLMs finetuned with MRPO generalize better in various preference data, regardless of data scarcity or abundance.
arXiv Detail & Related papers (2024-05-26T00:29:04Z) - Empowering Few-Shot Recommender Systems with Large Language Models -- Enhanced Representations [0.0]
Large language models (LLMs) offer novel insights into tackling the few-shot scenarios encountered by explicit feedback-based recommender systems.
Our study can inspire researchers to delve deeper into the multifaceted dimensions of LLMs' involvement in recommender systems.
arXiv Detail & Related papers (2023-12-21T03:50:09Z) - LLMRec: Benchmarking Large Language Models on Recommendation Task [54.48899723591296]
The application of Large Language Models (LLMs) in the recommendation domain has not been thoroughly investigated.
We benchmark several popular off-the-shelf LLMs on five recommendation tasks, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization.
The benchmark results indicate that LLMs displayed only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.
arXiv Detail & Related papers (2023-08-23T16:32:54Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences.