Learning to Shop Like Humans: A Review-driven Retrieval-Augmented Recommendation Framework with LLMs
- URL: http://arxiv.org/abs/2509.00698v1
- Date: Sun, 31 Aug 2025 04:37:43 GMT
- Title: Learning to Shop Like Humans: A Review-driven Retrieval-Augmented Recommendation Framework with LLMs
- Authors: Kaiwen Wei, Jinpeng Gao, Jiang Zhong, Yuming Yang, Fengmao Lv, Zhenyang Li
- Abstract summary: RevBrowse is a review-driven recommendation framework inspired by the "browse-then-decide" decision process. RevBrowse integrates user reviews into the LLM-based reranking process to enhance its ability to distinguish between candidate items. PrefRAG is a retrieval-augmented module that disentangles user and item representations into structured forms.
- Score: 30.748667156183004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large language models (LLMs) have shown strong potential in recommendation tasks due to their strengths in language understanding, reasoning, and knowledge integration. These capabilities are especially beneficial for review-based recommendation, which relies on semantically rich user-generated texts to reveal fine-grained user preferences and item attributes. However, effectively incorporating reviews into LLM-based recommendation remains challenging due to (1) the inefficiency of dynamically utilizing user reviews under LLMs' constrained context windows, and (2) the lack of effective mechanisms to prioritize the reviews most relevant to the user's current decision context. To address these challenges, we propose RevBrowse, a review-driven recommendation framework inspired by the "browse-then-decide" decision process commonly observed in online user behavior. RevBrowse integrates user reviews into the LLM-based reranking process to enhance its ability to distinguish between candidate items. To improve the relevance and efficiency of review usage, we introduce PrefRAG, a retrieval-augmented module that disentangles user and item representations into structured forms and adaptively retrieves preference-relevant content conditioned on the target item. Extensive experiments on four Amazon review datasets demonstrate that RevBrowse achieves consistent and significant improvements over strong baselines, highlighting its generalizability and effectiveness in modeling dynamic user preferences. Furthermore, since the retrieval-augmented process is transparent, RevBrowse offers a certain level of interpretability by making visible which reviews influence the final recommendation.
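The retrieve-then-rerank idea described in the abstract (select the user reviews most relevant to the target item under a context budget, then hand them to an LLM reranker) can be sketched as follows. This is a minimal, hypothetical illustration: it uses bag-of-words cosine similarity as a stand-in for the retriever, and the function names (`retrieve_reviews`, `build_rerank_prompt`) and the token budget are assumptions for exposition, not the paper's actual implementation.

```python
from collections import Counter
import math

def tf_vector(text):
    # Bag-of-words term frequencies; a simple stand-in for the learned
    # representations that PrefRAG would use.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve_reviews(user_reviews, target_item_text, k=2, token_budget=50):
    # Score each historical review against the target item description and
    # keep the top-k reviews that fit within the LLM's context budget.
    target = tf_vector(target_item_text)
    ranked = sorted(user_reviews,
                    key=lambda r: cosine(tf_vector(r), target),
                    reverse=True)
    selected, used = [], 0
    for review in ranked[:k]:
        n_tokens = len(review.split())
        if used + n_tokens > token_budget:
            break
        selected.append(review)
        used += n_tokens
    return selected

def build_rerank_prompt(selected_reviews, candidates):
    # Assemble the evidence and candidate list into a reranking prompt
    # that would be sent to the LLM.
    evidence = "\n".join(f"- {r}" for r in selected_reviews)
    items = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (f"User preference evidence:\n{evidence}\n\n"
            f"Rank these candidates:\n{items}")
```

In this sketch, conditioning retrieval on the target item is what keeps the prompt short: only reviews that bear on the current decision survive the budget, which is the efficiency/relevance trade-off the abstract describes.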
Related papers
- Tree of Preferences for Diversified Recommendation [54.183647833064136]
We study diversified recommendation from a data-bias perspective. Inspired by the outstanding performance of large language models (LLMs) in zero-shot inference leveraging world knowledge, we propose a novel approach.
arXiv Detail & Related papers (2025-12-24T04:13:17Z) - Do Reviews Matter for Recommendations in the Era of Large Language Models? [8.772803183525284]
With the advent of large language models (LLMs), the landscape of recommender systems is undergoing a significant transformation. Traditionally, user reviews have served as a critical source of rich, contextual information for enhancing recommendation quality. This paper provides a systematic investigation of the evolving role of text reviews in recommendation by comparing deep learning methods and LLM approaches.
arXiv Detail & Related papers (2025-12-15T04:46:48Z) - MGFRec: Towards Reinforced Reasoning Recommendation with Multiple Groundings and Feedback [62.59727494001646]
We propose performing multiple rounds of grounding during inference to help the LLM better understand the actual item space. Comprehensive experiments conducted on three Amazon review datasets demonstrate the effectiveness of incorporating multiple groundings and feedback.
arXiv Detail & Related papers (2025-10-27T00:41:07Z) - Retrieval-Augmented Recommendation Explanation Generation with Hierarchical Aggregation [5.656477996187559]
Explainable Recommender System (ExRec) provides transparency to the recommendation process, increasing users' trust and boosting the operation of online services. Existing LLM-based ExRec models suffer from profile deviation and high retrieval overhead, hindering their deployment. We propose Retrieval-Augmented Recommendation Explanation Generation with Hierarchical Aggregation (REXHA).
arXiv Detail & Related papers (2025-07-12T08:15:05Z) - What Makes LLMs Effective Sequential Recommenders? A Study on Preference Intensity and Temporal Context [56.590259941275434]
RecPO is a preference optimization framework for sequential recommendation. It exploits adaptive reward margins based on inferred preference hierarchies and temporal signals. It mirrors key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
arXiv Detail & Related papers (2025-06-02T21:09:29Z) - Multi-agents based User Values Mining for Recommendation [52.26100802380767]
We propose a zero-shot multi-LLM collaborative framework for effective and accurate user value extraction. We apply text summarization techniques to condense item content while preserving essential meaning. To mitigate hallucinations, we introduce two specialized agent roles: evaluators and supervisors.
arXiv Detail & Related papers (2025-05-02T04:01:31Z) - LLM-based User Profile Management for Recommender System [15.854727020186408]
PURE builds and maintains evolving user profiles by systematically extracting and summarizing key information from user reviews. We introduce a continuous sequential recommendation task that reflects real-world scenarios by adding reviews over time and updating predictions incrementally. Our experimental results on Amazon datasets demonstrate that PURE outperforms existing LLM-based methods.
arXiv Detail & Related papers (2025-02-20T13:20:19Z) - Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal. We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z) - Reasoning over User Preferences: Knowledge Graph-Augmented LLMs for Explainable Conversational Recommendations [58.61021630938566]
Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. We propose a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs.
arXiv Detail & Related papers (2024-11-16T11:47:21Z) - MoRE: A Mixture of Reflectors Framework for Large Language Model-Based Sequential Recommendation [16.10791252542592]
Large language models (LLMs) have emerged as a cutting-edge approach in sequential recommendation. We propose MoRE, which introduces three perspective-aware offline reflection processes to address these gaps. MoRE's meta-reflector employs a self-improving strategy and a dynamic selection mechanism to adapt to evolving user preferences.
arXiv Detail & Related papers (2024-09-10T09:58:55Z) - WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback [36.06000681394939]
We introduce WildFeedback, a novel framework that leverages in-situ user feedback during conversations with large language models (LLMs) to create preference datasets automatically. Our experiments demonstrate that LLMs fine-tuned on the WildFeedback dataset exhibit significantly improved alignment with user preferences.
arXiv Detail & Related papers (2024-08-28T05:53:46Z) - Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential for developing more powerful conversational recommender systems (CRSs).
In this paper, we embark on an investigation into the utilization of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose iEvaLM, an interactive LLM-based evaluation approach that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z) - Reward Constrained Interactive Recommendation with Natural Language Feedback [158.8095688415973]
We propose a novel constraint-augmented reinforcement learning (RL) framework to efficiently incorporate user preferences over time.
Specifically, we leverage a discriminator to detect recommendations violating user historical preference.
Our proposed framework is general and is further extended to the task of constrained text generation.
arXiv Detail & Related papers (2020-05-04T16:23:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.