MADREC: A Multi-Aspect Driven LLM Agent for Explainable and Adaptive Recommendation
- URL: http://arxiv.org/abs/2510.13371v1
- Date: Wed, 15 Oct 2025 10:03:29 GMT
- Title: MADREC: A Multi-Aspect Driven LLM Agent for Explainable and Adaptive Recommendation
- Authors: Jiin Park, Misuk Kim
- Abstract summary: Multi-Aspect Driven LLM Agent MADRec is an autonomous recommender that constructs user and item profiles by unsupervised extraction of multi-aspect information from reviews. MADRec generates structured profiles via aspect-category-based summarization and applies Re-Ranking to construct high-density inputs. Experiments across multiple domains show that MADRec outperforms traditional and LLM-based baselines in both precision and explainability.
- Score: 11.430206422495829
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recent attempts to integrate large language models (LLMs) into recommender systems have gained momentum, but most remain limited to simple text generation or static prompt-based inference, failing to capture the complexity of user preferences and real-world interactions. This study proposes the Multi-Aspect Driven LLM Agent MADRec, an autonomous LLM-based recommender that constructs user and item profiles by unsupervised extraction of multi-aspect information from reviews and performs direct recommendation, sequential recommendation, and explanation generation. MADRec generates structured profiles via aspect-category-based summarization and applies Re-Ranking to construct high-density inputs. When the ground-truth item is missing from the output, the Self-Feedback mechanism dynamically adjusts the inference criteria. Experiments across multiple domains show that MADRec outperforms traditional and LLM-based baselines in both precision and explainability, with human evaluation further confirming the persuasiveness of the generated explanations.
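The abstract describes a three-stage loop: aspect-based profile construction, Re-Ranking into a high-density candidate set, and a Self-Feedback step that relaxes the inference criteria when the expected item is filtered out. A minimal sketch of that control flow, with all function names and the keyword-counting profile stub being illustrative assumptions rather than the paper's actual implementation:

```python
from collections import Counter

# Hypothetical sketch of a MADRec-style loop: aspect-based profile
# summarization, re-ranking to a compact candidate list, and Self-Feedback
# that relaxes the filtering criteria when no candidate survives.

def build_profile(reviews):
    """Unsupervised aspect extraction, stubbed here as keyword counting."""
    words = [w.strip(".,").lower() for r in reviews for w in r.split()]
    return Counter(words)

def rerank(candidates, profile, threshold):
    """Keep candidates whose aspect overlap with the profile meets threshold."""
    scored = [(sum(profile[a] for a in aspects), item)
              for item, aspects in candidates.items()]
    return [item for score, item in sorted(scored, reverse=True)
            if score >= threshold]

def recommend(reviews, candidates, threshold=3, max_feedback_rounds=3):
    """Self-Feedback: if re-ranking filters out everything, lower the bar."""
    profile = build_profile(reviews)
    for _ in range(max_feedback_rounds):
        ranked = rerank(candidates, profile, threshold)
        if ranked:           # at least one item passed the criteria
            return ranked
        threshold -= 1       # dynamically relax the inference criteria
    return list(candidates)  # fall back to the unfiltered pool
```

In the paper the profile builder and re-ranker are LLM-driven; the sketch only illustrates how a feedback loop can adjust its own acceptance criteria between rounds.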
Related papers
- Rethinking On-policy Optimization for Query Augmentation [49.87723664806526]
We present the first systematic comparison of prompting-based and RL-based query augmentation across diverse benchmarks. We introduce a novel hybrid method, On-policy Pseudo-document Query Expansion (OPQE), which learns to generate a pseudo-document that maximizes retrieval performance.
arXiv Detail & Related papers (2025-10-20T04:16:28Z)
- AgentDR: Dynamic Recommendation with Implicit Item-Item Relations via LLM-based Agents [42.177723613925146]
We propose a novel LLM-agent framework, AgentDR, which bridges LLM reasoning with scalable recommendation tools. Our approach delegates full-ranking tasks to traditional models while utilizing LLMs to integrate multiple recommendation outputs. We show that our framework achieves superior full-ranking performance, yielding on average a twofold improvement over its underlying tools.
arXiv Detail & Related papers (2025-10-07T05:48:05Z)
- PrLM: Learning Explicit Reasoning for Personalized RAG via Contrastive Reward Optimization [4.624026598342624]
We propose PrLM, a reinforcement learning framework that trains LLMs to explicitly reason over retrieved user profiles. PrLM effectively learns from user responses without requiring annotated reasoning paths. Experiments on three personalized text generation datasets show that PrLM outperforms existing methods.
arXiv Detail & Related papers (2025-08-10T13:37:26Z)
- Retrieval-Augmented Recommendation Explanation Generation with Hierarchical Aggregation [5.656477996187559]
Explainable Recommender System (ExRec) provides transparency to the recommendation process, increasing users' trust and boosting the operation of online services. Existing LLM-based ExRec models suffer from profile deviation and high retrieval overhead, hindering their deployment. We propose Retrieval-Augmented Recommendation Explanation Generation with Hierarchical Aggregation (REXHA).
arXiv Detail & Related papers (2025-07-12T08:15:05Z)
- Hierarchical Interaction Summarization and Contrastive Prompting for Explainable Recommendations [9.082885521130617]
We propose a novel approach combining profile generation via hierarchical interaction summarization (PGHIS) with contrastive prompting for explanation generation (CPEG). Our approach outperforms existing state-of-the-art methods, achieving notable improvements on explainability metrics (e.g., 5% on GPTScore) and text quality.
arXiv Detail & Related papers (2025-07-08T14:45:47Z)
- Enhancing Temporal Sensitivity of Large Language Model for Recommendation with Counterfactual Tuning [8.798364656768657]
We propose CETRec, a counterfactual tuning framework for recommendation. CETRec is grounded in causal inference principles, which allow it to isolate and measure the specific impact of temporal information on recommendation outcomes. Our code is available at https://anonymous.4open.science/r/CETRec-B9CE/.
arXiv Detail & Related papers (2025-07-03T10:11:35Z)
- RALLRec+: Retrieval Augmented Large Language Model Recommendation with Reasoning [22.495874056980824]
We propose Representation learning and Reasoning empowered retrieval-Augmented Large Language model Recommendation (RALLRec+).
arXiv Detail & Related papers (2025-03-26T11:03:34Z)
- LLM-based Bi-level Multi-interest Learning Framework for Sequential Recommendation [54.396000434574454]
We propose a novel multi-interest SR framework combining implicit behavioral and explicit semantic perspectives. It includes two modules: the Implicit Behavioral Interest Module and the Explicit Semantic Interest Module. Experiments on four real-world datasets validate the framework's effectiveness and practicality.
arXiv Detail & Related papers (2024-11-14T13:00:23Z)
- Multi-Reference Preference Optimization for Large Language Models [56.84730239046117]
We introduce a novel closed-form formulation for direct preference optimization using multiple reference models.
The resulting algorithm, Multi-Reference Preference Optimization (MRPO), leverages broader prior knowledge from diverse reference models.
Our experiments demonstrate that LLMs finetuned with MRPO generalize better in various preference data, regardless of data scarcity or abundance.
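The MRPO summary describes direct preference optimization against multiple reference models. A rough sketch of what such a loss could look like, assuming (as an illustration, not the paper's actual closed form) that the reference log-probabilities are pooled by simple averaging before applying the standard DPO objective:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mrpo_loss(logp_w, logp_l, ref_logps_w, ref_logps_l, beta=0.1):
    """Multi-reference DPO-style loss for one (chosen, rejected) pair.

    logp_w / logp_l: policy log-probs of the chosen / rejected response.
    ref_logps_w / ref_logps_l: log-probs under each reference model.
    The averaging aggregation here is a hypothetical stand-in for MRPO's
    actual closed-form formulation.
    """
    ref_w = sum(ref_logps_w) / len(ref_logps_w)   # pooled reference, chosen
    ref_l = sum(ref_logps_l) / len(ref_logps_l)   # pooled reference, rejected
    margin = (logp_w - ref_w) - (logp_l - ref_l)  # implicit reward margin
    return -math.log(sigmoid(beta * margin))      # standard DPO objective
```

With a single reference model this reduces to ordinary DPO; the intended benefit of multiple references is that the pooled prior is less sensitive to any one reference model's biases.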
arXiv Detail & Related papers (2024-05-26T00:29:04Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - ReEval: Automatic Hallucination Evaluation for Retrieval-Augmented Large Language Models via Transferable Adversarial Attacks [91.55895047448249]
This paper presents ReEval, an LLM-based framework using prompt chaining to perturb the original evidence for generating new test cases.
We implement ReEval using ChatGPT and evaluate the resulting variants of two popular open-domain QA datasets.
Our generated data is human-readable and useful to trigger hallucination in large language models.
arXiv Detail & Related papers (2023-10-19T06:37:32Z) - On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is a user simulator in recommendation based on Large Language Models.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.