Effectiveness of LLMs in Temporal User Profiling for Recommendation
- URL: http://arxiv.org/abs/2511.00176v1
- Date: Fri, 31 Oct 2025 18:28:40 GMT
- Title: Effectiveness of LLMs in Temporal User Profiling for Recommendation
- Authors: Milad Sabouri, Masoud Mansoury, Kun Lin, Bamshad Mobasher
- Abstract summary: This paper examines the capability of leveraging Large Language Models (LLMs) to capture temporal dynamics. Our observations suggest that while LLMs tend to improve recommendation quality in domains with more active user engagement, their benefits appear less pronounced in sparser environments.
- Score: 2.7543979996398513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effectively modeling the dynamic nature of user preferences is crucial for enhancing recommendation accuracy and fostering transparency in recommender systems. Traditional user profiling often overlooks the distinction between transitory short-term interests and stable long-term preferences. This paper examines the capability of leveraging Large Language Models (LLMs) to capture these temporal dynamics, generating richer user representations through distinct short-term and long-term textual summaries of interaction histories. Our observations suggest that while LLMs tend to improve recommendation quality in domains with more active user engagement, their benefits appear less pronounced in sparser environments. This disparity likely stems from the varying distinguishability of short-term and long-term preferences across domains; the approach shows greater utility where these temporal interests are more clearly separable (e.g., Movies & TV) compared to domains with more stable user profiles (e.g., Video Games). This highlights a critical trade-off between enhanced performance and computational costs, suggesting context-dependent LLM application. Beyond predictive capability, this LLM-driven approach inherently provides an intrinsic potential for interpretability through its natural language profiles and attention weights. This work contributes insights into the practical capability and inherent interpretability of LLM-driven temporal user profiling, outlining new research directions for developing adaptive and transparent recommender systems.
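A minimal sketch of the profiling idea the abstract describes, under stated assumptions: the function names (`split_history`, `attention_fuse`), the fixed recent-window split, and the softmax fusion are illustrative stand-ins, not the paper's actual method. The history is split into recent and older slices (which an LLM would summarize into short-term and long-term textual profiles), and the two resulting embeddings are fused with attention weights scored against a candidate item; the weights themselves double as the interpretability signal the abstract mentions.

```python
import numpy as np

def split_history(interactions, recent_k=5):
    """Partition a time-ordered interaction list into a recent (short-term)
    slice and an older (long-term) slice."""
    return interactions[-recent_k:], interactions[:-recent_k]

def attention_fuse(short_emb, long_emb, item_emb):
    """Softmax-attend over the two temporal profile embeddings,
    scored against a candidate item embedding."""
    scores = np.array([short_emb @ item_emb, long_emb @ item_emb])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    user_emb = weights[0] * short_emb + weights[1] * long_emb
    return user_emb, weights  # weights expose which horizon drove the score

recent, older = split_history(["m1", "m2", "m3", "m4", "m5", "m6", "m7"])
rng = np.random.default_rng(0)
# Stand-ins for embeddings of the LLM short-term summary, long-term summary,
# and a candidate item.
s, l, i = rng.normal(size=(3, 8))
user_emb, w = attention_fuse(s, l, i)
```

In a real pipeline, `s` and `l` would come from embedding LLM-generated natural-language summaries of `recent` and `older` rather than random vectors.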
Related papers
- Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions [50.70965714314064]
Large Language Models (LLMs) are increasingly serving as personal assistants, where users share complex and diverse preferences over extended interactions. This work proposes RealPref, a benchmark for evaluating realistic preference-following in personalized user-LLM interactions.
arXiv Detail & Related papers (2026-03-04T15:42:43Z)
- ALPBench: A Benchmark for Attribution-level Long-term Personal Behavior Understanding [53.88804678012327]
ALPBench is a benchmark for attribution-level long-term personal behavior understanding. It predicts user-interested attribute combinations, enabling ground-truth evaluation, and models preferences from long-term historical behaviors rather than users' explicitly expressed requests.
arXiv Detail & Related papers (2026-02-03T03:32:16Z)
- RecNet: Self-Evolving Preference Propagation for Agentic Recommender Systems [109.9061591263748]
RecNet is a self-evolving preference propagation framework for recommender systems. It proactively propagates real-time preference updates across related users and items. In the backward phase, a feedback-driven propagation optimization mechanism simulates a multi-agent reinforcement learning framework.
arXiv Detail & Related papers (2026-01-29T12:14:31Z)
- LLM-Enhanced Reinforcement Learning for Long-Term User Satisfaction in Interactive Recommendation [3.247395557141079]
We propose LLM-Enhanced Reinforcement Learning (LERL), a novel hierarchical recommendation framework. LERL consists of a high-level LLM-based planner that selects semantically diverse content categories and a low-level RL policy that recommends personalized items. LERL significantly improves long-term user satisfaction compared with state-of-the-art baselines.
arXiv Detail & Related papers (2026-01-27T13:22:30Z)
- Beyond Naïve Prompting: Strategies for Improved Zero-shot Context-aided Forecasting with LLMs [57.82819770709032]
Large language models (LLMs) can be effective context-aided forecasters via naïve direct prompting. ReDP improves interpretability by eliciting explicit reasoning traces, allowing us to assess the model's reasoning over the context. CorDP leverages LLMs solely to refine existing forecasts with context, enhancing their applicability in real-world forecasting pipelines. IC-DP embeds historical examples of context-aided forecasting tasks in the prompt, substantially improving accuracy even for the largest models.
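A rough illustration of the IC-DP idea from this summary: worked (context, history, forecast) examples are placed ahead of the unanswered query in the prompt. The field names and formatting below are hypothetical, not the paper's actual template.

```python
def build_icdp_prompt(examples, context, history):
    """Assemble an in-context direct prompt: prior (context, history,
    forecast) triples precede the query, which is left for the LLM
    to complete."""
    blocks = [
        f"Context: {ex['context']}\nHistory: {ex['history']}\nForecast: {ex['forecast']}"
        for ex in examples
    ]
    blocks.append(f"Context: {context}\nHistory: {history}\nForecast:")
    return "\n\n".join(blocks)

prompt = build_icdp_prompt(
    examples=[{"context": "Holiday week", "history": "10, 12, 15", "forecast": "18"}],
    context="Regular week",
    history="9, 9, 10",
)
```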
arXiv Detail & Related papers (2025-08-13T16:02:55Z)
- Using LLMs to Capture Users' Temporal Context for Recommendation [3.719862246745416]
This paper presents an assessment of Large Language Models (LLMs) for generating semantically rich, time-aware user profiles. It does not propose a novel end-to-end recommendation architecture; the core contribution is a systematic investigation into the degree of LLM effectiveness. The evaluation across the Movies & TV and Video Games domains suggests that while LLM-generated profiles offer semantic depth and temporal structure, their effectiveness for context-aware recommendations is notably contingent on the richness of user interaction histories.
arXiv Detail & Related papers (2025-08-11T22:48:31Z)
- Temporal User Profiling with LLMs: Balancing Short-Term and Long-Term Preferences for Recommendations [3.719862246745416]
We propose LLM-TUP, a novel method for user profiling that explicitly models short-term and long-term preferences. LLM-TUP achieves substantial improvements over several baselines.
arXiv Detail & Related papers (2025-08-11T20:28:24Z)
- Enhancing Temporal Sensitivity of Large Language Model for Recommendation with Counterfactual Tuning [8.798364656768657]
We propose CETRec, a counterfactual tuning framework for recommendation. CETRec is grounded in causal inference principles, which allow it to isolate and measure the specific impact of temporal information on recommendation outcomes. Our code is available at https://anonymous.4open.science/r/CETRec-B9CE/.
arXiv Detail & Related papers (2025-07-03T10:11:35Z)
- What Makes LLMs Effective Sequential Recommenders? A Study on Preference Intensity and Temporal Context [56.590259941275434]
RecPO is a preference optimization framework for sequential recommendation. It exploits adaptive reward margins based on inferred preference hierarchies and temporal signals, mirroring key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
arXiv Detail & Related papers (2025-06-02T21:09:29Z)
- Towards Explainable Temporal User Profiling with LLMs [3.719862246745416]
We leverage large language models (LLMs) to generate natural language summaries of users' interaction histories. Our framework not only models temporal user preferences but also produces natural language profiles that can be used to explain recommendations in an interpretable manner.
arXiv Detail & Related papers (2025-05-01T22:02:46Z)
- Reasoning over User Preferences: Knowledge Graph-Augmented LLMs for Explainable Conversational Recommendations [58.61021630938566]
Conversational Recommender Systems (CRSs) aim to provide personalized recommendations by capturing user preferences through interactive dialogues. Current CRSs often leverage knowledge graphs (KGs) or language models to extract and represent user preferences as latent vectors, which limits their explainability. We propose a plug-and-play framework that synergizes LLMs and KGs to reason over user preferences, enhancing the performance and explainability of existing CRSs.
arXiv Detail & Related papers (2024-11-16T11:47:21Z)
- Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts [95.09994361995389]
Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts.
RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during the training process.
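A DPO-style sketch of the contrastive objective this entry describes, under stated assumptions: the `prompt_similarity` factor and `beta` are illustrative, and RPO's exact weighting of cross-prompt pairs differs in detail. The loss penalizes the model when the reference-adjusted log-probability margin between the preferred and dispreferred response is small.

```python
import math

def rpo_pair_loss(pol_chosen, pol_rejected, ref_chosen, ref_rejected,
                  prompt_similarity=1.0, beta=0.1):
    """Negative log-sigmoid loss on the reference-adjusted log-prob margin
    between a preferred and a dispreferred response. Pairs drawn from
    related (non-identical) prompts are scaled by a similarity factor
    (illustrative stand-in for RPO's contrast weighting)."""
    margin = beta * ((pol_chosen - ref_chosen) - (pol_rejected - ref_rejected))
    return -math.log(1.0 / (1.0 + math.exp(-prompt_similarity * margin)))

# A wider preference margin yields a smaller loss.
loss_small_margin = rpo_pair_loss(-1.0, -2.0, -1.5, -1.5)
loss_large_margin = rpo_pair_loss(-0.5, -3.0, -1.5, -1.5)
```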
arXiv Detail & Related papers (2024-02-12T22:47:57Z)
- Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve the attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
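A toy version of the temporal-gating idea from the entry above, under stated assumptions: the dimension sizes, the log-compressed interval feature, and the parameter shapes are illustrative, not the paper's architecture. A sigmoid gate conditioned on the hidden state and the time gap since the last interaction modulates how much of the state carries forward.

```python
import numpy as np

def temporal_gate(hidden, delta_t, w_h, w_t, b):
    """Scale a recurrent hidden state by a sigmoid gate computed from the
    state itself and a log-compressed time interval since the last event."""
    z = w_h @ hidden + w_t * np.log1p(delta_t) + b
    gate = 1.0 / (1.0 + np.exp(-z))  # elementwise gate in (0, 1)
    return gate * hidden

rng = np.random.default_rng(1)
h = rng.normal(size=4)
gated = temporal_gate(h, delta_t=3600.0,
                      w_h=rng.normal(size=(4, 4)),
                      w_t=rng.normal(size=4),
                      b=np.zeros(4))
```

Because the gate lies in (0, 1), each component of the gated state is shrunk toward zero by an amount that depends on both the state and the elapsed time, which is how such gates let long-idle interests decay.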
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.