Using LLMs to Capture Users' Temporal Context for Recommendation
- URL: http://arxiv.org/abs/2508.08512v1
- Date: Mon, 11 Aug 2025 22:48:31 GMT
- Title: Using LLMs to Capture Users' Temporal Context for Recommendation
- Authors: Milad Sabouri, Masoud Mansoury, Kun Lin, Bamshad Mobasher
- Abstract summary: This paper presents an assessment of Large Language Models (LLMs) for generating semantically rich, time-aware user profiles. We do not propose a novel end-to-end recommendation architecture; instead, the core contribution is a systematic investigation into the degree of LLM effectiveness. The evaluation across Movies&TV and Video Games domains suggests that while LLM-generated profiles offer semantic depth and temporal structure, their effectiveness for context-aware recommendations is notably contingent on the richness of user interaction histories.
- Score: 3.719862246745416
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Effective recommender systems demand dynamic user understanding, especially in complex, evolving environments. Traditional user profiling often fails to capture the nuanced, temporal contextual factors of user preferences, such as transient short-term interests and enduring long-term tastes. This paper presents an assessment of Large Language Models (LLMs) for generating semantically rich, time-aware user profiles. We do not propose a novel end-to-end recommendation architecture; instead, the core contribution is a systematic investigation into the degree of LLM effectiveness in capturing the dynamics of user context by disentangling short-term and long-term preferences. This approach, framing temporal preferences as dynamic user contexts for recommendations, adaptively fuses these distinct contextual components into comprehensive user embeddings. The evaluation across Movies&TV and Video Games domains suggests that while LLM-generated profiles offer semantic depth and temporal structure, their effectiveness for context-aware recommendations is notably contingent on the richness of user interaction histories. Significant gains are observed in dense domains (e.g., Movies&TV), whereas improvements are less pronounced in sparse environments (e.g., Video Games). This work highlights LLMs' nuanced potential in enhancing user profiling for adaptive, context-aware recommendations, emphasizing the critical role of dataset characteristics for practical applicability.
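The abstract describes adaptively fusing short-term and long-term preference representations into a single user embedding. A minimal sketch of one common way to do this, a learned sigmoid gate over the two context vectors, is shown below; the function name `adaptive_fuse` and the parameters `w` and `b` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_fuse(short_term, long_term, w, b):
    """Fuse short- and long-term user embeddings with a scalar gate.

    alpha in (0, 1) weights transient interests against enduring tastes;
    w and b stand in for parameters a model would learn.
    """
    x = np.concatenate([short_term, long_term])
    alpha = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # sigmoid gate
    return alpha * short_term + (1.0 - alpha) * long_term

d = 8
short = rng.normal(size=d)  # e.g. embedding of an LLM "recent interests" profile
long = rng.normal(size=d)   # e.g. embedding of an LLM "enduring tastes" profile
w = rng.normal(size=2 * d)
user_emb = adaptive_fuse(short, long, w, b=0.0)
print(user_emb.shape)  # (8,)
```

Because the gate is a scalar, the fused embedding is a convex combination of the two context vectors, so it always lies between them component-wise; a vector-valued gate would allow per-dimension mixing at the cost of more parameters.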
Related papers
- Towards Realistic Personalization: Evaluating Long-Horizon Preference Following in Personalized User-LLM Interactions [50.70965714314064]
Large Language Models (LLMs) are increasingly serving as personal assistants, where users share complex and diverse preferences over extended interactions. This work proposes RealPref, a benchmark for evaluating realistic preference-following in personalized user-LLM interactions.
arXiv Detail & Related papers (2026-03-04T15:42:43Z) - ALPBench: A Benchmark for Attribution-level Long-term Personal Behavior Understanding [53.88804678012327]
ALPBench is a benchmark for attribution-level long-term personal behavior understanding. It predicts user-interested attribute combinations, enabling ground-truth evaluation. It models preferences from long-term historical behaviors rather than users' explicitly expressed requests.
arXiv Detail & Related papers (2026-02-03T03:32:16Z) - LLM-Enhanced Reinforcement Learning for Long-Term User Satisfaction in Interactive Recommendation [3.247395557141079]
We propose LLM-Enhanced Reinforcement Learning (LERL), a novel hierarchical recommendation framework. LERL consists of a high-level LLM-based planner that selects semantically diverse content categories, and a low-level RL policy that recommends personalized items. LERL significantly improves long-term user satisfaction when compared with state-of-the-art baselines.
arXiv Detail & Related papers (2026-01-27T13:22:30Z) - Effectiveness of LLMs in Temporal User Profiling for Recommendation [2.7543979996398513]
This paper examines the capability of leveraging Large Language Models (LLMs) to capture temporal dynamics. Our observations suggest that while LLMs tend to improve recommendation quality in domains with more active user engagement, their benefits appear less pronounced in sparser environments.
arXiv Detail & Related papers (2025-10-31T18:28:40Z) - Temporal User Profiling with LLMs: Balancing Short-Term and Long-Term Preferences for Recommendations [3.719862246745416]
We propose a novel method for user profiling that explicitly models short-term and long-term preferences. LLM-TUP achieves substantial improvements over several baselines.
arXiv Detail & Related papers (2025-08-11T20:28:24Z) - DUALRec: A Hybrid Sequential and Language Model Framework for Context-Aware Movie Recommendation [6.850757447639822]
Large Language Models (LLMs) have gained increasing attention in recent years owing to their strong semantic understanding and reasoning abilities. We propose DUALRec (Dynamic User-Aware Language-based Recommender), which combines the temporal modelling abilities of LSTM networks with the semantic reasoning power of fine-tuned Large Language Models.
arXiv Detail & Related papers (2025-07-18T14:22:05Z) - Counterfactual Tuning for Temporal Sensitivity Enhancement in Large Language Model-based Recommendation [8.798364656768657]
Existing large language models (LLMs) fail to leverage the rich temporal information inherent in users' historical interaction sequences. We propose the Counterfactual Enhanced Temporal Framework for LLM-Based Recommendation (CETRec). CETRec is grounded in causal inference principles, which allow it to isolate and measure the specific impact of temporal information on recommendation outcomes.
arXiv Detail & Related papers (2025-07-03T10:11:35Z) - What Makes LLMs Effective Sequential Recommenders? A Study on Preference Intensity and Temporal Context [56.590259941275434]
RecPO is a preference optimization framework for sequential recommendation. It exploits adaptive reward margins based on inferred preference hierarchies and temporal signals. It mirrors key characteristics of human decision-making: favoring timely satisfaction, maintaining coherent preferences, and exercising discernment under shifting contexts.
arXiv Detail & Related papers (2025-06-02T21:09:29Z) - Multi-agents based User Values Mining for Recommendation [52.26100802380767]
We propose a zero-shot multi-LLM collaborative framework for effective and accurate user value extraction. We apply text summarization techniques to condense item content while preserving essential meaning. To mitigate hallucinations, we introduce two specialized agent roles: evaluators and supervisors.
arXiv Detail & Related papers (2025-05-02T04:01:31Z) - Towards Explainable Temporal User Profiling with LLMs [3.719862246745416]
We leverage large language models (LLMs) to generate natural language summaries of users' interaction histories. Our framework not only models temporal user preferences but also produces natural language profiles that can be used to explain recommendations in an interpretable manner.
arXiv Detail & Related papers (2025-05-01T22:02:46Z) - Unveiling User Preferences: A Knowledge Graph and LLM-Driven Approach for Conversational Recommendation [55.5687800992432]
We propose a plug-and-play framework that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to unveil user preferences. This enables the LLM to transform KG entities into concise natural language descriptions, allowing it to comprehend domain-specific knowledge.
arXiv Detail & Related papers (2024-11-16T11:47:21Z) - Hierarchical Reinforcement Learning for Temporal Abstraction of Listwise Recommendation [51.06031200728449]
We propose a novel framework called mccHRL to provide different levels of temporal abstraction on listwise recommendation. Within the hierarchical framework, the high-level agent studies the evolution of user perception, while the low-level agent produces the item selection policy. Experiments show significant performance improvement by our method, compared with several well-known baselines.
arXiv Detail & Related papers (2024-09-11T17:01:06Z) - Beyond Inter-Item Relations: Dynamic Adaption for Enhancing LLM-Based Sequential Recommendation [83.87767101732351]
Sequential recommender systems (SRS) predict the next items that users may prefer based on user historical interaction sequences.
Inspired by the rise of large language models (LLMs) in various AI applications, there is a surge of work on LLM-based SRS.
We propose DARec, a sequential recommendation model built on top of coarse-grained adaption for capturing inter-item relations.
arXiv Detail & Related papers (2024-08-14T10:03:40Z) - Denoising User-aware Memory Network for Recommendation [11.145186013006375]
We propose a novel CTR model named the denoising user-aware memory network (DUMN).
DUMN uses the representation of explicit feedback to purify the representation of implicit feedback, effectively denoising the implicit feedback.
Experiments on two real e-commerce user behavior datasets show that DUMN has a significant improvement over the state-of-the-art baselines.
arXiv Detail & Related papers (2021-07-12T14:39:36Z) - Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.