Capturing User Interests from Data Streams for Continual Sequential Recommendation
- URL: http://arxiv.org/abs/2506.07466v2
- Date: Wed, 29 Oct 2025 04:50:52 GMT
- Title: Capturing User Interests from Data Streams for Continual Sequential Recommendation
- Authors: Gyuseok Lee, Hyunsik Yoo, Junyoung Hwang, SeongKu Kang, Hwanjo Yu
- Abstract summary: We introduce Continual Sequential Transformer for Recommendation (CSTRec). CSTRec is designed to effectively adapt to current interests by leveraging well-preserved historical ones. CSTRec outperforms state-of-the-art models in both knowledge retention and acquisition.
- Score: 20.994752789028958
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Transformer-based sequential recommendation (SR) models excel at modeling long-range dependencies in user behavior via self-attention. However, updating them with continuously arriving behavior sequences incurs high computational costs or leads to catastrophic forgetting. Although continual learning, a standard approach for non-stationary data streams, has recently been applied to recommendation, existing methods gradually forget long-term user preferences and remain underexplored in SR. In this paper, we introduce Continual Sequential Transformer for Recommendation (CSTRec). CSTRec is designed to effectively adapt to current interests by leveraging well-preserved historical ones, thus capturing the trajectory of user interests over time. The core of CSTRec is Continual Sequential Attention (CSA), a linear attention tailored for continual SR, which enables CSTRec to partially retain historical knowledge without direct access to prior data. CSA has two key components: (1) Cauchy-Schwarz Normalization that stabilizes learning over time under uneven user interaction frequencies; (2) Collaborative Interest Enrichment that alleviates forgetting through shared, learnable interest pools. In addition, we introduce a new technique to facilitate the adaptation of new users by transferring historical knowledge from existing users with similar interests. Extensive experiments on three real-world datasets show that CSTRec outperforms state-of-the-art models in both knowledge retention and acquisition.
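The abstract does not give equations, but the linear-attention foundation that CSA builds on can be sketched. The sketch below is an illustrative assumption, not the paper's implementation: the function names, the ELU-based feature map, and the epsilon constant are all choices made here for clarity. What it does show is the property the abstract relies on: causal linear attention reduces to running sums, so the model state can be carried across data-stream updates without direct access to prior interactions.

```python
import numpy as np

def elu_feature_map(x):
    # A positive feature map commonly used in linear attention (assumed here).
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention_stream(queries, keys, values, state=None):
    """Causal linear attention over a stream of user interactions.

    Attention reduces to the running sums S and z, so `state` can be saved
    after one batch of the stream and resumed on the next without replaying
    old interactions -- the property continual SR methods exploit.
    """
    d_k, d_v = keys.shape[1], values.shape[1]
    if state is None:
        S = np.zeros((d_k, d_v))  # running sum of phi(k_j) v_j^T
        z = np.zeros(d_k)         # running sum of phi(k_j)
    else:
        S, z = state
    outputs = []
    for q, k, v in zip(queries, keys, values):
        phi_k = elu_feature_map(k)
        S = S + np.outer(phi_k, v)
        z = z + phi_k
        phi_q = elu_feature_map(q)
        outputs.append(phi_q @ S / (phi_q @ z + 1e-6))
    return np.stack(outputs), (S, z)
```

Processing a sequence in two chunks with the carried state gives exactly the same outputs as processing it in one pass, which is what makes incremental updates over a data stream cheap.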
Related papers
- SA-CAISR: Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation [34.39526892352457]
We propose SA-CAISR, a Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation framework. As a buffer-free framework, SA-CAISR operates using only the old model and new data, directly addressing the high costs of replay-based techniques. We show that SA-CAISR improves Recall@20 by 2.0% on average across datasets, while reducing memory usage by 97.5% and training time by 46.9% compared to the best baseline.
arXiv Detail & Related papers (2026-02-09T14:00:52Z)
- Gated Rotary-Enhanced Linear Attention for Long-term Sequential Recommendation [14.581838243440922]
We propose a long-term sequential recommendation model with Gated Rotary-Enhanced Linear Attention (RecGRELA). Specifically, we propose a Rotary-Enhanced Linear Attention (RELA) module to efficiently model long-range dependencies. We also introduce a SiLU-based gating mechanism for RELA to help the model tell whether a user behavior shows a short-term, local interest or a real change in long-term tastes.
arXiv Detail & Related papers (2025-06-16T09:56:10Z) - Test-Time Alignment for Tracking User Interest Shifts in Sequential Recommendation [47.827361176767944]
Sequential recommendation is essential in modern recommender systems, aiming to predict the next item a user may interact with. Real-world scenarios are often dynamic and subject to shifts in user interests. Recent Test-Time Training (TTT) has emerged as a promising paradigm, enabling pre-trained models to dynamically adapt to test data. We propose T$^2$ARec, a novel model leveraging a state space model for TTT by introducing two Test-Time Alignment modules tailored for sequential recommendation.
arXiv Detail & Related papers (2025-04-02T08:42:30Z) - Multi-granularity Interest Retrieval and Refinement Network for Long-Term User Behavior Modeling in CTR Prediction [68.90783662117936]
Click-through Rate (CTR) prediction is crucial for online personalization platforms. Recent advancements have shown that modeling rich user behaviors can significantly improve the performance of CTR prediction. We propose the Multi-granularity Interest Retrieval and Refinement Network (MIRRN).
arXiv Detail & Related papers (2024-11-22T15:29:05Z) - Continual Task Learning through Adaptive Policy Self-Composition [54.95680427960524]
CompoFormer is a structure-based continual transformer model that adaptively composes previous policies via a meta-policy network.
Our experiments reveal that CompoFormer outperforms conventional continual learning (CL) methods, particularly in longer task sequences.
arXiv Detail & Related papers (2024-11-18T08:20:21Z) - Long-Sequence Recommendation Models Need Decoupled Embeddings [49.410906935283585]
We identify and characterize a neglected deficiency in existing long-sequence recommendation models. A single set of embeddings struggles with learning both attention and representation, leading to interference between these two processes. We propose the Decoupled Attention and Representation Embeddings (DARE) model, where two distinct embedding tables are learned separately to fully decouple attention and representation.
arXiv Detail & Related papers (2024-10-03T15:45:15Z) - i$^2$VAE: Interest Information Augmentation with Variational Regularizers for Cross-Domain Sequential Recommendation [5.300964409946611]
i$^2$VAE is a variational autoencoder that enhances user interest learning with mutual information-based regularizers. Experiments demonstrate that i$^2$VAE outperforms state-of-the-art methods.
arXiv Detail & Related papers (2024-05-31T09:07:03Z) - Look into the Future: Deep Contextualized Sequential Recommendation [28.726897673576865]
We propose a novel framework for sequential recommendation called Look into the Future (LIFT).
LIFT builds and leverages the contexts of sequential recommendation.
In our experiments, LIFT achieves significant performance improvement on click-through rate prediction and rating prediction tasks.
arXiv Detail & Related papers (2024-05-23T09:34:28Z) - Learning Self-Modulating Attention in Continuous Time Space with
Applications to Sequential Recommendation [102.24108167002252]
We propose a novel attention network, named self-modulating attention, that models complex, non-linearly evolving dynamic user preferences.
We empirically demonstrate the effectiveness of our method on top-N sequential recommendation tasks, and the results on three large-scale real-world datasets show that our model can achieve state-of-the-art performance.
arXiv Detail & Related papers (2022-03-30T03:54:11Z) - Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch, by solving a greedy knapsack problem on the fly.
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
arXiv Detail & Related papers (2022-02-01T06:52:40Z) - Improving Sequential Recommendations via Bidirectional Temporal Data Augmentation with Pre-training [46.5064172656298]
We introduce Bidirectional temporal data Augmentation with pre-training (BARec). Our approach leverages bidirectional temporal augmentation and knowledge-enhanced fine-tuning to synthesize authentic pseudo-prior items. Our comprehensive experimental analysis on five benchmark datasets confirms the superiority of BARec across both short and long sequence contexts.
arXiv Detail & Related papers (2021-12-13T07:33:28Z) - Contrastive Self-supervised Sequential Recommendation with Robust
Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for Sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z) - Leveraging Historical Interaction Data for Improving Conversational
Recommender System [105.90963882850265]
We propose a novel pre-training approach to integrate item- and attribute-based preference sequence.
Experiment results on two real-world datasets have demonstrated the effectiveness of our approach.
arXiv Detail & Related papers (2020-08-19T03:43:50Z) - ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning
for Session-based Recommendation [28.22402119581332]
Session-based recommendation has received growing attention recently due to increasing privacy concerns.
We propose a method called Adaptively Distilled Exemplar Replay (ADER) by periodically replaying previous training samples.
ADER consistently outperforms other baselines, and it even outperforms the method using all historical data at every update cycle.
arXiv Detail & Related papers (2020-07-23T13:19:53Z)
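ADER's core loop, replaying a few stored exemplars while distilling from the previous model, can be sketched as below. Everything here is an illustrative simplification, not the paper's exact procedure: the confidence-based exemplar selection heuristic, the function names, and the temperature value are assumptions made for the sketch.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax, numerically stabilized.
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """Soft cross-entropy between the old model's (teacher's) softened
    predictions and the new model's (student's), added to the usual
    recommendation loss on new data to reduce forgetting."""
    p_teacher = softmax(teacher_logits, T)
    log_p_student = np.log(softmax(student_logits, T) + 1e-12)
    return -np.mean(np.sum(p_teacher * log_p_student, axis=-1))

def select_exemplars(session_logits, session_ids, budget):
    """Keep the sessions the old model predicts most confidently -- a
    deliberately simplified stand-in for ADER's adaptive selection."""
    confidence = softmax(session_logits).max(axis=-1)
    order = np.argsort(-confidence)
    return [session_ids[i] for i in order[:budget]]
```

A student whose logits match the teacher's incurs the minimum distillation loss, which is why replaying a small exemplar set with this term can anchor the updated model to previously learned preferences.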
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.