SA-CAISR: Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation
- URL: http://arxiv.org/abs/2602.08678v2
- Date: Wed, 11 Feb 2026 12:06:44 GMT
- Title: SA-CAISR: Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation
- Authors: Xiaomeng Song, Xinru Wang, Hanbing Wang, Hongyu Lu, Yu Chen, Zhaochun Ren, Zhumin Chen,
- Abstract summary: We propose SA-CAISR, a Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation framework. As a buffer-free framework, SA-CAISR operates using only the old model and new data, directly addressing the high costs of replay-based techniques. We show that SA-CAISR improves Recall@20 by 2.0% on average across datasets, while reducing memory usage by 97.5% and training time by 46.9% compared to the best baseline.
- Score: 34.39526892352457
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential recommendation (SR) aims to predict a user's next action by learning from their historical interaction sequences. In real-world applications, these models require periodic updates to adapt to new interactions and evolving user preferences. While incremental learning methods facilitate these updates, they face significant challenges. Replay-based approaches incur high memory and computational costs, and regularization-based methods often struggle to discard outdated or conflicting knowledge. To overcome these challenges, we propose SA-CAISR, a Stage-Adaptive and Conflict-Aware Incremental Sequential Recommendation framework. As a buffer-free framework, SA-CAISR operates using only the old model and new data, directly addressing the high costs of replay-based techniques. SA-CAISR introduces a novel Fisher-weighted knowledge-screening mechanism that dynamically identifies outdated knowledge by estimating parameter-level conflicts between the old model and new data, selectively removing obsolete knowledge while preserving compatible historical patterns. This dynamic balance between stability and adaptability allows our method to achieve state-of-the-art performance in incremental SR. Specifically, SA-CAISR improves Recall@20 by 2.0% on average across datasets, while reducing memory usage by 97.5% and training time by 46.9% compared to the best baseline. This efficiency allows real-world systems to rapidly update user profiles with minimal computational overhead, ensuring more timely and accurate recommendations.
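The Fisher-weighted knowledge-screening mechanism can be illustrated with a minimal numpy sketch. This is an assumption-laden reading of the abstract, not the paper's actual formulation: the function name, the conflict score, and the zero threshold are all hypothetical. The intuition is that a parameter carries outdated knowledge when it is important to the old model (high Fisher value) yet its old-data and new-data gradients point in opposite directions:

```python
import numpy as np

def fisher_conflict_mask(old_grads, new_grads, fisher_diag, tau=0.0):
    """Hypothetical sketch of Fisher-weighted knowledge screening.

    A parameter is flagged as 'outdated' when the old model's Fisher
    importance is high but the new-data gradient disagrees in direction
    with the old-data gradient (a parameter-level conflict).
    """
    conflict = -(old_grads * new_grads)  # > 0 when gradient directions disagree
    score = fisher_diag * conflict       # weight conflicts by parameter importance
    return score > tau                   # True = screen out / relax this parameter

# Toy example: parameter 0 is important and conflicting, so it is screened out;
# parameter 1 is important but compatible; parameter 2 is simply unimportant.
old_g  = np.array([ 1.0, -0.5, 0.2])
new_g  = np.array([-1.0, -0.4, 0.3])
fisher = np.array([ 0.9,  0.8, 0.1])
mask = fisher_conflict_mask(old_g, new_g, fisher)
```

In this toy example only the first parameter is flagged: it is both highly important to the old model and pulled in the opposite direction by the new data.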
Related papers
- Retrofit: Continual Learning with Bounded Forgetting for Security Applications [25.185616916987158]
We propose RETROFIT, a data retrospective-free continual learning method that achieves bounded forgetting for effective knowledge transfer. To mitigate interference, we apply low-rank and sparse updates that confine parameter changes to independent subspaces. In malware detection under temporal drift, it substantially improves the retention score, from 20.2% to 38.6% over CL baselines, and exceeds the oracle upper bound on new data.
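The low-rank-plus-sparse update idea can be sketched in a few lines of numpy. This is a generic illustration of the technique named in the summary, not RETROFIT's implementation; the dimensions, rank, and sparsity level are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6
W = rng.normal(size=(d, d))  # frozen base weights from the old model

# Low-rank update: parameter changes confined to a rank-r subspace.
r = 2
U = rng.normal(size=(d, r)) * 0.1
V = rng.normal(size=(d, r)) * 0.1

# Sparse update: touches only a handful of individual entries.
S = np.zeros((d, d))
idx = rng.choice(d * d, size=3, replace=False)
S.flat[idx] = 0.05

# The effective weights combine the frozen base with both bounded updates;
# W itself is never modified, which bounds how much old knowledge can drift.
W_new = W + U @ V.T + S
```

Because the base matrix `W` is never overwritten, forgetting is bounded by the norm of the low-rank and sparse terms, which is the spirit of the summary's "independent subspaces" claim.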
arXiv Detail & Related papers (2025-11-14T16:07:03Z)
- ReLATE+: Unified Framework for Adversarial Attack Detection, Classification, and Resilient Model Selection in Time-Series Classification [9.085996862368576]
Minimizing computational overhead in time-series classification, particularly in deep learning models, presents a significant challenge. We propose ReLATE+, a comprehensive framework that detects and classifies adversarial attacks. We show that ReLATE+ reduces computational overhead by an average of 77.68%, enhancing adversarial resilience and streamlining robust model selection.
arXiv Detail & Related papers (2025-08-26T22:11:50Z)
- EKPC: Elastic Knowledge Preservation and Compensation for Class-Incremental Learning [53.88000987041739]
Class-Incremental Learning (CIL) aims to enable AI models to continuously learn from sequentially arriving data of different classes over time. We propose the Elastic Knowledge Preservation and Compensation (EKPC) method, integrating Importance-aware Parameter Regularization (IPR) and Trainable Semantic Drift Compensation (TSDC) for CIL.
arXiv Detail & Related papers (2025-06-14T05:19:58Z)
- Capturing User Interests from Data Streams for Continual Sequential Recommendation [20.994752789028958]
We introduce Continual Sequential Transformer for Recommendation (CSTRec). CSTRec is designed to effectively adapt to current interests by leveraging well-preserved historical ones. CSTRec outperforms state-of-the-art models in both knowledge retention and acquisition.
arXiv Detail & Related papers (2025-06-09T06:20:23Z)
- Replay to Remember (R2R): An Efficient Uncertainty-driven Unsupervised Continual Learning Framework Using Generative Replay [1.5267291767316298]
Continual Learning entails progressively acquiring knowledge from new data while retaining previously acquired knowledge. We present a novel uncertainty-driven Unsupervised Continual Learning framework using Generative Replay, namely Replay to Remember (R2R). Our proposed R2R approach improves knowledge retention, achieving state-of-the-art performance of 98.13%, 73.06%, 93.41%, 95.18%, and 59.74%, respectively.
arXiv Detail & Related papers (2025-05-07T20:29:31Z)
- Temporal-Difference Variational Continual Learning [77.92320830700797]
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations. Our approach effectively mitigates Catastrophic Forgetting, outperforming strong Variational CL methods.
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Towards Robust Continual Learning with Bayesian Adaptive Moment Regularization [51.34904967046097]
Continual learning seeks to overcome the challenge of catastrophic forgetting, where a model forgets previously learnt information.
We introduce BAdam, a novel prior-based method that better constrains parameter growth, reducing catastrophic forgetting.
Results show that BAdam achieves state-of-the-art performance for prior-based methods on challenging single-headed class-incremental experiments.
arXiv Detail & Related papers (2023-09-15T17:10:51Z)
- Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification [83.23129279407271]
We propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities.
In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed.
This knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data.
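The frozen-plus-augmented-connections scheme described above can be sketched as follows. All names and dimensions here are hypothetical; the sketch only shows the structural idea that pre-trained weights stay fixed while a separate set of augmented connections is trained on the new securities:

```python
import numpy as np

rng = np.random.default_rng(0)

W_old = rng.normal(size=(4, 3))  # pre-trained weights: kept fixed
W_aug = np.zeros((4, 2))         # augmented connections: trainable on new data

def forward(x):
    # Outputs for the original securities use only the frozen weights;
    # outputs for the new securities come solely from the augmented
    # connections, so training them cannot disturb the old knowledge.
    return np.concatenate([x @ W_old, x @ W_aug], axis=-1)

x = rng.normal(size=(5, 4))
y = forward(x)  # shape (5, 5): 3 old outputs + 2 new outputs per example
```

Gradient descent would then update `W_aug` only, leaving `W_old` untouched, which is how the prior knowledge is "maintained by keeping existing connections fixed."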
arXiv Detail & Related papers (2022-07-23T18:54:10Z)
- Contextual Squeeze-and-Excitation for Efficient Few-Shot Image Classification [57.36281142038042]
We present a new adaptive block called Contextual Squeeze-and-Excitation (CaSE) that adjusts a pretrained neural network on a new task to significantly improve performance.
We also present a new training protocol based on Coordinate-Descent called UpperCaSE that exploits meta-trained CaSE blocks and fine-tuning routines for efficient adaptation.
arXiv Detail & Related papers (2022-06-20T15:25:08Z)
- Towards Lifelong Learning of End-to-end ASR [81.15661413476221]
Lifelong learning aims to enable a machine to sequentially learn new tasks from new datasets describing the changing real world without forgetting the previously learned knowledge.
An overall relative reduction of 28.7% in WER was achieved compared to the fine-tuning baseline when sequentially learning on three very different benchmark corpora.
arXiv Detail & Related papers (2021-04-04T13:48:53Z)
- ADER: Adaptively Distilled Exemplar Replay Towards Continual Learning for Session-based Recommendation [28.22402119581332]
Session-based recommendation has received growing attention recently due to increasing privacy concerns.
We propose a method called Adaptively Distilled Exemplar Replay (ADER) by periodically replaying previous training samples.
ADER consistently outperforms other baselines, and it even outperforms the method using all historical data at every update cycle.
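A generic distilled-replay objective of the kind ADER describes can be sketched with numpy. This is not ADER's actual loss; the function name, the fixed weight `lam`, and the temperature `T` are illustrative assumptions. The idea is to combine a cross-entropy term on new-session data with a soft-target distillation term on replayed exemplars, where the old model acts as teacher:

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distilled_replay_loss(new_logits, new_labels, ex_student, ex_teacher,
                          lam=1.0, T=2.0):
    """Hypothetical sketch: cross-entropy on new data plus a soft-target
    distillation term on replayed exemplars (teacher = old model)."""
    p_new = softmax(new_logits)
    ce = -np.log(p_new[np.arange(len(new_labels)), new_labels]).mean()
    p_t = softmax(ex_teacher, T)   # teacher's softened predictions
    p_s = softmax(ex_student, T)   # student's softened predictions
    kd = -(p_t * np.log(p_s)).sum(axis=-1).mean()  # soft cross-entropy
    return ce + lam * kd

# Toy usage with 2-class logits.
loss = distilled_replay_loss(
    new_logits=np.array([[2.0, 0.0], [0.0, 2.0]]),
    new_labels=np.array([0, 1]),
    ex_student=np.array([[1.0, 0.5]]),
    ex_teacher=np.array([[1.2, 0.3]]),
)
```

The "adaptively distilled" part of ADER would additionally tune how exemplars are selected and weighted per update cycle, which this sketch deliberately omits.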
arXiv Detail & Related papers (2020-07-23T13:19:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.