Efficient Sequential Recommendation for Long Term User Interest Via Personalization
- URL: http://arxiv.org/abs/2601.03479v1
- Date: Wed, 07 Jan 2026 00:15:44 GMT
- Title: Efficient Sequential Recommendation for Long Term User Interest Via Personalization
- Authors: Qiang Zhang, Hanchao Yu, Ivan Ji, Chen Yuan, Yi Zhang, Chihuang Liu, Xiaolong Wang, Christopher E. Lambert, Ren Chen, Chen Kovacs, Xinzhu Bei, Renqin Cai, Rui Li, Lizhu Zhang, Xiangjun Fan, Qunshu Zhang, Benyu Zhang
- Abstract summary: We introduce a novel approach to sequential recommendation that leverages personalization techniques to enhance efficiency and performance. Our method significantly reduces computational costs while maintaining high recommendation accuracy. It can be applied to existing transformer-based recommendation models, e.g., HSTU and HLLM.
- Score: 18.326002279321575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed the success of sequential modeling, generative recommenders, and large language models for recommendation. Although the scaling law has been validated for sequential models, they are computationally inefficient in real-world applications such as recommendation, because the cost of the transformer grows non-linearly (quadratically) with sequence length. To improve the efficiency of sequential models, we introduce a novel approach to sequential recommendation that leverages personalization techniques to enhance efficiency and performance. Our method compresses long user interaction histories into learnable tokens, which are then combined with recent interactions to generate recommendations. This approach significantly reduces computational costs while maintaining high recommendation accuracy. Our method can be applied to existing transformer-based recommendation models, e.g., HSTU and HLLM. Extensive experiments on multiple sequential models demonstrate its versatility and effectiveness. Source code is available at https://github.com/facebookresearch/PerSRec.
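The abstract's core idea, compressing a long interaction history into a small set of learnable tokens that are then combined with recent interactions, can be sketched as follows. This is an illustrative reconstruction, not the paper's actual implementation: the cross-attention-style pooling, all names, and all dimensions are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def compress_history(history, query_tokens):
    # Attention-style pooling: K learnable query tokens attend over the
    # long history and summarize it into K vectors.
    d = history.shape[-1]
    scores = query_tokens @ history.T / np.sqrt(d)  # (K, N_long)
    weights = softmax(scores, axis=-1)
    return weights @ history                        # (K, d)

rng = np.random.default_rng(0)
d, n_long, n_recent, k = 32, 1000, 50, 8
history = rng.normal(size=(n_long, d))   # long user interaction history
recent = rng.normal(size=(n_recent, d))  # recent interactions
queries = rng.normal(size=(k, d))        # learnable in a real model

compressed = compress_history(history, queries)
model_input = np.concatenate([compressed, recent], axis=0)
print(model_input.shape)  # (58, 32): 8 summary tokens + 50 recent items
```

The downstream transformer then runs over 58 tokens instead of 1050, which is where the claimed efficiency gain would come from.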
Related papers
- HyMiRec: A Hybrid Multi-interest Learning Framework for LLM-based Sequential Recommendation [24.720767926024433]
HyMiRec is a hybrid sequential recommendation framework for large language models. It extracts coarse interest embeddings from long user sequences and uses an LLM-based recommender to capture refined interest embeddings. To model the diverse preferences of users, we design a disentangled multi-interest learning module.
arXiv Detail & Related papers (2025-10-15T16:45:59Z) - Slow Thinking for Sequential Recommendation [88.46598279655575]
We present a novel slow thinking recommendation model, named STREAM-Rec. Our approach is capable of analyzing historical user behavior, generating a multi-step, deliberative reasoning process, and delivering personalized recommendations. In particular, we focus on two key challenges: (1) identifying suitable reasoning patterns in recommender systems, and (2) exploring how to effectively stimulate the reasoning capabilities of traditional recommenders.
arXiv Detail & Related papers (2025-04-13T15:53:30Z) - A Novel Mamba-based Sequential Recommendation Method [4.941272356564765]
Sequential recommendation (SR) encodes user activity to predict the next action. Transformer-based models have proven effective for sequential recommendation, but the complexity of the self-attention module in Transformers scales quadratically with the sequence length. We propose a novel multi-head latent Mamba architecture, which employs multiple low-dimensional Mamba layers and fully connected layers.
arXiv Detail & Related papers (2025-04-10T02:43:19Z) - Scaling Sequential Recommendation Models with Transformers [0.0]
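The quadratic self-attention cost noted in the entry above is the same bottleneck the main paper's token compression targets: shrinking a length-N input to K summary tokens plus R recent items cuts attention cost roughly by a factor of (N/(K+R))^2. A back-of-the-envelope sketch, where the FLOP formula and the sequence lengths are illustrative assumptions rather than figures from either paper:

```python
def attention_flops(seq_len: int, d: int) -> int:
    # Dominant self-attention cost: the QK^T score matrix plus the
    # weighted sum over values, each ~seq_len^2 * d multiply-adds.
    return 2 * seq_len ** 2 * d

d_model = 64
full = attention_flops(1000, d_model)       # raw 1000-item history
compact = attention_flops(8 + 50, d_model)  # 8 summary tokens + 50 recent items

print(f"speedup: {full / compact:.0f}x")  # speedup: 297x
```

The ratio is independent of the model width `d_model`; only the sequence lengths matter, which is why compressing history helps most for very long user sequences.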
We take inspiration from the scaling laws observed in training large language models, and explore similar principles for sequential recommendation. Compute-optimal training is possible but requires a careful analysis of the compute-performance trade-offs specific to the application. We also show that performance scaling translates to downstream tasks by fine-tuning larger pre-trained models on smaller task-specific domains.
arXiv Detail & Related papers (2024-12-10T15:20:56Z) - Scaling New Frontiers: Insights into Large Recommendation Models [74.77410470984168]
Meta's generative recommendation model HSTU illustrates the scaling laws of recommendation systems by expanding parameters to thousands of billions. We conduct comprehensive ablation studies to explore the origins of these scaling laws. We offer insights into future directions for large recommendation models.
arXiv Detail & Related papers (2024-12-01T07:27:20Z) - Bridging User Dynamics: Transforming Sequential Recommendations with Schrödinger Bridge and Diffusion Models [49.458914600467324]
We introduce the Schrödinger Bridge into diffusion-based sequential recommendation models, creating the SdifRec model.
We also propose an extended version of SdifRec called con-SdifRec, which utilizes user clustering information as a guiding condition.
arXiv Detail & Related papers (2024-08-30T09:10:38Z) - GenRec: Generative Sequential Recommendation with Large Language Models [4.381277509913139]
We propose a novel model named Generative Recommendation (GenRec)
GenRec is lightweight and requires only a few hours to train effectively in low-resource settings.
Our experiments have demonstrated that GenRec generalizes on various public real-world datasets.
arXiv Detail & Related papers (2024-07-30T20:58:36Z) - E^2VPT: An Effective and Efficient Approach for Visual Prompt Tuning [55.50908600818483]
Fine-tuning large-scale pretrained vision models for new tasks has become increasingly parameter-intensive.
We propose an Effective and Efficient Visual Prompt Tuning (E2VPT) approach for large-scale transformer-based model adaptation.
Our approach outperforms several state-of-the-art baselines on two benchmarks.
arXiv Detail & Related papers (2023-07-25T19:03:21Z) - Towards Universal Sequence Representation Learning for Recommender Systems [98.02154164251846]
We present a novel universal sequence representation learning approach, named UniSRec.
The proposed approach utilizes the associated description text of items to learn transferable representations across different recommendation scenarios.
Our approach can be effectively transferred to new recommendation domains or platforms in a parameter-efficient way.
arXiv Detail & Related papers (2022-06-13T07:21:56Z) - Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data-sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.