MLSA4Rec: Mamba Combined with Low-Rank Decomposed Self-Attention for Sequential Recommendation
- URL: http://arxiv.org/abs/2407.13135v1
- Date: Thu, 18 Jul 2024 03:46:21 GMT
- Title: MLSA4Rec: Mamba Combined with Low-Rank Decomposed Self-Attention for Sequential Recommendation
- Authors: Jinzhao Su, Zhenhua Huang
- Abstract summary: This paper proposes a new hybrid recommendation framework, Mamba combined with Low-Rank Decomposed Self-Attention for Sequential Recommendation (MLSA4Rec).
MLSA4Rec combines user preference information refined by the Mamba and LSA modules to accurately predict the user's next possible interaction.
Experimental results show that MLSA4Rec outperforms existing self-attention and Mamba-based sequential recommendation models in recommendation accuracy on three real-world datasets.
- Score: 4.550290285002704
- Abstract: In applications such as e-commerce, online education, and streaming services, sequential recommendation systems play a critical role. Despite the excellent performance of self-attention-based sequential recommendation models in capturing dependencies between items in user interaction history, their quadratic complexity and lack of structural bias limit their applicability. Recently, some works have replaced the self-attention module in sequential recommenders with Mamba, which has linear complexity and structural bias. However, these works have not noted the complementarity between the two approaches. To address this issue, this paper proposes a new hybrid recommendation framework, Mamba combined with Low-Rank decomposed Self-Attention for Sequential Recommendation (MLSA4Rec), whose complexity is linear with respect to the length of the user's historical interaction sequence. Specifically, MLSA4Rec designs an efficient Mamba-LSA interaction module. This module introduces a low-rank decomposed self-attention (LSA) module with linear complexity and injects structural bias into it through Mamba. The LSA module analyzes user preferences from a different perspective and dynamically guides Mamba to focus on important information in user historical interactions through a gated information transmission mechanism. Finally, MLSA4Rec combines user preference information refined by the Mamba and LSA modules to accurately predict the user's next possible interaction. To our knowledge, this is the first study to combine Mamba and self-attention in sequential recommendation systems. Experimental results show that MLSA4Rec outperforms existing self-attention and Mamba-based sequential recommendation models in recommendation accuracy on three real-world datasets, demonstrating the great potential of Mamba and self-attention working together.
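The abstract names two linear-complexity components, a low-rank decomposed self-attention (LSA) module and a gated mechanism that transmits information between the LSA and Mamba streams, but gives no formulas. Below is a minimal PyTorch sketch of one plausible reading, not the paper's actual implementation: it assumes a Linformer-style projection of the sequence axis for the low-rank attention and a position-wise sigmoid gate for the fusion; the names LowRankSelfAttention, GatedFusion, and the rank parameter are hypothetical.

```python
# Hypothetical sketch only: the exact MLSA4Rec decomposition and gate are not
# given in this summary. This assumes a Linformer-style projection of the
# sequence axis, so the attention map is (max_len x rank) instead of
# (max_len x max_len), giving cost linear in sequence length.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LowRankSelfAttention(nn.Module):
    def __init__(self, d_model: int, max_len: int, rank: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_model)
        self.k = nn.Linear(d_model, d_model)
        self.v = nn.Linear(d_model, d_model)
        # Compress the length axis (max_len -> rank latent slots).
        self.proj_k = nn.Linear(max_len, rank, bias=False)
        self.proj_v = nn.Linear(max_len, rank, bias=False)
        self.scale = d_model ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, max_len, d_model); assumes sequences padded to max_len.
        q, k, v = self.q(x), self.k(x), self.v(x)
        k = self.proj_k(k.transpose(1, 2)).transpose(1, 2)  # (B, rank, d)
        v = self.proj_v(v.transpose(1, 2)).transpose(1, 2)  # (B, rank, d)
        attn = F.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)  # (B, N, rank)
        return attn @ v                                     # (B, N, d)

class GatedFusion(nn.Module):
    # One guess at the "gated information transmission": a sigmoid gate that
    # mixes the Mamba stream with the LSA stream position-wise.
    def __init__(self, d_model: int):
        super().__init__()
        self.gate = nn.Linear(2 * d_model, d_model)

    def forward(self, mamba_out: torch.Tensor, lsa_out: torch.Tensor) -> torch.Tensor:
        g = torch.sigmoid(self.gate(torch.cat([mamba_out, lsa_out], dim=-1)))
        return g * mamba_out + (1.0 - g) * lsa_out

# Usage: a random tensor stands in for the Mamba module's output.
seq = torch.randn(2, 50, 64)                  # (batch, max_len, d_model)
lsa = LowRankSelfAttention(d_model=64, max_len=50, rank=8)
out = GatedFusion(d_model=64)(seq, lsa(seq))  # (2, 50, 64)
```

With a fixed rank, both modules cost O(N) in the sequence length N, consistent with the linear-complexity claim in the abstract; the actual MLSA4Rec decomposition and gating may differ.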
Related papers
- Mamba-CL: Optimizing Selective State Space Model in Null Space for Continual Learning [54.19222454702032]
Continual Learning aims to equip AI models with the ability to learn a sequence of tasks over time, without forgetting previously learned knowledge.
State Space Models (SSMs) have achieved notable success in computer vision.
We introduce Mamba-CL, a framework that continuously fine-tunes the core SSMs of the large-scale Mamba foundation model.
arXiv Detail & Related papers (2024-11-23T06:36:16Z)
- Large Language Model Empowered Embedding Generator for Sequential Recommendation [57.49045064294086]
Large Language Model (LLM) has the potential to understand the semantic connections between items, regardless of their popularity.
We present LLMEmb, an innovative technique that harnesses LLM to create item embeddings that bolster the performance of Sequential Recommender Systems.
arXiv Detail & Related papers (2024-09-30T03:59:06Z)
- Bidirectional Gated Mamba for Sequential Recommendation [56.85338055215429]
Mamba, a recent advancement, has exhibited exceptional performance in time series prediction.
We introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation.
Our results indicate that SIGMA outperforms current models on five real-world datasets.
arXiv Detail & Related papers (2024-08-21T09:12:59Z)
- MaTrRec: Uniting Mamba and Transformer for Sequential Recommendation [6.74321828540424]
Sequential recommendation systems aim to provide personalized recommendations by analyzing dynamic preferences and dependencies within user behavior sequences.
Inspired by Mamba, a representative State Space Model (SSM), we find that Mamba's recommendation effectiveness is limited in short interaction sequences.
We propose a new model, MaTrRec, which combines the strengths of Mamba and Transformer.
arXiv Detail & Related papers (2024-07-27T12:07:46Z)
- MambaLRP: Explaining Selective State Space Sequence Models [18.133138020777295]
Recent sequence modeling approaches using selective state space sequence models, referred to as Mamba models, have seen a surge of interest.
These models allow efficient processing of long sequences in linear time and are rapidly being adopted in a wide range of applications such as language modeling.
To foster their reliable use in real-world scenarios, it is crucial to augment their transparency.
arXiv Detail & Related papers (2024-06-11T12:15:47Z)
- Bi-Mamba+: Bidirectional Mamba for Time Series Forecasting [5.166854384000439]
Long-term time series forecasting (LTSF) offers a longer-range view of future trends and patterns.
Recently, a new state space model (SSM) named Mamba was proposed.
With its selective handling of input data and a hardware-aware parallel computing algorithm, Mamba has shown great potential in balancing prediction performance and computational efficiency.
arXiv Detail & Related papers (2024-04-24T09:45:48Z)
- On Generative Agents in Recommendation [58.42840923200071]
Agent4Rec is a user simulator for recommendation based on Large Language Models.
Each agent interacts with personalized recommender models in a page-by-page manner.
arXiv Detail & Related papers (2023-10-16T06:41:16Z)
- Contrastive Self-supervised Sequential Recommendation with Robust Augmentation [101.25762166231904]
Sequential Recommendation describes a set of techniques to model dynamic user behavior in order to predict future interactions in sequential user data.
Old and new issues remain, including data sparsity and noisy data.
We propose Contrastive Self-Supervised Learning for Sequential Recommendation (CoSeRec).
arXiv Detail & Related papers (2021-08-14T07:15:25Z)
- Controllable Multi-Interest Framework for Recommendation [64.30030600415654]
We formalize the recommender system as a sequential recommendation problem.
We propose a novel controllable multi-interest framework for sequential recommendation, called ComiRec.
Our framework has been successfully deployed on the offline Alibaba distributed cloud platform.
arXiv Detail & Related papers (2020-05-19T10:18:43Z)