AutoMLP: Automated MLP for Sequential Recommendations
- URL: http://arxiv.org/abs/2303.06337v1
- Date: Sat, 11 Mar 2023 07:50:49 GMT
- Title: AutoMLP: Automated MLP for Sequential Recommendations
- Authors: Muyang Li, Zijian Zhang, Xiangyu Zhao, Wanyu Wang, Minghao Zhao, Runze Wu, Ruocheng Guo
- Abstract summary: Sequential recommender systems aim to predict the next item a user will be interested in, given their historical interactions.
Existing approaches usually set a pre-defined short-term interest length by exhaustive search or empirical experience.
This paper proposes a novel sequential recommender system, AutoMLP, that aims to better model users' long- and short-term interests.
- Score: 20.73096302505791
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Sequential recommender systems aim to predict the next item a user will
be interested in, given their historical interactions. However, a long-standing
issue is how to distinguish between users' long- and short-term interests, which may
be heterogeneous and contribute differently to the next recommendation. Existing
approaches usually set a pre-defined short-term interest length by exhaustive
search or empirical experience, which is either highly inefficient or yields
subpar results. Recent transformer-based models can achieve state-of-the-art
performance despite this issue, but their computational complexity is quadratic
in the length of the input sequence. To this end, this paper proposes a novel
MLP-based sequential recommender system, AutoMLP, aiming to better model users'
long- and short-term interests from their historical interactions. In addition,
we design an automated and adaptive search algorithm that finds a preferable
short-term interest length via end-to-end optimization. Through extensive
experiments, we show that AutoMLP achieves competitive performance against
state-of-the-art methods while maintaining linear computational complexity.
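A minimal sketch of this idea follows, assuming a toy setup rather than the authors'
actual architecture: candidate short-term window lengths are mixed through learnable
softmax weights, simple MLPs summarize long- and short-term interests, and candidate
items are scored by dot product. All names, shapes, and the fusion rule are
illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): an MLP-based sequential
# recommender whose short-term interest length is chosen end-to-end by
# learning a softmax distribution over candidate window lengths.
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer MLP with ReLU, applied to a single interest vector."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Toy user history: T item embeddings of size d.
T, d, hidden = 20, 32, 64
history = rng.normal(size=(T, d))

# Shared MLP parameters (would normally be separate per branch and trained).
w1, b1 = 0.1 * rng.normal(size=(d, hidden)), np.zeros(hidden)
w2, b2 = 0.1 * rng.normal(size=(hidden, d)), np.zeros(d)

# Long-term interest: MLP over a summary of the full history.
long_term = mlp(history.mean(axis=0), w1, b1, w2, b2)

# Short-term interest: instead of one fixed window, keep a learnable logit per
# candidate window length and mix the windowed summaries with softmax weights,
# so the preferable length can be found by gradient descent.
candidate_lengths = np.array([2, 4, 8, 16])
length_logits = np.zeros(len(candidate_lengths))  # learnable parameters
length_weights = np.exp(length_logits) / np.exp(length_logits).sum()
windowed = np.stack([history[-L:].mean(axis=0) for L in candidate_lengths])
short_term = mlp(length_weights @ windowed, w1, b1, w2, b2)

# Fuse both interests and score 100 hypothetical candidate items.
user_vector = 0.5 * long_term + 0.5 * short_term
candidates = rng.normal(size=(100, d))
scores = candidates @ user_vector
print("top-5 item ids:", np.argsort(-scores)[:5])
```

Because the length logits are ordinary parameters, they could be trained jointly with
the rest of the model, which is the spirit of the end-to-end length search described
in the abstract.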
Related papers
- Bidirectional Gated Mamba for Sequential Recommendation [56.85338055215429]
Mamba, a recent advancement, has exhibited exceptional performance in time series prediction.
We introduce a new framework named Selective Gated Mamba (SIGMA) for Sequential Recommendation.
Our results indicate that SIGMA outperforms current models on five real-world datasets.
arXiv Detail & Related papers (2024-08-21T09:12:59Z) - Long-Span Question-Answering: Automatic Question Generation and QA-System Ranking via Side-by-Side Evaluation [65.16137964758612]
We explore the use of long-context capabilities in large language models to create synthetic reading comprehension data from entire books.
Our objective is to test the capabilities of LLMs to analyze, understand, and reason over problems that require a detailed comprehension of long spans of text.
arXiv Detail & Related papers (2024-05-31T20:15:10Z) - SA-LSPL: Sequence-Aware Long- and Short-Term Preference Learning for next POI recommendation [19.40796508546581]
Point of Interest (POI) recommendation aims to recommend POIs to users at a specific time.
We propose a novel approach called Sequence-Aware Long- and Short-Term Preference Learning (SA-LSPL) for next-POI recommendation.
arXiv Detail & Related papers (2024-03-30T13:40:25Z) - Graph Based Long-Term And Short-Term Interest Model for Click-Through
Rate Prediction [8.679270588565398]
We propose a Graph based Long-term and Short-term interest Model, termed GLSM.
It consists of a multi-interest graph structure for capturing long-term user behavior, a multi-scenario heterogeneous sequence model for modeling short-term information, and an adaptive fusion mechanism to fuse information from long-term and short-term behaviors.
arXiv Detail & Related papers (2023-06-05T07:04:34Z) - IDNP: Interest Dynamics Modeling using Generative Neural Processes for
Sequential Recommendation [40.4445022666304]
We present an Interest Dynamics modeling framework using generative Neural Processes, coined IDNP, to model user interests from a functional perspective.
Our model outperforms state-of-the-art methods on various evaluation metrics.
arXiv Detail & Related papers (2022-08-09T08:33:32Z) - Meta-Wrapper: Differentiable Wrapping Operator for User Interest
Selection in CTR Prediction [97.99938802797377]
Click-through rate (CTR) prediction, whose goal is to predict the probability that a user will click on an item, has become increasingly significant in recommender systems.
Recent deep learning models that automatically extract a user's interest from their behaviors have achieved great success.
We propose a novel approach under the framework of the wrapper method, which is named Meta-Wrapper.
arXiv Detail & Related papers (2022-06-28T03:28:15Z) - Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch by solving a greedy knapsack problem on the fly (a rough packing sketch is given after this list).
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
arXiv Detail & Related papers (2022-02-01T06:52:40Z) - MOI-Mixer: Improving MLP-Mixer with Multi Order Interactions in
Sequential Recommendation [40.20599070308035]
Transformer-based models require memory and time quadratic in the sequence length, making it difficult to extract the long-term interest of users.
MLP-based models, renowned for their linear memory and time complexity, have recently shown competitive results compared to Transformers in various tasks.
We propose the Multi-Order Interaction layer, which is capable of expressing an arbitrary order of interactions while maintaining the memory and time complexity of the MLP layer.
arXiv Detail & Related papers (2021-08-17T08:38:49Z) - Context-aware short-term interest first model for session-based
recommendation [0.0]
We propose a context-aware short-term interest first model (CASIF).
The aim of this paper is to improve the accuracy of recommendations by combining context and short-term interest.
In the end, the short-term and long-term interests are combined into the final interest, which is multiplied by the candidate vector to obtain the recommendation probability.
arXiv Detail & Related papers (2021-03-29T11:36:00Z) - Online Model Selection for Reinforcement Learning with Function
Approximation [50.008542459050155]
We present a meta-algorithm that adapts to the optimal complexity with $\tilde{O}(L^{5/6} T^{2/3})$ regret.
We also show that the meta-algorithm automatically admits significantly improved instance-dependent regret bounds.
arXiv Detail & Related papers (2020-11-19T10:00:54Z) - Sequential Recommender via Time-aware Attentive Memory Network [67.26862011527986]
We propose a temporal gating methodology to improve the attention mechanism and recurrent units.
We also propose a Multi-hop Time-aware Attentive Memory network to integrate long-term and short-term preferences.
Our approach is scalable for candidate retrieval tasks and can be viewed as a non-linear generalization of latent factorization for dot-product based Top-K recommendation.
arXiv Detail & Related papers (2020-05-18T11:29:38Z)
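The greedy-knapsack batching mentioned in the Sequential Search entry above can be
pictured as packing variable-length user sequences into fixed-capacity slots so that
one RNN pass covers several of them. The sketch below uses a first-fit-decreasing
heuristic with a hypothetical capacity; it is an illustrative stand-in, not the
paper's exact procedure.

```python
# Hedged sketch: greedily pack short user sequences into fixed-capacity slots
# so that several sequences can share one RNN pass. The capacity value and the
# first-fit-decreasing rule are assumptions made for the example.
from typing import List

def greedy_pack(seq_lengths: List[int], capacity: int) -> List[List[int]]:
    """Return groups of sequence indices whose total length fits in `capacity`.

    Sequences are taken longest-first and placed into the first slot with
    enough remaining room; a new slot is opened when none fits.
    """
    order = sorted(range(len(seq_lengths)), key=lambda i: -seq_lengths[i])
    slots: List[List[int]] = []
    remaining: List[int] = []
    for i in order:
        length = seq_lengths[i]
        for s, room in enumerate(remaining):
            if length <= room:
                slots[s].append(i)
                remaining[s] -= length
                break
        else:
            slots.append([i])
            remaining.append(capacity - length)
    return slots

# Example: pack user sessions of varying length into RNN passes of length 16.
print(greedy_pack([3, 5, 9, 2, 7, 4, 6], capacity=16))
```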