Memory Augmented Neural Model for Incremental Session-based
Recommendation
- URL: http://arxiv.org/abs/2005.01573v1
- Date: Tue, 28 Apr 2020 19:07:20 GMT
- Title: Memory Augmented Neural Model for Incremental Session-based
Recommendation
- Authors: Fei Mi, Boi Faltings
- Abstract summary: We show that existing neural recommenders can be used in incremental Session-based Recommendation scenarios.
We propose a general framework called Memory Augmented Neural model (MAN).
MAN augments a base neural recommender with a continuously queried and updated nonparametric memory.
- Score: 36.33193124174747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Increasing concerns with privacy have stimulated interest in Session-based
Recommendation (SR) using no personal data other than what is observed in the
current browser session. Existing methods are evaluated in static settings
which rarely occur in real-world applications. To better address the dynamic
nature of SR tasks, we study an incremental SR scenario, where new items and
preferences appear continuously. We show that existing neural recommenders can
be used in incremental SR scenarios with small incremental updates to alleviate
computation overhead and catastrophic forgetting. More importantly, we propose
a general framework called Memory Augmented Neural model (MAN). MAN augments a
base neural recommender with a continuously queried and updated nonparametric
memory, and the predictions from the neural and the memory components are
combined through another lightweight gating network. We empirically show that
MAN is well-suited for the incremental SR task, and it consistently outperforms
state-of-the-art neural and nonparametric methods. We analyze the results and
demonstrate that it is particularly good at incrementally learning preferences
on new and infrequent items.
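To make the described architecture concrete, below is a minimal sketch of how a base neural session recommender, a continuously queried and updated nonparametric memory, and a lightweight gating network could be wired together. This is an illustrative assumption, not the authors' implementation: the GRU encoder, cosine-similarity retrieval, FIFO memory, and sigmoid gate are placeholder design choices.

```python
# Hypothetical sketch of a memory-augmented session recommender (not the paper's code):
# a base neural encoder scores items, a nonparametric memory of recent session
# embeddings is queried by similarity, and a lightweight gate mixes the two distributions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedRecommender(nn.Module):
    def __init__(self, n_items, dim=64, mem_size=5000):
        super().__init__()
        self.item_emb = nn.Embedding(n_items, dim, padding_idx=0)
        self.encoder = nn.GRU(dim, dim, batch_first=True)    # assumed base neural recommender
        self.out = nn.Linear(dim, n_items)                   # parametric prediction head
        self.gate = nn.Linear(2 * dim, 1)                    # lightweight gating network
        # Nonparametric memory: stored session embeddings and the item that followed each one.
        self.register_buffer("mem_keys", torch.zeros(mem_size, dim))
        self.register_buffer("mem_vals", torch.zeros(mem_size, dtype=torch.long))
        self.mem_ptr, self.mem_size, self.n_items = 0, mem_size, n_items

    def encode(self, sessions):
        # sessions: (batch, seq_len) item ids, 0 = padding (assumes left padding,
        # so the last time step corresponds to the true end of the session).
        h, _ = self.encoder(self.item_emb(sessions))
        return h[:, -1, :]

    def memory_predict(self, query, k=50, temp=0.1):
        # Query the memory by cosine similarity and turn the top-k neighbors
        # into a distribution over items (assumed retrieval scheme).
        sims = F.cosine_similarity(query.unsqueeze(1), self.mem_keys.unsqueeze(0), dim=-1)
        top_sims, top_idx = sims.topk(k, dim=-1)
        weights = F.softmax(top_sims / temp, dim=-1)                       # (batch, k)
        probs = torch.zeros(query.size(0), self.n_items, device=query.device)
        probs.scatter_add_(1, self.mem_vals[top_idx], weights)
        return probs

    def forward(self, sessions):
        q = self.encode(sessions)
        p_neural = F.softmax(self.out(q), dim=-1)
        p_memory = self.memory_predict(q)
        # The gate decides, per session, how much weight each component receives.
        mem_ctx = p_memory @ self.item_emb.weight          # soft embedding of the memory prediction
        g = torch.sigmoid(self.gate(torch.cat([q, mem_ctx], dim=-1)))
        return g * p_neural + (1 - g) * p_memory

    @torch.no_grad()
    def update_memory(self, sessions, next_items):
        # Continuously write observed (session embedding, next item) pairs in FIFO order.
        q = self.encode(sessions)
        for emb, item in zip(q, next_items):
            self.mem_keys[self.mem_ptr] = emb
            self.mem_vals[self.mem_ptr] = item
            self.mem_ptr = (self.mem_ptr + 1) % self.mem_size
```

In an incremental setting, one would interleave small parametric updates to the encoder and prediction head with calls to update_memory on newly observed sessions, so that new and infrequent items can be surfaced by the memory component before the parametric model has adapted to them.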
Related papers
- Meta-INR: Efficient Encoding of Volumetric Data via Meta-Learning Implicit Neural Representation [4.782024723712711]
Implicit neural representation (INR) has emerged as a promising solution for encoding volumetric data.
We propose Meta-INR, a pretraining strategy adapted from meta-learning algorithms to learn initial INR parameters from partial observation of a dataset.
We demonstrate that Meta-INR can effectively extract high-quality generalizable features that help encode unseen similar volume data across diverse datasets.
arXiv Detail & Related papers (2025-02-12T21:54:22Z) - Noise-Resilient Symbolic Regression with Dynamic Gating Reinforcement Learning [2.052874815811944]
Symbolic regression has emerged as a pivotal technique for uncovering intrinsic information within data.
Current state-of-the-art (sota) SR methods struggle to perform correct recovery of symbolic expressions from high-noise data.
We introduce a novel noise-resilient SR method capable of recovering expressions from high-noise data.
arXiv Detail & Related papers (2025-01-02T06:05:59Z) - Memory-augmented conformer for improved end-to-end long-form ASR [9.876354589883002]
We propose a memory-augmented neural network between the encoder and decoder of a conformer.
This external memory can enrich the generalization for longer utterances.
We show that the proposed system outperforms the baseline conformer without memory for long utterances.
arXiv Detail & Related papers (2023-09-22T17:44:58Z) - Selective Memory Recursive Least Squares: Recast Forgetting into Memory
in RBF Neural Network Based Real-Time Learning [2.31120983784623]
In radial basis function neural network (RBFNN) based real-time learning tasks, forgetting mechanisms are widely used.
This paper proposes a real-time training method named selective memory recursive least squares (SMRLS) in which the classical forgetting mechanisms are recast into a memory mechanism.
With SMRLS, the input space of the RBFNN is evenly divided into a finite number of partitions and a synthesized objective function is developed using synthesized samples from each partition.
arXiv Detail & Related papers (2022-11-15T05:29:58Z) - Learning Low Dimensional State Spaces with Overparameterized Recurrent
Neural Nets [57.06026574261203]
We provide theoretical evidence for learning low-dimensional state spaces, which can also model long-term memory.
Experiments corroborate our theory, demonstrating extrapolation via learning low-dimensional state spaces with both linear and non-linear RNNs.
arXiv Detail & Related papers (2022-10-25T14:45:15Z) - Deep Impulse Responses: Estimating and Parameterizing Filters with Deep
Networks [76.830358429947]
Impulse response estimation in high noise and in-the-wild settings is a challenging problem.
We propose a novel framework for parameterizing and estimating impulse responses based on recent advances in neural representation learning.
arXiv Detail & Related papers (2022-02-07T18:57:23Z) - Reinforcement Learning based Path Exploration for Sequential Explainable
Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL)
TMER-RL utilizes reinforcement item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolutions on dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z) - PredRNN: A Recurrent Neural Network for Spatiotemporal Predictive
Learning [109.84770951839289]
We present PredRNN, a new recurrent network for learning visual dynamics from historical context.
We show that our approach obtains highly competitive results on three standard datasets.
arXiv Detail & Related papers (2021-03-17T08:28:30Z) - On the Curse of Memory in Recurrent Neural Networks: Approximation and Optimization Analysis [30.75240284934018]
We consider the simple but representative setting of using continuous-time linear RNNs to learn from data generated by linear relationships.
We prove a universal approximation theorem of such linear functionals, and characterize the approximation rate and its relation with memory.
A unifying theme uncovered is the non-trivial effect of memory, a notion that can be made precise in our framework.
arXiv Detail & Related papers (2020-09-16T16:48:28Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture targeting explicitly multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.